Python Data Platform Engineer
Date: Mar 24, 2025
Location: Bangalore, KA, IN
Company: NTT DATA Services
Req ID: 318492
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.
We are currently seeking a Python Data Platform Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Duties
Team Overview
The Controls Engineering, Measurement and Analytics (CEMA) department is responsible for Cyber Risk and Control assessment, management, monitoring, and reporting capabilities across Technology, resulting in risk reduction and better oversight of the firm's technology risk landscape. Our work is always client-focused; our engineers are problem-solvers and innovators. We seek exceptional technologists to help deliver solutions for our user-facing applications, data stores, and reporting and metrics platforms, while being cloud-centric, leveraging multi-tier architectures, and aligning with our DevOps and Agile strategies.
We are in the process of modernizing our technology stack across multiple platforms, with the goal of building scalable, front-to-back assessment, measurement, and monitoring systems using the latest cloud, web, and data technologies. We are looking for someone with a systematic problem-solving approach, coupled with a sense of ownership and drive. The successful candidate will be able to influence and collaborate globally. They should be a strong team player with an entrepreneurial approach, push innovative ideas while appropriately considering risk, and adapt in a fast-paced, changing environment.
Role Summary
As an ETL / Data Engineer, you will be a member of the CEDAR / C3 Data Warehouse team, with a focus on sourcing and storing data from various technology platforms across the firm into a centralized data platform used to build various reporting and analytics solutions for the Technology Risk functions within Morgan Stanley. In this role you will be primarily responsible for the development of data pipelines, database views, and stored procedures, in addition to performing technical data analysis, and monitoring and tuning queries and data loads. You will be working closely with data providers, data analysts, data developers, and data analytics teams to facilitate the implementation of client-specific business requirements and requests.
KEY RESPONSIBILITIES:
• Develop ETLs, stored procedures, triggers, and views on our existing DB2-based Data Warehouse and on our new Snowflake-based Data Warehouse.
• Perform data profiling and technical analysis on source system data to ensure that it can be integrated and represented properly in our models.
• Monitor the performance of queries and data loads and perform tuning as necessary.
• Provide assistance and guidance during the QA and UAT phases to quickly confirm the validity of potential issues and to determine the root cause and best resolution of verified issues.
Minimum Skills Required:
• Bachelor’s degree in Computer Science, Software Engineering, Information Technology, or a related field.
• 5+ years of experience in data development and solutions in highly complex data environments with large data volumes.
• 5+ years of experience developing complex ETLs with Informatica PowerCenter.
• 5+ years of SQL / PL/SQL experience, with the ability to write ad-hoc and complex queries for data analysis.
• 5+ years of experience developing complex stored procedures, triggers, MQTs, and views on IBM DB2.
• Experience with performance tuning DB2 tables, queries, and stored procedures.
• An understanding of E-R data models (conceptual, logical, and physical).
• Strong understanding of advanced data warehouse concepts (Factless Fact Tables, Temporal / Bi-Temporal models, etc.).
• Experience with Python a plus.
• Experience with developing data transformations using DBT a plus.
• Experience with Snowflake a plus.
• Experience with Airflow a plus.
• Experience with using Spark (PySpark) for data loading and complex transformations a plus.
• Strong analytical skills, including a thorough understanding of how to interpret customer business requirements and translate them into technical designs and solutions.
• Strong communication skills, both verbal and written. Capable of collaborating with global teams.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Job Segment:
Data Warehouse, Cloud, Computer Science, Consulting, Database, Technology