Data Engineer - Streaming (WkStream 2 - Kafka)

Apply now »

Date: Apr 1, 2026

Location: Bangalore, KA, IN

Company: NTT DATA Services

Req ID: 364926 

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now.

We are currently seeking a Data Engineer - Streaming (WkStream 2 - Kafka) to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Job Duties: NTT DATA is seeking a Streaming Integration Engineer to own the two streaming ingestion workstreams of the PNC Bank Hadoop-to-Iceberg POC. This role is responsible for designing and delivering production-grade PySpark Structured Streaming pipelines that ingest data into Apache Iceberg tables under specific technical constraints. Workstream 2 requires building a Confluent Kafka-to-Iceberg ingestion application using only Apache-supported APIs, as PNC will not permit use of the unsupported Confluent Iceberg Sink Connector. Workstream 3 requires delivering a syslog-ng-to-Iceberg batch ingestion pipeline via rolling log files, since syslog-ng has no native Iceberg sink.

 

The engineer will work closely with GitHub Copilot to scaffold, iterate, test, and document the streaming application code — acting as the technical reviewer and subject matter expert who ensures AI-generated pipelines are production-ready, PNC-compliant, and correctly integrated with the Iceberg catalog and Protegrity tokenization layer. 

Workstream 2 – Confluent Kafka to Iceberg 

Design and implement a PySpark Structured Streaming application that reads from Confluent Kafka topics, parses JSON and Avro payloads, applies schema mappings, and writes atomically to Iceberg tables using the Iceberg Spark runtime and foreachBatch micro-batch pattern 

Ensure all functionality relies exclusively on public Apache-supported APIs — Apache Spark, Apache Kafka, and Apache Iceberg — with no unsupported Confluent connectors or proprietary sinks 

Configure Kafka source parameters: bootstrap servers, consumer group IDs, offset management (startingOffsets, failOnDataLoss), checkpoint paths, and trigger intervals 

Implement PII detection and Protegrity tokenization hooks within the ingestion pipeline before data lands in the Iceberg Bronze layer 

Write comprehensive unit and integration tests: row count validation, schema conformance checks, Kafka offset commit verification, and data comparison against the source topic 

Support PNC UAT — walk PNC engineers through the code, demonstrate no unsupported connectors are used, and address review findings

Minimum Skills Required: Apache Kafka – Producer & Consumer 

4+ years of hands-on experience with Apache Kafka, including both producer and consumer development in PySpark, Java, or Scala 

Deep understanding of Kafka internals: topics, partitions, consumer groups, offsets, rebalancing, and exactly-once delivery semantics 

Experience with Confluent Kafka: schema registry, Avro/JSON serialization, and Confluent Cloud or on-prem cluster configuration 

Proven ability to build ingestion pipelines without relying on unsupported or third-party sink connectors — using only native Kafka consumer APIs and Spark integration 

Familiarity with Kafka Connect architecture to evaluate trade-offs and articulate why application-level ingestion is preferred in constrained environments 

 

PySpark Structured Streaming 

Strong practical experience with PySpark Structured Streaming: Kafka source, file source, foreachBatch, output modes (append/update/complete), and checkpoint management 

Experience tuning streaming micro-batch trigger intervals, watermarking, and late data handling for production workloads 

Hands-on experience writing streaming data directly to Apache Iceberg tables using the Iceberg Spark runtime 

Ability to implement robust error handling: dead-letter queues, parse error isolation, and recovery from checkpoint failures 
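The parse-error isolation mentioned above reduces to routing bad records aside instead of failing a whole micro-batch. A plain-Python sketch of that dead-letter split (the function name and record format are illustrative assumptions):

```python
import json

def split_batch(raw_lines):
    """Route unparseable records to a dead-letter list instead of
    failing the micro-batch; good rows continue to the Bronze write."""
    good, dead = [], []
    for line in raw_lines:
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            dead.append(line)  # candidate for a dead-letter topic/table
    return good, dead
```

In a Spark pipeline the same idea is typically expressed with a parse column plus a filter, writing the failures to a dead-letter Iceberg table or Kafka topic for replay.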


Data Engineering & Iceberg 

Working knowledge of Apache Iceberg: catalog configuration, schema definition, append writes, and partition strategy for event and log data 

Familiarity with S3-compatible object storage as an Iceberg warehouse destination 

Understanding of medallion architecture — ability to correctly land streaming data in the Bronze layer with appropriate schema governance
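For context on the catalog configuration referenced above, a Hadoop-type Iceberg catalog backed by S3-compatible storage is wired into Spark with a handful of settings. The catalog name and warehouse URI below are placeholder assumptions; a REST or Hive catalog would use a different `type` and extra properties.

```python
# Hypothetical catalog name ("lakehouse") and warehouse bucket; exact
# values depend on the target environment and catalog implementation.
ICEBERG_SPARK_CONF = {
    "spark.sql.extensions":
        "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions",
    "spark.sql.catalog.lakehouse": "org.apache.iceberg.spark.SparkCatalog",
    "spark.sql.catalog.lakehouse.type": "hadoop",
    "spark.sql.catalog.lakehouse.warehouse": "s3a://warehouse/iceberg",
}
```

With this in place, Bronze-layer tables are addressable as `lakehouse.bronze.<table>` from both batch and streaming writers.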

About NTT DATA

NTT DATA is a $30 billion business and technology services leader, serving 75% of the Fortune Global 100. We are committed to accelerating client success and positively impacting society through responsible innovation. We are one of the world's leading AI and digital infrastructure providers, with unmatched capabilities in enterprise-scale AI, cloud, security, connectivity, data centers and application services. Our consulting and industry solutions help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have experts in more than 50 countries. We also offer clients access to a robust ecosystem of innovation centers as well as established and start-up partners. NTT DATA is a part of NTT Group, which invests over $3 billion each year in R&D.

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client’s needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements. For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees. NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form, https://us.nttdata.com/en/contact-us.

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.


Job Segment: Developer, Java, Consulting, Database, Technology
