Senior Data Engineer

Full Time · Engineering · On-Site

Bangalore Division, Karnataka, India

Role Overview

We are looking for a Senior Data Engineer to lead the design and development of a next-generation data platform that powers real-time and batch applications at scale. This platform supports business logic, personalization, user cohorting, CRM, product analytics, and Generative AI workloads.

You’ll work cross-functionally with analysts, engineers, and product teams to build high-throughput data pipelines, ensure reliability and cost-efficiency, and empower data-driven decisions across all touchpoints: web, mobile (Android/iOS), and server.

Key Responsibilities

Data Platform & Pipeline Development

  • Build and scale real-time and batch data pipelines capable of supporting high-volume data ingestion and processing (a minimal micro-batching sketch follows this list).

  • Power diverse applications, including business logic and personalization, user cohorting and CRM, product and business analytics, and Generative AI workloads (data retrieval, prompt engineering, and context feeding).
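
To make the real-time versus batch split concrete, here is a minimal, self-contained Go sketch of a micro-batching stage: events arrive on a channel and are flushed downstream either when a batch fills or a timer fires. The Event type, batch size, and intervals are illustrative assumptions, not a description of our actual stack.

    package main

    import (
        "fmt"
        "time"
    )

    // Event is a hypothetical ingest record; real payloads would be richer.
    type Event struct {
        UserID string
        Name   string
        TS     time.Time
    }

    // batchEvents drains the input channel into fixed-size batches, flushing
    // early when maxWait elapses so latency stays bounded under low traffic.
    func batchEvents(in <-chan Event, maxSize int, maxWait time.Duration, flush func([]Event)) {
        batch := make([]Event, 0, maxSize)
        timer := time.NewTimer(maxWait)
        defer timer.Stop()
        for {
            select {
            case ev, ok := <-in:
                if !ok {
                    if len(batch) > 0 {
                        flush(batch) // drain the remainder on shutdown
                    }
                    return
                }
                batch = append(batch, ev)
                if len(batch) >= maxSize {
                    flush(batch)
                    batch = make([]Event, 0, maxSize)
                    timer.Reset(maxWait)
                }
            case <-timer.C:
                if len(batch) > 0 {
                    flush(batch)
                    batch = make([]Event, 0, maxSize)
                }
                timer.Reset(maxWait)
            }
        }
    }

    func main() {
        in := make(chan Event)
        go func() {
            for i := 0; i < 10; i++ {
                in <- Event{UserID: fmt.Sprintf("u%d", i), Name: "page_view", TS: time.Now()}
            }
            close(in)
        }()
        batchEvents(in, 4, 500*time.Millisecond, func(b []Event) {
            fmt.Printf("flushed %d events\n", len(b))
        })
    }

Flushing on either size or time is the standard micro-batching trade-off: size bounds throughput cost per write, the timer bounds end-to-end latency.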

Analytics Enablement Across Platforms

  • Partner with product analysts to co-design analytics schemas that align across all client platforms.

  • Enable consistent event tracking and unified user behavior analysis; a sketch of a shared event envelope follows this list.

  • Ensure availability of enriched and aggregated datasets that can be plugged directly into product and business workflows.
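
As one illustration of what a cross-platform schema can look like (the field names here are assumptions, not our actual contract), a shared event envelope keeps web, Android, iOS, and server emissions joinable:

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    // AnalyticsEvent is a hypothetical shared envelope: every client platform
    // emits the same top-level fields, and platform-specific details go into
    // Properties. This is what keeps cross-platform funnels comparable.
    type AnalyticsEvent struct {
        EventName  string            `json:"event_name"` // e.g. "checkout_started"
        UserID     string            `json:"user_id"`
        SessionID  string            `json:"session_id"`
        Platform   string            `json:"platform"`    // "web" | "android" | "ios" | "server"
        OccurredAt time.Time         `json:"occurred_at"` // client event time, UTC
        Properties map[string]string `json:"properties,omitempty"`
    }

    func main() {
        ev := AnalyticsEvent{
            EventName:  "checkout_started",
            UserID:     "u42",
            SessionID:  "s-9001",
            Platform:   "android",
            OccurredAt: time.Now().UTC(),
            Properties: map[string]string{"cart_value": "1299"},
        }
        out, _ := json.MarshalIndent(ev, "", "  ")
        fmt.Println(string(out))
    }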

Lifecycle Management & Platform Efficiency

  • Own the entire lifecycle of the data platform — from design and provisioning to resource decommissioning.

  • Implement systems for:

    • Dynamic resource allocation (compute, storage, streaming)

    • Tiered data retention and archival (sketched after this list)

    • Usage-based cost governance and alerting

  • Develop tooling, playbooks, and dashboards for observability, efficiency, and compliance.
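
The tiered-retention idea above, sketched in Go. The tier names and age cutoffs are illustrative assumptions; a real system would also weigh access frequency and compliance holds:

    package main

    import (
        "fmt"
        "time"
    )

    // Tier is a hypothetical storage tier; cutoffs would normally come from config.
    type Tier string

    const (
        Hot     Tier = "hot"     // recent data on fast storage
        Warm    Tier = "warm"    // older data on cheaper storage
        Archive Tier = "archive" // cold data, compressed and rarely read
    )

    // tierFor maps a dataset partition's age onto a retention tier.
    func tierFor(age time.Duration) Tier {
        switch {
        case age < 30*24*time.Hour:
            return Hot
        case age < 180*24*time.Hour:
            return Warm
        default:
            return Archive
        }
    }

    func main() {
        ages := []time.Duration{
            7 * 24 * time.Hour,
            90 * 24 * time.Hour,
            400 * 24 * time.Hour,
        }
        for _, age := range ages {
            fmt.Printf("partition aged %3.0f days -> %s\n", age.Hours()/24, tierFor(age))
        }
    }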

Monitoring, Governance & Reliability

  • Build proactive monitoring and alerting for data freshness, volume anomalies, and schema drift (a freshness-check sketch follows this list).

  • Conduct root cause analyses (RCA) for failures or inconsistencies across systems.

  • Define and enforce role-based access control (RBAC) and audit policies for secure data operations.

  • Maintain thorough documentation of data flows, architecture, and contracts.
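
One way to picture the freshness checks above is a small Go sketch that compares each table's last-loaded watermark against an SLA. The table names, SLAs, and alert sink are all placeholders:

    package main

    import (
        "fmt"
        "time"
    )

    // freshnessCheck represents one monitored dataset: where its latest
    // watermark sits and how stale it is allowed to get before we alert.
    type freshnessCheck struct {
        Table    string
        LastLoad time.Time
        SLA      time.Duration
    }

    // staleTables returns every table whose watermark has fallen behind its
    // SLA. In production the watermarks would come from pipeline metadata,
    // and the result would feed an alerting system rather than stdout.
    func staleTables(checks []freshnessCheck, now time.Time) []string {
        var stale []string
        for _, c := range checks {
            if now.Sub(c.LastLoad) > c.SLA {
                stale = append(stale, fmt.Sprintf("%s is %s behind (SLA %s)",
                    c.Table, now.Sub(c.LastLoad).Round(time.Minute), c.SLA))
            }
        }
        return stale
    }

    func main() {
        now := time.Now()
        checks := []freshnessCheck{
            {Table: "events_raw", LastLoad: now.Add(-10 * time.Minute), SLA: 15 * time.Minute},
            {Table: "user_cohorts", LastLoad: now.Add(-3 * time.Hour), SLA: 1 * time.Hour},
        }
        for _, msg := range staleTables(checks, now) {
            fmt.Println("ALERT:", msg)
        }
    }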

Must-Have Qualifications

  • Experience building cloud-native data solutions on AWS (RDS, Redshift, Athena, Kinesis, Lambda, S3) and GCP (BigQuery, Dataflow, Datastream).

  • Proficiency in SQL, NoSQL, and data modeling for both OLTP and OLAP systems.

  • Strong backend engineering skills using Golang (preferred) or other statically typed languages.

  • Hands-on experience with event streaming and messaging platforms such as Kafka, Kinesis, or RabbitMQ.

  • Deep understanding of data warehousing, pipeline orchestration, and cloud architecture.

  • Demonstrated ability to implement secure, auditable, and scalable data governance frameworks.

Good-to-Have Skills

  • Scripting expertise in Bash or Python for automation.

  • Familiarity with Apache Spark, Flink, or other big data processing engines.

  • Experience with CI/CD, Infrastructure-as-Code (Terraform, CloudFormation), and deployment automation.

  • Experience designing datasets and platforms that support both BI and machine learning workloads.