
Embedded Infrastructure Engineer, Chanakya

Job type: Full Time · Department: Engineering · Work type: On-Site

Delhi Division, Delhi, India; Bengaluru, Karnataka, India

About Sarvam

Sarvam is building the bedrock of Sovereign AI for India: a full-stack sovereign AI platform spanning research, models, infrastructure, and applications, with a singular focus on making AI genuinely work for India. Backed by Lightspeed, Peak XV, and Khosla Ventures, Sarvam works with leading enterprises and public institutions, and partners with India's leading brands, including Tata Capital, SBI Life, CRED, IDFC, and LIC.

 

About the Role

Embedded Infrastructure Engineers design, build, and maintain the data infrastructure that underpins AI system deployments at client sites. You will work alongside Embedded Data Scientists and Strategic Deployment Engineers to ensure that terabyte-scale datasets can be ingested, stored, queried, and served to AI reasoning engines with high reliability and performance.

This means building and operating data platforms that handle terabyte-scale persistent data stores and sustained large daily ingestion volumes across structured records, documents, imagery, audio, and geospatial data. You will design and maintain the databases, object stores, ingestion pipelines, and processing layers that make this possible.

You will make critical decisions about storage architecture, indexing strategies, pipeline orchestration, and system performance — often in constrained, air-gapped, or operationally sensitive environments where you cannot rely on managed cloud services or standard enterprise tooling. You will own the reliability and performance of the infrastructure layer in your assigned accounts.

 

What You'll Do

•  Design and operate data storage architectures (relational, document, vector, object storage) capable of managing terabyte-scale datasets across multiple modalities

•  Build and maintain ingestion pipelines that reliably process daily data influx — including batch and streaming workloads — with monitoring, error handling, and backpressure management

•  Implement indexing, partitioning, and query optimisation strategies that allow AI systems and data scientists to retrieve and reason over large datasets with acceptable latency

•  Work with Embedded Data Scientists to translate ontologies, schemas, and semantic structures into performant physical data models and storage configurations

•  Deploy and manage database systems, vector stores, and search infrastructure in air-gapped, on-premise, or security-constrained environments

•  Build observability into the data platform: monitor pipeline health, storage utilisation, query performance, and ingestion lag

•  Own capacity planning and scaling decisions for data infrastructure across assigned client deployments

•  Collaborate with product and engineering teams to feed infrastructure learnings back into the core platform and tooling

 

What We're Looking For

•  4–8 years in data infrastructure, data engineering, platform engineering, or site reliability engineering, ideally at organisations operating at significant data scale

•  Direct experience managing multi-terabyte data stores — you have personally built or operated systems handling 10TB+ of persistent data and sustained high-throughput ingestion

•  Deep working knowledge of at least two of: PostgreSQL, MongoDB, Elasticsearch, ClickHouse, or comparable systems — including tuning, indexing, partitioning, and operational management

•  Experience building production data ingestion pipelines using Apache Kafka, Apache Spark, Airflow, Flink, dbt, or equivalent frameworks

•  Strong proficiency in Python and/or Go, with experience writing production infrastructure tooling and automation

•  Solid understanding of storage systems and formats: object storage (S3/MinIO), columnar formats (Parquet, ORC), and how to choose the right storage layer for the workload

•  Familiarity with containerisation and orchestration (Docker, Kubernetes) in production settings

•  Experience with infrastructure-as-code and deployment automation (Terraform or similar)

 

Bonus Points

•  Experience with vector databases or embedding stores (Milvus, Weaviate, Qdrant, pgvector, or similar)

•  Experience deploying and operating infrastructure in air-gapped, on-premise, or hybrid environments

 

Note: We are looking for people who can own the outcomes described here, not people who match every line of this specification. If this problem excites you and you believe you can do this work, we want to hear from you.

 

Why Sarvam?

Sarvam is a fast-moving, high-talent-density team building full-stack AI for India, working on problems that push the frontiers of AI with real population-scale impact.

•  Work alongside researchers, engineers, builders, and business leaders who move fast and hold each other to a very high bar

•  High ownership and high impact, from day one

•  Everything we do is AI-first, from the way we build and ship to the way we think about problems

•  You can work on problems that could change how an entire country learns, works, and communicates

 

If you want to work on problems at the frontier of AI in India, Sarvam is the place to be.

 
