Embedded Data Scientist, Chanakya
Job type: Full Time · Department: Engineering · Work type: On-Site
Locations: Delhi Division, Delhi, India · Bengaluru, Karnataka, India
Sarvam is building the bedrock of Sovereign AI for India. The company is developing India's full-stack sovereign AI platform, building across research, models, infrastructure, and applications with a singular focus on making AI genuinely work for India. Sarvam is backed by Lightspeed, Peak XV, and Khosla Ventures, and works with leading enterprises and public institutions, partnering with India's leading brands, including Tata Capital, SBI Life, CRED, IDFC, and LIC.
Embedded Data Scientists transform complex client data into structures that AI systems can reliably reason over. You are deployed alongside Strategic Deployment Engineers at client sites, working directly with client data environments to understand, structure, and operationalise large-scale datasets.
This means working with heterogeneous, multimodal data — including documents, images, audio, geospatial data, and structured records — and designing the semantic structures that allow AI systems to interpret and reason over that data.
You will define how data is represented inside the AI system: how documents are segmented, how metadata is defined, how entities and relationships are represented, and how different data modalities connect. You will design ontologies, tagging systems, and knowledge graph structures that allow the reasoning engine to operate effectively.
You will often work with classified or operationally sensitive datasets in environments where standard tooling may not exist. You will own the quality of the data layer in your assigned accounts, ensuring the system is built on a foundation that enables reliable reasoning at scale.
What you will do:
- Understand the client's data landscape across documents, imagery, audio, geospatial data, and structured records, including data sources, formats, workflows, and domain terminology
- Design domain ontologies representing entities, relationships, hierarchies, and operational concepts within the client's data environment
- Define document segmentation and chunking strategies that preserve semantic meaning and support effective retrieval
- Work with heterogeneous datasets and define how different modalities should be indexed, embedded, and linked
- Collaborate with Strategic Deployment Engineers to translate semantic structures into operational data ingestion pipelines
- Evaluate how well the AI system retrieves and reasons over client data, and refine structures to improve performance
- Collaborate with the models team and other teams to define benchmarks and evaluation criteria that reflect real-world deployment conditions
- Translate insights from client data environments into structured signals for product and engineering teams
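To make the segmentation responsibility above concrete, here is a minimal, hypothetical sketch of a chunking strategy that preserves semantic meaning: paragraphs are never split mid-thought, and are packed into chunks up to a size budget. The function name and the 500-character budget are illustrative assumptions, not Sarvam tooling.

```python
def chunk_by_paragraph(text: str, max_chars: int = 500) -> list[str]:
    """Pack whole paragraphs into chunks of at most max_chars characters.

    Illustrative sketch only: splits on blank lines so no chunk ever
    cuts a paragraph in half, a simple way to preserve semantic units
    for downstream retrieval.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks: list[str] = []
    current = ""
    for para in paragraphs:
        # Start a new chunk if adding this paragraph would exceed the budget.
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

A production chunker would also consider headings, tables, and token budgets of the embedding model, but the same boundary-preserving principle applies.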
What we're looking for:
- 2–5 years in data science, applied machine learning, or large-scale data analysis roles
- Strong Python skills, including pandas, NumPy, and modern NLP or LLM tooling
- Solid grounding in ML fundamentals: enough to understand model behaviour, contribute to evaluation design, and collaborate with a models team on training and benchmarking
- Experience working with large unstructured datasets, including documents, transcripts, reports, or operational records
- Familiarity with LLM-based systems, retrieval pipelines, or vector search systems
- Experience designing or working with data schemas, metadata frameworks, entity models, or semantic data structures
You'll thrive in this role if:
- You've worked with real-world, messy, unstructured data and built something rigorous from it
- You are comfortable designing structure where none exists, defining schemas, ontologies, and metadata frameworks from scratch
- You can translate complex data insights into explanations that engineers and client stakeholders can act on
- You are comfortable operating with autonomy in client environments; you don't need a data team around you to do rigorous work
- You move fluently between domain understanding, data modelling, and AI system design
- You move between the technical and the operational: you understand what the data means in the context of what operators actually do with it
Nice to have:
- Familiarity with knowledge graphs, ontologies, or semantic data modelling
- Experience with multimodal datasets (text, imagery, audio, geospatial, or structured data)
- Experience operating in constrained or air-gapped environments
Sarvam is a fast-moving, high talent-density team building full-stack AI for India, working on problems that push the frontiers of AI with real population-scale impact.
- Work alongside researchers, engineers, builders, and business leaders who move fast and hold each other to a very high bar
- High ownership and high impact, from day one
- Everything we do is AI-first, from the way we build and ship to the way we think about problems
- You can work on problems that could change how an entire country learns, works, and communicates
If you want to work on problems at the frontier of AI in India, Sarvam is the place to be.