
Senior Machine Learning Engineer

Job type: Full Time · Department: Machine Learning Engineer · Work type: On-Site

Palo Alto, California, United States

About Us

At Archetype AI, we’re building the world’s first physical AI platform to bring artificial intelligence into the real world. Our foundation model, Newton, understands the physical world through objective sensor data and generates real-time insights into complex physical behaviors, from industrial machinery and systems to wearable devices and smart environments.

Formed by a high-caliber team from Google and backed by one of Silicon Valley’s most renowned venture funds, Archetype AI is in a pre-Series A phase and rapidly advancing its technology for the next big leap. This is a unique opportunity to join an exciting, fast-growing AI team based in the heart of Silicon Valley.

Our team is headquartered in Palo Alto, California, with team members throughout the US and Europe.

We are actively growing, so if you are an exceptional candidate excited to work on the cutting edge of physical AI and don’t see a role that exactly fits you, you can contact us directly with your resume via jobs<at>archetypeai<dot>io.

Role Overview

We are seeking a Senior Machine Learning Engineer to join our cross-functional team of researchers, engineers, and product developers. In this role, you will be responsible for turning advanced AI models for sensor data into production-ready systems. You’ll own the integration of models into our product platform, ensuring performance, scalability, and reliability with real-time streaming inputs. This role combines deep software engineering expertise with practical ML deployment experience, working closely with researchers, platform engineers, and product teams to deliver high-impact AI capabilities in production environments.

Key Responsibilities

  • Productize Research Models

    • Integrate cutting-edge models into production systems, adapting them to meet platform architecture requirements, API specifications, and deployment constraints.

    • Implement clean interfaces for developers to adjust inference and data stream parameters.

  • ML Pipeline Engineering

    • Design and implement modular, end-to-end ML pipelines for real-time streaming sensor data, engineering for scalability, reliability, and performance under variable data loads.

    • Implement efficient embedding-based components such as dimensionality reduction, similarity search, clustering, and real-time inference on live data streams.

  • CI/CD & Testing

    • Implement integration and regression tests, automated validations, and CI/CD checks to ensure model stability across iterations and deployments.

  • Data Management

    • Build resilient data ingestion and preprocessing pipelines for both recorded and streaming data.

    • Handle noisy, inconsistent real-world signals with fault tolerance and observability in mind.

  • Platform & Product Collaboration

    • Work closely with platform engineers, product leads, and researchers to shape system architecture and product direction.

    • Contribute to tooling, infrastructure, and feature decisions for scalable AI deployment in physical environments.

Qualifications

  • 10+ years of experience in software engineering for ML/AI, with a strong record of deploying models into production.

  • Proven experience productizing models for streaming, real-time data in latency-sensitive environments.

  • Excellent software engineering and architecture skills, with the ability to understand complex systems and write clean, modular, and efficient code.

  • Strong grasp of modern ML/AI architectures, including transformers, embedding models, and foundation models.

  • Proficiency in Python and ML frameworks like PyTorch or TensorFlow.

  • Strong written and verbal communication skills.

  • Ability to thrive in a fast-paced, remote-friendly, and asynchronous startup culture.

Preferred Qualifications

  • Experience with multivariate time series or sensor data.

  • Familiarity with cloud-scale ML infrastructure (e.g., AWS, GCP, Azure, or Kubernetes).

  • Experience with MLOps practices such as data versioning, model lifecycle management, and scalable deployment pipelines.
