
Sr. Data Engineer - Azure (Fabric)

Job type: Full Time · Department: Tech · Work type: On-Site

Ahmedabad, Gujarat, India

Experience: 4+ years

Department: Data Engineering

About Simform:

Simform is a premier digital engineering company specializing in Cloud, Data, AI/ML, and Experience Engineering to create seamless digital experiences and scalable products. A strong partner of Microsoft, Google Cloud, and Databricks, Simform has a presence in 5+ countries and primarily serves North America, the UK, and Northern Europe. We take pride in being one of the most reputed employers in the region, having built a thriving work culture with strong work-life balance that gives people a sense of freedom and room to grow.

 

Role Overview:

We are looking for a Senior Data Engineer (Azure) who is passionate about managing large-scale data, eager to take on challenges, and committed to delivering exceptional results. This role requires expertise in Azure data services, ETL pipelines, and database management. If you are a proactive problem solver with a strong technical background and a team player with a positive attitude, we want to hear from you.

 

Key Responsibilities:

  • Design, develop, monitor, and maintain end-to-end data pipelines on Microsoft Fabric.

  • Work with Microsoft Fabric services for ingestion, transformation, modeling, analytics, and real-time processing, such as:

    • Fabric Pipelines (ETL/ELT)

    • Fabric Dataflows Gen2

    • Fabric Lakehouse (Bronze/Silver/Gold medallion architecture)

    • Fabric Warehouse (SQL analytics engine)

    • OneLake (unified storage)

    • Fabric Notebooks (PySpark / Spark SQL)

    • Direct Lake mode for Power BI

    • Fabric KQL Database / Eventstream / Real-Time Hub

    • Fabric Shortcuts (cross-workspace and multi-cloud virtualization)

    • Semantic Models

    • Data Activator (alerting / action triggers)

  • Develop and optimize data ingestion, transformation, and analytical workflows for structured and unstructured data.

  • Design efficient data models to ensure optimal performance in data processing and storage.

  • Implement large-scale data ingestion pipelines capable of handling TBs of data.

  • Build distributed batch or real-time data solutions using Apache Kafka, Apache Airflow, Databricks, or Fabric Real-Time Hub components.

  • Build and maintain ETL/ELT pipelines for data integration and migration.

  • Work extensively with relational databases (PostgreSQL, MySQL, SQL Server) and NoSQL databases (MongoDB, CosmosDB, Cassandra, GraphDB).

  • Optimize SQL queries, performance tuning, indexing, partitioning, and denormalization strategies.

  • Collaborate with cross-functional teams, including data scientists and software engineers, to integrate data solutions into production environments.

  • Ensure data quality, integrity, governance, lineage, and compliance with security best practices.

  • Participate in client interactions and stakeholder meetings to gather requirements and provide technical insights.

Required Skills & Qualifications:

  • Bachelor’s/Master’s degree in Computer Science, Data Engineering, or a related field.

  • 4+ years of hands-on experience in data engineering, ETL development, and data pipeline implementation.

  • Proficiency in Python and SQL for data processing and analysis.

  • Strong expertise in Microsoft Fabric (Pipelines, Dataflows Gen2, Lakehouse, Warehouse, OneLake, Notebooks, Direct Lake, KQL Database, Real-Time Hub, Eventstream, Shortcuts, Semantic Models).

  • Experience with big data frameworks like Spark, Kafka, and Airflow.

  • Working knowledge of cloud data warehouse solutions such as Synapse, Snowflake, Redshift, or BigQuery.

  • Experience with NoSQL databases (MongoDB, CosmosDB, Cassandra, GraphDB) and relational databases.

  • Strong analytical and problem-solving skills.

  • Ability to work independently, mentor peers, and meet tight deadlines.

  • Excellent interpersonal and communication skills.

  • Experience with cloud-based data architecture and security best practices.

  • Fabric certifications are a plus (DP-600 Fabric Analytics Engineer Associate, DP-700 Fabric Data Engineer Associate).

Preferred Qualifications (Nice to Have):

  • Experience with data lake architectures.

  • Knowledge of machine learning model deployment and data processing for AI applications.

  • Prior experience in automated data pipeline deployment using CI/CD workflows.

Why Join Us:

  • A young team with a flat, friendly, engineering-oriented, and growth-focused culture.

  • Well-balanced learning and growth opportunities.

  • Free health insurance.

  • Office facilities with a game zone, an in-office kitchen with affordable lunch service, and free snacks.

  • Sponsorship for certifications/events and library service.

  • Flexible work timing, leaves for life events, WFH, and hybrid options.