This role is for one of Weekday's clients
Salary range: Rs 30,00,000 – Rs 40,00,000 (i.e., INR 30–40 LPA)
Min Experience: 5 years
Location: Bengaluru
JobType: full-time

We are seeking an experienced Data Engineer to design, build, and maintain robust data pipelines and architectures that power analytics, interoperability, and data-driven decision-making across our healthcare ecosystem. The ideal candidate will have a strong background in Azure data services, ETL development, Databricks, and hands-on experience with FHIR (Fast Healthcare Interoperability Resources) standards. This role requires a mix of technical proficiency, problem-solving ability, and a deep understanding of healthcare data integration.
Requirements
Key Responsibilities
- Data Pipeline Development: Design, develop, and optimize scalable ETL/ELT workflows to extract, transform, and load structured and unstructured healthcare data into Azure-based data platforms (e.g., Azure Data Lake, Azure Synapse, Databricks).
- FHIR Data Integration: Implement and manage FHIR-based data models and APIs for healthcare data interoperability. Ensure compliance with healthcare data standards and regulatory frameworks.
- Data Architecture & Modeling: Develop logical and physical data models to support data warehousing, reporting, and analytics needs. Ensure data accuracy, integrity, and security across all stages of the data lifecycle.
- Databricks Development: Build and manage scalable data processing solutions using Azure Databricks. Optimize performance of large-scale data transformations using PySpark, SQL, or Scala.
- Data Quality & Governance: Establish and maintain data quality frameworks, validation processes, and governance standards. Ensure consistent and compliant data handling across systems.
- Collaboration & Stakeholder Management: Partner with data scientists, analysts, architects, and business teams to deliver reliable and accessible data assets. Translate business requirements into technical solutions.
- Performance Optimization: Monitor, troubleshoot, and enhance data pipelines for performance, scalability, and cost efficiency in the Azure environment.
- Documentation & Best Practices: Maintain detailed documentation of architecture, data flows, and processes. Contribute to continuous improvement by implementing data engineering best practices and automation.
Required Skills & Qualifications
- Education: Bachelor’s or Master’s degree in Computer Science, Information Technology, Data Engineering, or a related field.
- Technical Skills:
- Strong expertise in Azure Data Services – Azure Data Factory, Azure Synapse, Azure Data Lake, and Azure SQL.
- Hands-on experience with Databricks for data processing, transformation, and analytics.
- Proven ability to design and maintain ETL/ELT pipelines using modern data engineering tools and frameworks.
- Proficiency in SQL, Python, or Scala for data manipulation and automation.
- Solid understanding of FHIR standards, data schemas, and API integration for healthcare data exchange.
- Experience with data modeling, metadata management, and data governance frameworks.
- Familiarity with CI/CD, Git, and DevOps practices in data engineering environments.
- Experience: 5–10 years of professional experience in data engineering, with at least 2 years of exposure to healthcare or interoperability projects leveraging FHIR.
- Soft Skills:
- Strong analytical and problem-solving abilities.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced, agile environment with cross-functional teams.
Preferred Qualifications
- Experience with HL7, CDA, or other healthcare interoperability standards.
- Exposure to machine learning data pipelines or advanced analytics environments.
- Azure certifications (e.g., Azure Data Engineer Associate) are a plus.