
Data Engineer

Remote: Full Remote
Contract: 
Work from: Fully flexible

Offer summary

Qualifications:

  • 2-3 years of experience in data engineering with a focus on scalable data pipelines
  • Proficiency in Spark, Python, and SQL for data processing
  • Hands-on experience with Kubernetes and AWS services such as S3 and Lambda
  • Strong problem-solving skills and ability to work collaboratively in a team

Key responsibilities:

  • Build and maintain scalable and reliable data pipelines for data enrichment workflows.
  • Integrate enriched data into broader data systems while ensuring quality and alignment with business needs.
  • Contribute to code reviews and follow development workflows for quality assurance.
  • Collaborate with team members to address challenges and escalate issues as needed.

H1 Scaleup https://h1.co/
201 - 500 Employees

Job description

At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally, promoting health equity and building needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into actions that result in optimal patient outcomes and accelerate an equitable, inclusive drug development lifecycle. Visit h1.co to learn more about us.

WHAT YOU'LL DO AT H1

As a Software Engineer on the Insights Engineering team, you will play a key role in building scalable data pipelines and enrichment workflows that transform raw client data into accurate, actionable insights. With minimal guidance, you will focus on data ingestion and enrichment, ensuring seamless integration of client data from diverse sources (CSV, Parquet, JSON, APIs) while tackling challenges related to scalability, data quality, and standardization.

You will:
- Build and maintain scalable and reliable data pipelines that support the team’s enrichment workflows.
- Integrate enriched data from core platforms into broader data systems, applying necessary transformations and aligning with business requirements.
- Contribute to code reviews, holding a high bar for quality and aligning with organizational engineering guidelines.
- Follow development workflows, including coding, testing, deployment, and monitoring, to ensure quality and efficiency.
- Work collaboratively with team members and escalate issues appropriately when challenges arise.
- Contribute to the understanding and execution of tasks with a strong focus on accuracy, scalability, and performance.


ABOUT YOU

You have strong hands-on technical skills and experience in data engineering, with a track record of building and maintaining scalable data systems and pipelines. You excel at solving data engineering challenges and contributing to innovative solutions.

- Experience developing and optimizing data workflows, applying business logic for data enrichment, and addressing technical challenges with creative solutions
- Strong knowledge of building and scaling data infrastructure, including integration with core platforms
- Experience working with data quality challenges and implementing validation mechanisms
- Self-motivated with the ability to manage tasks and collaborate effectively within a team
- Ability to align work with broader organizational goals and contribute to strategic initiatives
- Proactively identifies potential risks and helps implement solutions early in the project lifecycle
- Eager to learn, grow, and contribute to a collaborative, high-performing engineering team

REQUIREMENTS

- 2-3 years of experience in data engineering, specializing in building scalable data pipelines and enrichment processes, with a track record of working with large datasets, including ingestion, transformation, and optimization
- Proficiency in Spark, Python, and SQL for building scalable data processing pipelines
- Hands-on experience with Kubernetes for container orchestration and deployment
- Strong background in AWS, including services such as S3, Lambda, ECS, and RDS for data infrastructure
- Experience with EMR and Databricks to optimize large-scale data workflows

Good to have:
- Understanding of optimizing LLM usage in production, with experience integrating LLMs into real-world applications and applying LLM-powered insights within data pipelines or customer-facing solutions


Not meeting all the requirements but still feel like you’d be a great fit? Tell us how you can contribute to our team in a cover letter! 
H1 OFFERS
- Full suite of health insurance options, in addition to generous paid time off
- Pre-planned company-wide wellness holidays
- Retirement options
- Health & charitable donation stipends
- Impactful Business Resource Groups
- Flexible work hours & the opportunity to work from anywhere
- The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Collaboration
  • Self-Motivation
  • Problem Solving
