Lead Data Engineer (Romania / Ukraine)

Remote: Full Remote
Salary: $84 - 90K yearly
Experience: Senior (5-10 years)

Offer summary

Qualifications:

  • 5-7 years of data engineering experience
  • Proficiency in Python and SQL
  • Experience with Databricks and ETL processes
  • Knowledge of AWS or GCP cloud platforms
  • Familiarity with visualization tools like Sisense

Key responsibilities:

  • Design a scalable data lake and data pipelines
  • Drive system design and inter-team alignment
  • Work across ETL, warehousing, and cloud systems
  • Define data management and governance best practices
  • Mentor engineers and communicate strategies to stakeholders
AmorServ (Information Technology & Services startup, 11 - 50 employees)
https://linktr.ee/

Job description

Role: Lead Data Engineer

Location: Romania, Ukraine (Remote)

Years of Experience: 5-7 years

Pay: $84,000 - $90,000 PA

Required Skills: Python, SQL, Databricks, Snowflake, ETL, AWS, GCP, Airbyte, Postgres, Kafka, Sisense, CircleCI, Grafana, Kubernetes (EKS)

Language Required: English C1 Level

This role emphasizes creating robust, scalable, and secure data lake, ingestion, and distribution systems. As a key leader, you will work across the data stack and collaborate with cross-functional teams, setting high standards in data governance, quality, and security.

Key Responsibilities

  • Data Lake and Pipelines: Design and implement features for ingestion, enrichment, and transformation, ensuring availability, reliability, and scalability.
  • System Design and Project Alignment: Drive system design, creating alignment and feasibility for projects across teams.
  • Full Data Stack: Work across ETL processes, data warehousing, visualization, and cloud infrastructure.
  • Data Management and Governance: Define and implement best practices to ensure data quality, security, and consistency.
  • Optimization: Develop and optimize Spark jobs, notebooks, and pipelines in Databricks.
  • Collaboration and Mentorship: Partner with the Chief Architect, mentor engineers, and support DataOps culture.
  • Stakeholder Communication: Communicate data strategies and plans to stakeholders across the organization.

Required Skills and Experience

  • Data Engineering: 5+ years in data pipeline design and implementation using Databricks and Python (or PySpark).
  • SQL & Visualization: Proficiency in SQL and visualization tools such as Sisense or Power BI.
  • Cloud Platforms: Experience with AWS or GCP, focusing on cloud-native data engineering.
  • ETL and Data Governance: Expertise in ETL processes and data governance principles to maintain quality and consistency.

Must-Have:

  • Python, SQL, Databricks, ETL processes
  • AWS or GCP
  • Visualization tools (Sisense or similar)
  • Airbyte or comparable ETL tools
  • Terraform for environment templating; CI/CD tools (CircleCI, GitHub Actions)

Nice to Have:

  • Kedro, Kafka, and data mesh design
  • Postgres, Terraform, CircleCI, Grafana
  • Knowledge of microservices and data modeling

Additional Skills

  • Technical: Designing large-scale data systems, SQL/NoSQL databases, and familiarity with streaming services like Kafka.
  • Soft Skills: Strong client-facing abilities, strategic planning, and stakeholder management.

Our Tech Stack

  • Python, SQL, Databricks, Snowflake, Airbyte, Postgres, Kafka, Sisense, CircleCI, Grafana, Kubernetes (EKS)

(Knowledge of systems we are deprecating, such as Druid or Datadog, is a plus)

Required profile

Experience

Level of experience: Senior (5-10 years)
Industry: Information Technology & Services
Spoken language(s): English

Other Skills

  • Collaboration
  • Mentorship
