Offer summary
Qualifications:
- Bachelor's degree in Computer Science or a related field
- 7+ years in cloud data engineering
- Expertise in Apache Spark and Databricks
- Proven experience with AWS, Azure, and GCP
- Strong programming skills in Python, Scala, and SQL
Key responsibilities:
- Architect and maintain scalable ETL pipelines using Databricks and Spark
- Design data lakes and warehouses on cloud platforms
- Optimize performance of data processing workflows
- Collaborate with cross-functional teams to deliver data solutions
- Document ETL pipelines and maintain existing systems