Offer summary
Qualifications:
- 5+ years of data engineering experience
- Strong recent Spark and Python experience
- On-prem experience and proficiency in Linux
- In-depth knowledge of RDBMS and SQL; Java/Scala a bonus
- Understanding of Hadoop; cloud migration experience a plus
Key responsibilities:
- Design and maintain scalable data processing systems
- Optimize jobs using Kafka, Hadoop, Spark, Presto
- Monitor data quality and enhance accessibility
- Ingest, maintain, and process internal/external data
- Collaborate with and mentor the team; improve frameworks