Offer summary
Qualifications:
- Intermediate experience with PySpark
- Proficiency in Python and database management
- Experience with data modeling and ETL processes
- Familiarity with Agile methodologies (a plus)
- Exposure to cloud services such as Azure, AWS, or GCP
Key responsibilities:
- Build medium-complexity data pipelines for diverse data sources
- Clean, transform, and organize low-complexity data
- Develop and maintain data pipelines using programming languages and tools (a minimal sketch follows this list)
- Participate in team meetings to align on project demands and solutions
- Structure and organize data logically to ensure consistent manipulation
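To give a concrete sense of the pipeline work described above, here is a minimal PySpark sketch of an extract-transform-load job. The source file, column names, and output path (`orders.csv`, `order_id`, `amount`, `order_date`, `data/clean/orders`) are hypothetical placeholders, not details from this offer.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a local Spark session (a production cluster would be configured differently).
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read a hypothetical raw CSV source.
raw = spark.read.csv("data/raw/orders.csv", header=True, inferSchema=True)

# Transform: clean and organize the data for consistent downstream use.
cleaned = (
    raw.dropDuplicates(["order_id"])                      # remove duplicate records
       .filter(F.col("amount").isNotNull())               # drop rows missing a key field
       .withColumn("order_date", F.to_date("order_date")) # normalize the date type
)

# Load: write the structured result, partitioned for efficient querying.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("data/clean/orders")

spark.stop()
```

The same extract, clean, and write steps generalize to the other sources and formats a role like this typically covers.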