Offer summary
Qualifications:
- 8 to 12 years of experience in Big Data
- Expert understanding of distributed computing
- Experience with Apache Spark
- Hands-on experience with Azure Databricks and Azure Data Factory
- Proficiency in Python and SQL

Key responsibilities:
- Create and manage Big Data pipelines
- Build stream-processing systems
- Integrate data from various sources
- Tune the performance of Spark jobs
- Design and implement Big Data solutions