Experience: 3-12 years
Location: Remote
Responsibilities
· Sound knowledge of Spark architecture and distributed computing.
· Proficient in Spark, including core RDD and DataFrame functions, troubleshooting, and performance tuning.
· Good understanding of object-oriented and functional programming concepts, with hands-on experience in Scala or Java and excellent programming logic and technique.
· Good experience in SQL; able to write complex queries.
· Manage a team of Associates and Senior Associates, ensuring utilization is maintained across the project.
· Mentor new members and onboard them to the project.
· Understand client requirements and be able to design, develop from scratch, and deliver.
· AWS cloud experience preferred.
· Design, build, and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services: DynamoDB, Redshift, Kinesis, Lambda, S3, etc. (preferred)
· Hands-on experience using AWS management tools (CloudWatch, CloudTrail) to proactively monitor large and complex deployments (preferred)
· Experience analyzing, re-architecting, and re-platforming on-premises data warehouses to data platforms on AWS (preferred)
· Lead client calls to flag delays, blockers, and escalations, and to collate requirements.
· Manage project timelines and client expectations, and meet deadlines.
· Prior experience in project and team management roles.
· Facilitate regular team meetings.
· Understand business requirements, analyze different approaches, and plan project deliverables and milestones.
· Optimize, maintain, and support data pipelines.
· Strong analytical and logical skills.
· Ability to comfortably tackle new challenges and learn.
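As an illustration of the functional, DataFrame-style transformations this role works with, here is a minimal sketch using plain Java streams (so it runs without a Spark cluster); the `Order` record and `totalsByRegion` method are hypothetical examples, not part of any required codebase:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch only: the filter/group/aggregate style that Spark's
// RDD and DataFrame APIs use, shown on in-memory Java collections.
public class TransformSketch {
    // A small immutable record -- the kind of OOP modeling the role calls for
    public record Order(int id, double amount, String region) {}

    // Total order value per region: the same pattern as
    // df.filter(...).groupBy("region").agg(sum("amount")) in Spark
    public static Map<String, Double> totalsByRegion(List<Order> orders) {
        return orders.stream()
                .filter(o -> o.amount() > 0) // drop invalid rows
                .collect(Collectors.groupingBy(
                        Order::region,
                        Collectors.summingDouble(Order::amount)));
    }
}
```

On a real cluster the same shape of code runs distributed, which is where the troubleshooting and performance-tuning skills above come in.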
Qualifications
• Scala/Java
• Spark
• AWS (preferred) or any cloud
• SQL (Intermediate to advanced level)
• Object-Oriented Programming
• Good knowledge of ETL/ELT processes and tools
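To make the ETL/ELT item concrete, here is a minimal in-memory sketch of the extract-transform-load pattern; the CSV-like input and `User` record are hypothetical, and a real pipeline would read from and write to external systems (S3, Redshift, etc.):

```java
import java.util.List;
import java.util.stream.Collectors;

// Minimal ETL sketch: extract raw lines, transform them into typed
// records with validation and normalization, "load" into a collection.
public class EtlSketch {
    public record User(String name, String email) {}

    public static List<User> run(List<String> rawLines) {
        return rawLines.stream()
                .map(String::trim)
                .filter(line -> !line.isEmpty())   // skip blank rows
                .map(line -> line.split(","))
                .filter(cols -> cols.length == 2)  // drop malformed rows
                .map(cols -> new User(cols[0].trim(),
                                      cols[1].trim().toLowerCase()))
                .collect(Collectors.toList());     // the "load" step
    }
}
```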