Minimum 6 years of experience creating Spark jobs using Java/Scala. Strong knowledge of Big Data tools such as Hive and HBase. Proficient in SQL and data warehouse concepts. Hands-on experience with Unix/Linux; familiarity with AWS and PySpark is a plus.
Key responsibilities:
Develop and implement data loading and transformation tasks using external sources.
Merge data, perform data enrichment, and load into target data destinations.
Utilize Spark Streaming for real-time data processing.
Analyze issues and provide solutions using strong analytical skills.
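The load → merge → enrich → load flow in the responsibilities above can be sketched in miniature. This is a plain-Python illustration of the merge-and-enrichment step only, not Spark code; all field names (`user_id`, `country`, `plan`) are hypothetical.

```python
# Minimal sketch of a merge-and-enrich step in plain Python (no Spark
# dependency). In a real Spark job this would be a DataFrame join; here
# a dict index plays the role of the lookup side of the join.

def merge_and_enrich(events, users):
    """Join event records with user records on user_id and enrich
    each event with the user's country and plan."""
    users_by_id = {u["user_id"]: u for u in users}
    enriched = []
    for event in events:
        user = users_by_id.get(event["user_id"])
        if user is None:
            continue  # inner-join semantics: drop events with no matching user
        enriched.append({**event,
                         "country": user["country"],
                         "plan": user["plan"]})
    return enriched

events = [{"user_id": 1, "action": "login"},
          {"user_id": 2, "action": "purchase"},
          {"user_id": 9, "action": "login"}]   # no matching user record
users = [{"user_id": 1, "country": "IN", "plan": "free"},
         {"user_id": 2, "country": "US", "plan": "pro"}]

print(merge_and_enrich(events, users))
```

The unmatched event (`user_id` 9) is dropped, mirroring an inner join; a left join with default enrichment values would keep it instead.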
Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
Our success stems from how seamlessly we integrate with our clients.
Requirements:
1. Minimum 6 years of experience creating Spark jobs using Java/Scala.
2. Strong experience developing data loading and transformation tasks using external sources: merging data, performing data enrichment, and loading into target data destinations.
3. Good knowledge of Big Data tools, including Hive and HBase tables.
4. Experience with Spark Streaming.
5. Good knowledge of SQL.
6. Good knowledge of data warehouse concepts.
7. Strong analytical skills for issue analysis.
8. Hands-on Unix/Linux knowledge.
9. Knowledge of AWS and PySpark is an advantage.
Required profile
Spoken language(s):
English