Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent; at least 2 years of experience in big data systems such as Hadoop; proficiency in Scala, Spark, HBase, Hive, and RDBMS with a minimum of 2 years of experience in each; experience with CI/CD practices for at least 1 year.
Key responsibilities:
Develop and optimize REST API services using Scala frameworks (see the sketch after this list).
Troubleshoot and optimize complex queries on the Spark platform.
Build and optimize big data pipelines and architectures.
Conduct cost estimation based on design and development requirements.
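As referenced above, a minimal sketch of the kind of REST service involved, assuming Akka HTTP as the framework (the posting only requires "one of the Scala frameworks"); the object name, route, and port are hypothetical:

import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object JobApi extends App {
  // ActorSystem backing the HTTP server (assumed Akka HTTP 10.2+ API).
  implicit val system: ActorSystem = ActorSystem("job-api")

  // Hypothetical endpoint: GET /health returns a plain-text status.
  val route =
    path("health") {
      get {
        complete("OK")
      }
    }

  // Bind the server on port 8080 and keep it running.
  Http().newServerAt("0.0.0.0", 8080).bind(route)
}

Running the object starts a server that answers GET /health with "OK".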
Coders Brain is a global leader in IT services, digital and business solutions that partners with its clients to simplify, strengthen and transform their businesses. We ensure the highest levels of certainty and satisfaction through a deep-set commitment to our clients, comprehensive industry expertise and a global network of innovation and delivery centers.
Our success comes from how closely we integrate with our clients.
Requirements:
• Experience in developing REST API services using one of the Scala frameworks
• Ability to troubleshoot and optimize complex queries on the Spark platform (see the sketch after this list)
• Expertise in building and optimizing ‘big data’ data/ML pipelines, architectures, and data sets
• Knowledge of modelling unstructured data into structured data designs
• Experience with big data access and storage techniques
• Experience in cost estimation based on design and development requirements
• Excellent debugging skills across the technical stack above, including analysis of server and application logs
• Highly organized, self-motivated, and proactive, with the ability to propose the best design solutions
• Good time management and multitasking skills to meet deadlines, working both independently and as part of a team
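A minimal sketch of the Spark query-optimization work referenced above: replacing a shuffle join of a large fact table with a broadcast join against a small dimension table. The paths, table shapes, and user_id join key are hypothetical; only spark-sql is assumed.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object JoinTuning {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("join-tuning").getOrCreate()

    // Hypothetical inputs: a large fact table and a small dimension table.
    val events = spark.read.parquet("/data/events")
    val users  = spark.read.parquet("/data/users")

    // Hint the optimizer to broadcast the small side so the large table
    // is never shuffled across the cluster.
    val joined = events.join(broadcast(users), Seq("user_id"))

    // Inspect the physical plan to confirm a BroadcastHashJoin was chosen.
    joined.explain()

    spark.stop()
  }
}

Calling explain() on the result is the usual way to confirm the planner chose a BroadcastHashJoin instead of a SortMergeJoin.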
Experience (Must have):
a) Scala: Minimum 2 years of experience
b) Spark: Minimum 2 years of experience
c) Hadoop: Minimum 2 years of experience (security, Spark on YARN, architectural knowledge)
d) HBase: Minimum 2 years of experience
e) Hive: Minimum 2 years of experience
f) RDBMS (MySQL / PostgreSQL / MariaDB): Minimum 2 years of experience
g) CI/CD: Minimum 1 year of experience
Experience (Good to have):
a) Kafka
b) Spark Streaming (see the sketch after this list)
c) Apache Phoenix
d) Caching layer (Memcache / Redis)
e) Spark ML
f) FP (Cats / Scalaz)
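For the Kafka and Spark Streaming items above, a minimal Structured Streaming sketch; the broker address, topic name, and checkpoint path are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath:

import org.apache.spark.sql.SparkSession

object KafkaIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-ingest").getOrCreate()

    // Subscribe to a hypothetical topic; keys and values arrive as bytes.
    val stream = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

    // Print each micro-batch to the console; a production pipeline would
    // sink to HBase, Hive, or another store instead.
    stream.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/kafka-ingest")
      .start()
      .awaitTermination()
  }
}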
Qualifications: Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent, with at least 2 years of experience in big data systems such as Hadoop as well as cloud-based solutions.
Required profile
Spoken language(s): English