Data Engineer - Mukta - Tookitaki

Remote: Full Remote
Contract:
Work from:

Offer summary

Qualifications:

  • Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent.
  • At least 2 years of experience in big data systems such as Hadoop.
  • Minimum 2 years of experience each in Scala, Spark, HBase, Hive, and RDBMS.
  • At least 1 year of experience with CI/CD practices.

Key responsibilities:

  • Develop and optimize REST API services using Scala frameworks.
  • Troubleshoot and optimize complex queries on the Spark platform.
  • Build and optimize big data pipelines and architectures.
  • Conduct cost estimation based on design and development requirements.

CodersBrain SME https://www.codersbrain.com/
201 - 500 Employees

Job description

Requirements

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform.
  • Expertise in building and optimizing big data and ML pipelines, architectures, and data sets.
  • Knowledge of modelling unstructured data into structured data designs.
  • Experience with big data access and storage techniques.
  • Experience in cost estimation based on design and development requirements.
  • Excellent debugging skills across the technical stack above, including analyzing server and application logs.
  • Highly organized, self-motivated, proactive, and able to propose the best design solutions.
  • Good time management and multitasking skills to meet deadlines, working both independently and as part of a team.
Experience (Must have):
a) Scala: Minimum 2 years of experience
b) Spark: Minimum 2 years of experience
c) Hadoop: Minimum 2 years of experience (security, Spark on YARN, architectural knowledge)
d) HBase: Minimum 2 years of experience
e) Hive: Minimum 2 years of experience
f) RDBMS (MySQL / Postgres / MariaDB): Minimum 2 years of experience
g) CI/CD: Minimum 1 year of experience
Experience (Good to have):
a) Kafka
b) Spark Streaming
c) Apache Phoenix
d) Caching layer (Memcache / Redis)
e) Spark ML
f) FP (Scala Cats / Scalaz)
Qualifications: Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent, with at least 2 years of experience in big data systems such as Hadoop, as well as cloud-based solutions.

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Multitasking
  • Time Management
  • Teamwork
  • Proactivity
  • Troubleshooting (Problem Solving)
  • Self-Motivation
