Big Data Engineer / Lead / Associate Architect

Remote: Full Remote

Offer summary

Qualifications:

  • Expertise in Python/Java, Big Data, Apache Spark, and Hadoop is mandatory.
  • 5 to 12 years of experience in relevant fields is required.
  • Strong analytical, problem-solving, and data analysis skills are essential.
  • Familiarity with SQL and Big Data querying technologies such as Hive and Spark SQL is important.

Key responsibilities:

  • Build and optimize big data pipelines, architectures, and datasets.
  • Support data transformation processes and manage data structures and ingestion.
  • Collaborate with business teams to resolve technical challenges and understand requirements.
  • Automate SQL transformations and perform data validation in the product.

Intuitive Apps Inc. (Startup) | https://www.intuitiveapps.com
51 - 200 Employees

Job description

Hi to all tech enthusiasts out there! We are hiring for a reputed IT client of ours for the positions below:
- Expertise and hands-on experience in Python/Java, Big Data, Apache Spark, and Hadoop (mandatory)
Designation: Software Engineer
Good to Have:
• Exposure to Hadoop data lakes built on any of these distributions: Cloudera, Hortonworks, MapR, EMR, HDInsight, Dataproc
• Exposure to the analytics stack of any public cloud (AWS, Azure, GCP)
 
Experience:
5 to 12 Years
Notice Period: 15 to 20 Days
 
Qualifications
• Strong expertise in SQL
• Strong expertise in one or more Big Data querying technologies: Hive, Spark SQL, Impala, Presto, Phoenix (see the illustrative sketch after this list)
• Strong analytical, problem-solving, data analysis and research skills.
• Ability to work with various business teams to resolve technical challenges and understand requirements.
• Demonstrable ability to interact, collaborate, and drive consensus and confidence among different groups, both onshore and offshore.
• Demonstrable ability to think outside the box rather than depend on readily available tools.
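For a rough sense of what the querying requirement above involves, a minimal Spark SQL sketch is shown below; the session name, table, and columns are hypothetical placeholders, not part of any client system.

```python
# Minimal illustrative sketch (assumptions: PySpark installed; table and column names are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("querying-example").getOrCreate()

# Hypothetical source data standing in for an ingested dataset.
orders = spark.createDataFrame(
    [("2024-01-01", "retail", 120.0),
     ("2024-01-02", "retail", 80.0),
     ("2024-01-02", "wholesale", 500.0)],
    ["order_date", "channel", "amount"],
)
orders.createOrReplaceTempView("orders")

# A typical analytical query expressed in Spark SQL (Hive-compatible syntax).
daily_revenue = spark.sql("""
    SELECT order_date, channel, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date, channel
    ORDER BY order_date
""")
daily_revenue.show()
```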
 
Responsibilities:
• Build and optimize ‘big data’ pipelines, architectures and data sets.
• Build processes supporting data transformation, data structures, data ingestion, metadata, dependency and workload management.
• Should be able to understand, create, modify, and optimize SQL queries.
• Should be able to work with SQL engines and cloud data warehouses.
• Should be able to perform data validation.
• Should be able to automate SQL transformations in the product (see the sketch after this list).
• Work closely with the Solution Architect / Architect to drive solution architecture, systems and interface design, data analysis, and scenario and use case analysis.
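To make the pipeline, SQL-transformation, and validation items above concrete, a skeletal PySpark job might look like the sketch below; every path, table name, and validation rule here is a hypothetical placeholder rather than a prescribed implementation.

```python
# Minimal illustrative sketch (assumptions: PySpark installed; paths, tables, and rules are hypothetical).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline-example").getOrCreate()

# Ingest: read a raw dataset (hypothetical path; header row assumed).
raw = spark.read.option("header", True).csv("/data/raw/transactions.csv")
raw.createOrReplaceTempView("raw_transactions")

# Transform: the SQL step that would be automated/scheduled in the product.
cleaned = spark.sql("""
    SELECT CAST(txn_id AS BIGINT) AS txn_id,
           CAST(amount AS DOUBLE) AS amount,
           TO_DATE(txn_date)      AS txn_date
    FROM raw_transactions
    WHERE amount IS NOT NULL
""")

# Validate: fail fast if a basic expectation does not hold.
if cleaned.filter("amount < 0").count() > 0:
    raise ValueError("Validation failed: negative amounts found")

# Load: write the validated dataset (hypothetical output location).
cleaned.write.mode("overwrite").parquet("/data/curated/transactions")
```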
 
 

Required profile

Experience: 5 to 12 years

Spoken language(s):
English

Other Skills

  • Collaboration
  • Analytical Thinking
  • Research
  • Problem Solving
