Hi to all Tech Enthusiasts out there! We are hiring for a reputed IT client of ours for the position below:
- Expertise and hands-on experience with Python/Java, Big Data, Apache Spark, and Hadoop (*Mandatory*)
Designation: Software Engineer
Good to Have:
• Exposure to Hadoop data lakes built on any of these distributions: Cloudera, Hortonworks, MapR, EMR, HDInsight, Dataproc
• Exposure to the analytics stack of any public cloud (AWS, Azure, GCP)
Experience: 5 to 12 years
Notice Period: 15 to 20 days
Qualifications
• Strong expertise in SQL
• Strong expertise in one or more Big Data querying technologies: Hive, Spark SQL, Impala, Presto, Phoenix
• Strong analytical, problem-solving, data analysis and research skills.
• Ability to work with various business teams to resolve technical challenges and understand requirements.
• Demonstrated ability to interact, collaborate, and drive consensus and confidence among different groups, both onshore and offshore.
• Demonstrated ability to think outside the box rather than depending only on readily available tools.
Roles and Responsibilities:
• Build and optimize big data pipelines, architectures, and data sets.
• Build processes supporting data transformation, data structures, data ingestion, metadata, dependency, and workload management.
• Understand, create, modify, and optimize SQL queries.
• Work with SQL engines and cloud data warehouses.
• Perform data validation.
• Automate SQL transformations in the product (see the illustrative sketch after this list).
• Work closely with the Solution Architect to drive solution architecture, systems and interface design, data analysis, and scenario/use-case analysis.
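To give candidates a concrete feel for this kind of work, here is a minimal, purely illustrative PySpark sketch of a SQL transformation with a simple validation step. All paths, table names, and column names are hypothetical placeholders, not details of the client's actual project.

```python
# Illustrative only: a minimal PySpark sketch of SQL-driven pipeline work.
# All paths, tables, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("illustrative_sql_pipeline").getOrCreate()

# Ingest raw data (hypothetical source path).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")
orders.createOrReplaceTempView("orders")

# A SQL transformation of the sort a candidate would write, tune, and automate.
daily_revenue = spark.sql("""
    SELECT order_date,
           region,
           SUM(amount) AS total_revenue,
           COUNT(*)    AS order_count
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date, region
""")

# Basic data validation before publishing downstream.
assert daily_revenue.filter("total_revenue < 0").count() == 0, "negative revenue found"

# Publish the curated data set (hypothetical target path).
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_revenue/")
```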