Big Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

  • Bachelor’s degree in Computer Science or related field.
  • 3+ years of experience as a Big Data Engineer.
  • Proficiency in big data frameworks and SQL.
  • Hands-on experience with Python and PySpark.

Key responsibilities:

  • Develop and optimize data pipelines and workflows.
  • Collaborate on data warehousing strategies and drive insights.
Evnek https://www.Evnek.com
11 - 50 Employees

Job description

This is a remote position.

Role: Big Data Developer
Years: 3+
Notice Period: Immediate Joiners
Key Responsibilities:
  • Develop, maintain, and optimize large-scale data pipelines and workflows using Spark, PySpark, Airflow, and other Big Data frameworks.
  • Work extensively with SQL, Impala, Hive, and PL/SQL to perform advanced data transformations and analytics.
  • Design and implement scalable data storage and retrieval systems, ensuring high availability and performance.
  • Utilize Sqoop and other data ingestion tools to integrate structured and unstructured data from various sources into Hadoop-based systems.
  • Collaborate with cross-functional teams to define and implement data warehousing strategies, leveraging business intelligence tools to drive insights.
  • Develop scripts and automate processes using Python and Unix/Linux to ensure efficient data handling and monitoring.
  • Troubleshoot and resolve issues with big data pipelines, ensuring data quality and integrity across platforms.
  • Stay updated on the latest trends in big data technologies and recommend new tools and techniques to enhance data processing and analytics capabilities.

Qualifications:

  • Education: Bachelor’s degree in Computer Science, Information Systems, or a related field.
  • Experience:
    • 3+ years of experience as a Big Data Engineer or similar role.
    • Proficiency in big data frameworks such as Sqoop, Spark, Hadoop, Hive, and Impala.
    • Strong SQL, Impala, Hive, and PL/SQL skills, with a deep understanding of query optimization and performance tuning.
    • Solid understanding of data warehousing concepts and experience with business intelligence (BI) tools like Tableau, Power BI, or Looker.
    • Hands-on experience with Python programming, including PySpark, for data manipulation and pipeline development.
    • Knowledge of Airflow for workflow orchestration and scheduling.
    • Working knowledge of Unix/Linux environments, including shell scripting.


Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English
