
Advanced Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

Strong proficiency in Python, SQL, and PySpark; experience with Databricks and Snowflake; expertise in data modeling and ETL processes; solid knowledge of Apache Spark.

Key responsibilities:

  • Design and optimize data pipelines and workflows
  • Implement ETL processes for data integration
Sequoia Global Services · Startup · http://www.sequoia-connect.com · 11–50 employees

Job description

Description

Our client represents the connected world, offering innovative and customer-centric information technology experiences, enabling Enterprises, Associates, and Society to Rise™.

They are a USD 6 billion company with 163,000+ professionals across 90 countries, serving 1,279 global customers, including Fortune 500 companies. They focus on leveraging next-generation technologies, including 5G, Blockchain, Metaverse, Quantum Computing, Cybersecurity, Artificial Intelligence, and more, to enable end-to-end digital transformation for global customers.

Our client is one of the fastest-growing brands and among the top 7 IT service providers globally. They have consistently emerged as a leader in sustainability and were recognized among the ‘2021 Global 100 Most Sustainable Corporations in the World’ by Corporate Knights.

We are currently searching for an Advanced Data Engineer:

Responsibilities:

  • Design, develop, and optimize data pipelines and workflows using Databricks, Snowflake, and Apache Spark.
  • Implement ETL processes to extract, transform, and load data across multiple systems.
  • Develop robust data models and ensure high-quality data integration.
  • Utilize Python, SQL, and PySpark for advanced data engineering tasks.
  • Orchestrate data workflows and enable real-time data processing using modern frameworks.
  • Collaborate with cross-functional teams to define data integration requirements and best practices.
  • Monitor and maintain the reliability and performance of data systems.

Requirements:

  • Strong proficiency in Python, SQL, and PySpark.
  • Hands-on experience with Databricks and Snowflake.
  • Expertise in data modeling, ETL processes, and workflow orchestration.
  • Solid knowledge of Apache Spark and real-time data processing frameworks.
  • Experience with Azure cloud services.

Desired:

  • Knowledge of advanced data engineering frameworks and tools.
  • Familiarity with data integration best practices.

Languages

  • Advanced Oral English.
  • Native Spanish.

Note:

  • Fully remote

If you meet these qualifications and are pursuing new challenges, start your application to join an award-winning employer. Explore all our job openings on the Sequoia Careers page: https://www.sequoia-connect.com/careers/.


