Senior Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

5+ years of experience in data engineering or a relevant field; familiarity with Databricks, Airflow, Spark, Snowflake, and AWS.

Key responsibilities:

  • Drive data platform architecture and modernization
  • Ensure reliability and cost-effectiveness of the data pipeline
  • Assist customer support teams in problem-solving
  • Contribute to data governance and security practices
  • Participate in on-call rotation for team
CM Group
1001 - 5000 Employees

Job description

The Company

Marigold is the largest sender of personalized email on the planet. But we’re so much more than an email provider or cross-channel marketing hub. We’re committed to creating true partnerships with our clients, not just being another vendor. Working with some of the biggest names in ecommerce and publishing, we help deliver personalized email, mobile messaging, and onsite experiences to billions of consumers every year.

The Role

Marigold Engage by Sailthru is putting together a team to support and operate our data engineering platform, including our data warehouse, pipelines, and machine learning systems.

This job requires a high level of technical competency and a desire to own and evolve the data platform that our product relies upon. If you’re passionate about building cutting-edge data solutions, we want you!

Responsibilities

  • Driving the technical direction of our data platform’s architecture, whilst modernizing legacy components.
  • Ensuring reliable and cost-effective operation of our data pipeline and warehouses. This makes up a critical component of our product and is a key production platform for us.
  • Helping our customer success and support teams with escalations, and working with the team to diagnose and fix rare and interesting problems.
  • Being part of our regular on-call rotation with the other team members (approximately four people).
  • Driving our data governance and security practices.

Requirements

  • This isn’t your first swim in the data lake. You have experience working with technologies such as Databricks, Airflow, Spark, Snowflake, and AWS, and can hit the ground running as we grow and develop our architecture (see the sketch after this list).
  • Approximately 5+ years of experience in data engineering or another relevant technical field.
  • Experience with Databricks and AWS is strongly desired.
  • Experience with Unity Catalog, Snowflake, and Spark is helpful.
  • Comfort writing and reviewing code written in Python and Java.
  • An understanding of applications that contribute to and consume from the data lake, including event-driven architecture, Kafka, and a conventional SaaS stack.
  • Enough AWS knowledge to understand S3, IAM, and compute workloads, and to keep costs under control.
  • An interest in moving toward the data science side of the field.
  • (Nice to have) Experience with machine learning, including training models.
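
To make the stack above concrete, here is a minimal sketch of the kind of Airflow orchestration this role involves. It is illustrative only, not Marigold’s actual pipeline: the DAG id, task logic, and daily schedule are hypothetical placeholders, and it assumes Apache Airflow 2.4 or newer.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_events(**context):
        # Placeholder: a real task might pull raw events from S3 or Kafka
        # into a staging area for the partition date being processed.
        print("extracting events for", context["ds"])


    def load_to_warehouse(**context):
        # Placeholder: a real task might trigger a Spark job on Databricks
        # or a COPY INTO statement in Snowflake.
        print("loading partition", context["ds"])


    with DAG(
        dag_id="example_events_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
        load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
        extract >> load

In production, the reliability and cost-effectiveness work described above would show up here as retries, alerting, and right-sizing of the compute each task launches.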

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English
