Intermediate Data Engineer


Offer summary

Qualifications:

  • Bachelor’s or Master’s degree in Computer Science or a related field required.
  • 3+ years of experience building data pipelines and frameworks.
  • Proficiency in Scala and working knowledge of Apache Spark are essential.
  • Fluency in English is required.

Key responsibilities:

  • Build the next generation data warehouse and event stream platform.
  • Define streaming event data feeds for real-time analytics and reporting.
  • Enhance automation and performance of real-time and batch data environments.
  • Provide mentorship and collaborate with team members on various projects.

Softgic https://softgic.co/
51 - 200 Employees

Job description

This is a remote position.

At Softgic we work with the coolest, with those who build, with those who love what they do, with those who give 100% in attitude, because that's our #Cooltura. Join our purpose of making life easier with technology and be part of our team as a Data Engineer.

Compensation:
USD 20 - 25/hour.

Location:
Remote (for residents of Mexico, Guatemala, Colombia, Peru, Chile, Argentina, Paraguay, Brazil, Honduras, Jamaica, the Dominican Republic, Belize, Spain, the United States, Canada, Kenya, South Africa, India, and the Philippines).

Mission of Softgic:
At Softgic S.A.S. we work toward the digital and cognitive transformation of our clients. Aware that quality is an essential factor for us, we incorporate the following principles into our policy:
  • Deliver quality products and services.
  • Achieve the satisfaction of our internal and external clients.
  • Encourage in our team the importance of training to grow professionally and personally through development plans.
  • Comply with the applicable legal and regulatory requirements.
  • Promote continuous improvement of the quality management system.
What makes you a strong candidate:
  • You have 3+ years of experience in data framework and pipeline development.
  • You are proficient in Scala.
  • You have working knowledge of Apache Spark.
  • English - Native or fully fluent.
Responsibilities and more:
This vacancy is 100% remote for residents of: Colombia, Guatemala, Mexico, Peru, Chile, Belize, United States, Canada, Spain, Dominican Republic, Jamaica, Honduras, Brazil, Paraguay, Argentina, South Africa, Kenya, India, and the Philippines.

We are seeking a Data Engineer to help transform our data infrastructure, migrating from relational databases to a modern big data architecture. You will play a key role in defining event-driven data feeds, improving automation, and enhancing observability, alerting, and performance.

We’ve built a strong data engineering team to date, but we have a lot of work ahead of us, including:
  • Migrating from relational databases to a streaming and big data architecture, including a complete overhaul of our data feeds.
  • Defining streaming event data feeds required for real-time analytics and reporting.
  • Leveling up our platform, including enhancing our automation, test coverage, observability, alerting, and performance.
Responsibilities:
  • Build our next generation data warehouse.
  • Build our event stream platform.
  • Translate user requirements for reporting and analysis into actionable deliverables.
  • Enhance automation, operation, and expansion of real-time and batch data environments.
  • Manage numerous projects in an ever-changing work environment.
  • Extract, transform, and load complex data into the data warehouse using cutting-edge technologies.
  • Build processes for top-notch security, performance, reliability, and accuracy.
  • Provide mentorship and collaborate with fellow team members.


Requirements
Qualifications:
  • Bachelor’s or Master’s degree in Computer Science, Information Systems, Operations Research, or a related field required.
  • 3+ years of experience building data pipelines.
  • 3+ years of experience building data frameworks for unit testing, data lineage tracking, and automation.
  • Fluency in Scala is required.
  • Working knowledge of Apache Spark.
  • Familiarity with streaming technologies (e.g., Kafka, Kinesis, Flink).

Nice-to-Haves:
  • Experience with Machine Learning.
  • Familiarity with Looker a plus.
  • Knowledge of additional server-side programming languages (e.g., Golang, C#, Ruby).

Benefits
  • We're certified as a Great Place to Work.
  • Opportunities for advancement and growth.
  • Paid time off.
  • Formal education and certifications support.
  • Benefits with partner companies.
  • Referral program.
  • Flexible working hours.




Required profile

Spoken language(s):
English

Other Skills

  • Mentorship
  • Teamwork
  • Collaboration
  • Problem Solving
