Data Engineer

Remote: Full Remote
Experience: Mid-level (2-5 years)

OpenSC http://www.opensc.org
11 - 50 Employees

Job description

Why us?

Transforming global food systems is key to tackling the climate crisis and protecting people and the planet. We believe a new and different type of transparency is needed to drive that change.

That is why we built OpenSC, co-founded by WWF and The Boston Consulting Group.

Our platform uses data directly from the source of our customers’ supply chains and analyses it to create direct, actionable insights about sustainable and ethical production.

Automated - Continuous - Granular - Immutable

This new type of transparency enables our customers to know, influence, and prove the sustainability of their supply chains.

Your mission
  • Further design and build OpenSC’s data platform across all data domains: collection, storage, transformation, and analysis

  • Develop high-quality data pipelines that transform structured, semi-structured, and unstructured data from various sources into a standardized data model

  • Expand OpenSC’s data model to accommodate a growing number of use cases

  • Assume responsibility for the business interpretation of data and actively contribute to data science and analytics projects

  • Adhere to software engineering best practices regarding version control, testing, code reviews, deployment, etc.

  • Provide support and guidance on data security and cloud infrastructure topics, ensuring a secure and efficient handling of data

  • Communicate effectively with technical and non-technical stakeholders

  • Contribute to elevating the team through knowledge sharing on data engineering best practices

Your profile
  • 4+ years of relevant industry experience building production-level data products

  • Proven solid foundation in core data concepts (e.g. integration, modeling, security, lineage, governance, etc.)

  • Excellent Python skills and advanced competency with the main data ecosystem libraries

  • Advanced knowledge of SQL and relational databases (e.g. PostgreSQL), plus SQL transformation tooling (e.g. dbt)

  • Knowledge of Cypher and graph databases (e.g. Neo4j, Kuzu, etc.)

  • Experience with data pipeline orchestration tools (e.g. Dagster, Airflow, etc.)

  • Experience working with cloud infrastructure and services (ideally on AWS)

  • Experience working with different data modeling patterns and ability to abstract complex real-life information into a standardized data model description

  • Demonstrated familiarity with version control (Git), containerization (Docker, Kubernetes), CI/CD (e.g. CircleCI, GitHub Actions), and IaC (Terraform)

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English

Other Skills

  • Communication
