Data Engineer

Remote: Full Remote

Offer summary

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, Statistics, or a related field.
  • 3+ years of hands-on experience in data engineering, with at least 1 year working with Databricks or similar cloud-based platforms.
  • Proficient in SQL and Python for data processing and transformation.
  • Strong understanding of data warehousing concepts and best practices.

Key responsibilities:

  • Proactively manage and enhance data architecture for integration, quality, and consistency across systems.
  • Create efficient ETL/ELT pipelines to move and transform data into the data warehouse/lakehouse.
  • Integrate data from various source systems into Databricks while ensuring quality and security.
  • Document processes and follow best practices for data engineering, implementing automated monitoring for reliability.

Claroty Cybersecurity Scaleup https://www.claroty.com/
201 - 500 Employees

Job description

We’re growing and looking to hire a Data Engineer to join our regional APJ team, someone who embodies our core values: People First, Customer Obsession, Strive for Excellence, and Integrity. In this role, you will help us build a robust, scalable, and efficient data warehouse solution using Databricks.

You will be working alongside cross-functional teams to design and implement the data architecture that powers insights and decisions across the organization.

The ideal candidate will have a strong background in data engineering, with expertise in cloud-based data platforms, ETL/ELT processes, and large-scale data sets. Familiarity with data modeling and pipeline design is key.


About Claroty:   

Claroty is on a mission to secure cyber-physical systems across industrial, healthcare, commercial and public sector environments: the Extended Internet of Things (XIoT).

The Claroty Platform integrates with customers’ existing infrastructure to provide a full range of controls for visibility, exposure management, network protection, threat detection, and secure access.

Our solutions are deployed by over 1,000 organizations at thousands of sites across all seven continents.

Claroty is headquartered in New York City, with employees across the Americas, Europe, Asia-Pacific, and Tel Aviv.

The company is widely recognized as the industry leader in cyber-physical systems protection, with backing from the world’s largest investment firms and industrial automation vendors, as well as recognition from KLAS Research as Best in KLAS for Healthcare IoT Security, the Deloitte Technology Fast 500, the Forbes Cloud 100, and the Fortune Cyber 60.


Responsibilities

As a Data Engineer, your impact will be:

  • Data Architecture Ownership: Proactively manage and enhance our data architecture, ensuring integration, data quality, and consistency across different systems. Familiarity with modeling and design principles will be key.
  • Data Pipeline Development: Create efficient and reliable ETL/ELT pipelines that move and transform data from source systems (Salesforce, NetSuite, HubSpot, and other GTM platforms) into the data warehouse/lakehouse (a minimal sketch follows this list).
  • Data Integration: Integrate data from various source systems into Databricks while ensuring data quality, consistency, and security.
  • Documentation and Best Practices: Document processes, workflows, and architectures, ensuring that solutions are easily understood and maintainable. Follow industry best practices for data engineering.
  • Automation: Implement automated monitoring, error detection, and alerting to ensure the reliability and stability of the data warehouse.
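
To make the pipeline and automation responsibilities above concrete, here is a minimal sketch of an ELT job in PySpark as it might run on Databricks with Delta Lake. All table and column names (e.g. raw.salesforce_accounts, warehouse.dim_account) are hypothetical placeholders for illustration, not Claroty's actual schema.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

    # Extract: read raw CRM records previously landed in the lakehouse.
    raw = spark.read.table("raw.salesforce_accounts")

    # Transform: light cleansing and deduplication on the business key.
    accounts = (
        raw.filter(F.col("is_deleted") == False)
           .withColumn("ingested_at", F.current_timestamp())
           .dropDuplicates(["account_id"])
    )

    # Load: write a governed Delta table in the warehouse layer.
    accounts.write.format("delta").mode("overwrite").saveAsTable("warehouse.dim_account")

    # Automation: a basic reliability check; raising here fails the job so
    # platform-level monitoring and alerting can fire.
    if spark.table("warehouse.dim_account").count() == 0:
        raise ValueError("warehouse.dim_account is empty after load")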

Requirements

  • Education: Bachelor's degree in Computer Science, Engineering, Statistics, or a related field
  • Experience: 3+ years of hands-on experience in data engineering, with at least 1 year working specifically with Databricks or similar cloud-based platforms

Technical Skills:

  • Proficient in SQL and Python for data processing and transformation.
  • Strong experience with Databricks (Apache Spark, Delta Lake, etc.) and the Databricks environment.
  • Experience in building and managing ETL/ELT pipelines in Databricks.
  • Working knowledge of cloud data storage solutions (e.g., S3, ADLS, GCS).
  • Experience with data modeling, schema design, and query optimization.
  • Cloud Platforms: Experience with cloud environments (AWS, Azure, or GCP) and cloud services such as storage, compute, and orchestration (e.g., AWS S3, Azure Blob Storage, Google BigQuery).
  • Data Warehousing: Strong understanding of data warehousing concepts and best practices, including star/snowflake schema, partitioning, indexing, and data governance (see the query sketch after this list).
  • Problem-Solving: Ability to troubleshoot data issues, optimize query performance, and resolve data inconsistencies.
  • Integration of Source Systems: Experience with CRM and ERP systems (e.g., Salesforce, NetSuite, HubSpot) and data integration methodologies.
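
As an illustration of the data warehousing skills listed above, here is a sketch of a star-schema query on Databricks: a fact table joined to two dimensions, with a filter on the partition column so Spark can prune partitions. The tables (warehouse.fact_orders, warehouse.dim_customer, warehouse.dim_date) are assumed examples, not a real schema from this role.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Fact table joined to customer and date dimensions (star schema).
    # The WHERE clause filters on the partition column to enable pruning.
    revenue_by_segment = spark.sql("""
        SELECT d.year_month,
               c.segment,
               SUM(f.amount) AS revenue
        FROM warehouse.fact_orders AS f
        JOIN warehouse.dim_customer AS c ON f.customer_key = c.customer_key
        JOIN warehouse.dim_date     AS d ON f.date_key = d.date_key
        WHERE f.order_date >= DATE'2024-01-01'
        GROUP BY d.year_month, c.segment
    """)
    revenue_by_segment.show()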

Required profile

Experience

Industry: Cybersecurity
Spoken language(s): English

Other Skills

  • Problem Solving
