Netomi helps companies deliver higher quality customer experiences at scale with AI. If customer service is not a priority, you don't have to read any further.
Warner Bros., WestJet, and HP trust Netomi's AI-first customer service platform to deliver eye-popping results, all while significantly reducing cost.
Netomi is building the first Relationship Operating System and its three main benefits for companies are:
- Industry-leading resolution rates (automatically resolves 80% of routine customer service inquiries)
- Improved resolution time
- Increased customer satisfaction and support quality
People love our patented, no-code platform, which works across messaging, chat, email and voice.
The platform also understands 100+ languages. ¡Qué bueno!
Netomi is based in San Francisco and has offices in Toronto, New York, and India. Want to work from your home office? We have remote options as well!
We can't hire fast enough. Join this incredible team today!
At Netomi AI, we are on a mission to create artificial intelligence that builds customer love for the world’s largest global brands.
Some of the world's largest brands already use Netomi AI's platform to solve mission-critical problems, giving you the opportunity to work with top-tier clients at a senior level and build your network.
Backed by the world’s leading investors such as Y Combinator, Index Ventures, Jeffrey Katzenberg (co-founder of DreamWorks) and Greg Brockman (co-founder & President of OpenAI/ChatGPT), you will become part of an elite group of visionaries who are defining the future of AI for customer experience. We are building a dynamic, fast-growing team that values innovation, creativity, and hard work. You will have the chance to significantly impact the company’s success while developing your skills and career in AI.
Want to become a key part of the Generative AI revolution? We should talk.
We are looking for a Senior Data Engineer with a passion for using data to discover and solve real-world problems. You will enjoy working with rich data sets and modern business intelligence technology, and seeing your insights drive features for our customers. You will also have the opportunity to contribute to the development of policies, processes, and tools that address product quality challenges, in collaboration with other teams.
What You’ll Do
Architect and implement scalable, secure, and reliable data pipelines using modern data platforms (e.g., Spark, Databricks, Airflow, Snowflake).
Develop ETL/ELT processes to ingest data from various structured and unstructured sources.
Perform Exploratory Data Analysis (EDA) to uncover trends, validate data integrity, and derive insights that inform data product development and business decisions.
Collaborate closely with data scientists, analysts, and software engineers to design data models that support high-quality analytics and real-time insights.
Lead data infrastructure projects including management of data on cloud platforms (AWS/Azure), data lake/warehouse implementations, and data quality frameworks.
Ensure data governance, security, and compliance best practices are followed.
Monitor and optimize the performance of data systems, addressing any issues proactively.
Mentor junior data engineers and contribute to establishing best practices in data engineering standards, tooling, and development workflows.
Stay current with emerging technologies and trends in data engineering and recommend improvements as needed.
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
8+ years of hands-on experience in data engineering or backend software development roles.
Proficiency with Python, SQL, and at least one data pipeline orchestration tool (e.g., Apache Airflow, Luigi, Prefect).
Strong experience with cloud-based data platforms (e.g., AWS Redshift, GCP BigQuery, Snowflake, Databricks).
Deep understanding of data modeling, data warehousing, and distributed systems.
Experience with big data technologies such as Apache Spark, Kafka, and Hadoop.
Familiarity with DevOps practices (CI/CD, infrastructure as code, containerization with Docker/Kubernetes).
Preferred Qualifications
Experience working with real-time data processing and streaming data architectures.
Knowledge of data security and privacy regulations (e.g., GDPR, HIPAA).
Exposure to machine learning pipelines or supporting data science workflows.
Familiarity with prompt engineering and how LLM-based systems interact with data.
Experience working in cross-functional teams and with stakeholders from non-technical domains.
Netomi is an equal opportunity employer committed to diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, and other protected characteristics.
Spoken language(s): English