Qualifications:
Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
3-5 years of experience in data engineering or database development.
Proficiency in SQL and familiarity with cloud platforms such as AWS, Azure, or GCP.
Strong problem-solving skills and the ability to communicate technical concepts to non-technical stakeholders.
Key responsibilities:
Develop and implement data pipelines to process and transform large volumes of data.
Collaborate with business partners to translate requirements into technical specifications.
Optimize data storage and processing systems for performance and scalability.
Ensure data security and compliance with regulatory requirements.
Vital Care (www.vitalcare.com) is the premier pharmacy franchise business with franchises serving a wide range of patients, including those with chronic and acute conditions. Since 1986, our passion has been improving the lives of patients and healthcare professionals through locally-owned franchise locations across the United States. We have over 100 franchised Infusion pharmacies and clinics in 35 states, focusing on the underserved and secondary markets. We know infusion services, and we guide owners along the path of launch, growth, and successful business operations.
What we offer:
Comprehensive medical, dental, and vision plans, plus flexible spending and health savings accounts.
Paid time off, personal days, and company-paid holidays.
Paid parental leave.
Volunteer days off.
Income protection programs, including company-sponsored basic life insurance and long-term disability insurance, as well as employee-paid voluntary life, accident, critical illness, and short-term disability insurance.
401(k) matching and tuition reimbursement.
Employee assistance programs covering mental health, financial, and legal services.
Rewards programs offered by our medical carrier.
Professional development and growth opportunities.
Employee Referral Program.
Job Summary:
The Data Engineer will be responsible for the development and implementation of technology and data solutions to meet current and future data warehousing and reporting needs. The engineer will collaborate with the Technical Lead and business partners to build and support the company's data warehouse, ETL, and reporting platforms.
Duties/Responsibilities:
Data Architecture and Design:
Implement scalable and efficient data pipelines to ingest, process, and transform large volumes of structured and unstructured data from multiple sources.
Define and implement data models, schemas, and storage solutions that optimize performance, reliability, and scalability.
Implement data storage technologies, frameworks, and platforms based on the organization's requirements and objectives.
Data Pipeline Development and Automation:
Develop and maintain robust, fault-tolerant data pipelines using ETL (Extract, Transform, Load) processes.
Implement data validation, testing, error handling, and monitoring mechanisms to ensure data quality and integrity throughout the pipeline lifecycle.
Automate data workflows, scheduling, and orchestration tasks using tools such as Apache Airflow, Luigi, or similar frameworks.
Data Integration and Transformation:
Integrate data from various internal and external sources, including databases, APIs, logs, and streaming platforms, to support business analytics, reporting, and machine learning initiatives.
Transform raw data into meaningful insights and actionable information through data cleansing, enrichment, normalization, and aggregation techniques.
Collaborate with business partners to translate business requirements into technical specifications and data processing workflows.
Performance Optimization and Scalability:
Optimize the performance and efficiency of data pipelines, storage systems, and processing engines to meet service level agreements (SLAs) and performance targets.
Identify and resolve performance bottlenecks, scalability issues, and resource constraints through system tuning, caching strategies, and infrastructure scaling.
Data Security and Compliance:
Implement and enforce data security controls, encryption mechanisms, and access policies to protect sensitive information and ensure compliance with regulatory requirements.
Monitor data access patterns, audit trails, and user activities to detect and mitigate potential security threats and data breaches.
Documentation and Collaboration:
Document data pipelines, workflows, and technical specifications to facilitate knowledge sharing, collaboration, and troubleshooting.
Required Skills/Abilities:
Proficiency in SQL and other programming languages, along with experience using data processing frameworks.
Strong understanding of distributed systems, data modeling, database design principles, and performance optimization techniques.
Hands-on experience with cloud platforms (e.g., AWS, Azure, GCP) and their data services (e.g., S3, Redshift, BigQuery, Dataflow).
Familiarity with containerization and orchestration tools like Docker, Kubernetes, and infrastructure-as-code (IaC) concepts.
Excellent problem-solving skills, attention to detail, and the ability to work effectively in a fast-paced, collaborative environment.
Strong communication and interpersonal skills, with the ability to communicate complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams.
Preferred: Experience with cloud-based ETL tools (Azure Data Factory, Fivetran, Airbyte).
Preferred: Experience leveraging data transformation tools (dbt, Matillion) and integrating them with cloud data warehouses (Snowflake preferred).
Education and Experience:
Bachelor’s or master’s degree in Computer Science, Engineering, or a related field.
Minimum of 3-5 years in data engineering, database development, or related roles, focusing on designing and building data pipelines and infrastructure.
Physical Requirements:
Sitting: Prolonged periods of sitting are typical, often for the majority of the workday.
Keyboarding: Frequent use of a keyboard for typing and data entry.
Reaching: Occasionally reaching for items such as files, documents, or office supplies.
Fine Motor Skills: Precise movements of the fingers and hands for tasks like typing, using a mouse, and handling paperwork.
Visual Acuity: Good vision for reading documents, computer screens, and other detailed work.
Be part of an organization that invests in you! We are reviewing applications for this role and will contact qualified candidates for interviews.
Vital Care Infusion Services is an equal-opportunity employer and values diversity at our company. We do not discriminate on the basis of color, race, sex, age, religion, national origin, disability, genetic information, gender identity, sexual orientation, veterans’ status, or any other basis protected by applicable federal, state, or local law.
Vital Care Infusion Services participates in E-Verify.
This position is full-time.