Billennium is a global IT services and solutions provider. Established in 2003, we have grown alongside the technology we work with, delivering best-in-class solutions to our clients.
With 11 offices on 3 continents, our 1,800+ IT experts work in a follow-the-sun (24/7/365) model to deliver the highest-quality IT solutions and services to businesses around the globe, helping our clients build a strong competitive advantage with technology.
Billennium is driven by purpose and powered by technology partnerships with Microsoft, Google, AWS, Salesforce, Mulesoft, Tableau, and more. These official partnerships confirm our expertise in delivering tailor-made, multi-cloud, cutting-edge IT solutions and services.
The security and stability of our clients' businesses are crucial, so we have implemented several ISO standards to strengthen these vital values, holding ISO 9001, ISO 27001, and ISO 20000-1 certificates.
We serve over 117 satisfied corporate, public, and government clients around the globe, including organizations in regulated industries and operators of critical systems at the government level. You can trust us!
As a Data Engineer, you’ll be responsible for designing, building, and optimizing scalable data pipelines and storage solutions using Azure tools, supporting analytics and machine learning needs.
Key Responsibilities:
Cloud Platform: Build and manage data solutions on Azure (Data Factory, Databricks, SQL Database, Synapse).
Data Processing & Storage: Work with Databricks (Delta Tables), ADLS, SQL Server, and Netezza to ensure efficient data handling.
Data Integration: Develop data pipelines with Azure Data Factory, with a focus on governance and compliance.
Programming & Transformation: Use Python and PySpark to create high-performance data transformations.
Data Visualization: Support data insights with Power BI and Qlik dashboards.
DevOps & CI/CD: Implement DevOps best practices, version control, and CI/CD with Git and Azure DevOps.
Agile Practices & Modeling: Work within Agile frameworks (Scrum, Kanban) and apply modern data modeling.
Qualifications:
3+ years (mid) or 5+ years (senior) of data engineering experience, with proficiency in Azure, Python, and PySpark.
What we offer:
Involvement in dynamic and interesting projects.
A collaborative and supportive team culture.
A flexible work environment.
A comprehensive benefits package tailored to your preferences.
Sound interesting? Click “Apply” for the chance to hear more!
Required profile
Industry: Information Technology & Services
Spoken language(s): English