
AWS Data Engineer

Remote: Full Remote
Experience: Senior (5-10 years)

Offer summary

Qualifications:

  • Bachelor's Degree in Computer Science or a related field with 3-10 years of experience.
  • Experience working on AWS cloud infrastructure.
  • Proficiency in SQL, data engineering tools, and ETL processes.
  • Knowledge of Data Warehousing, Data Lake, and analytic methodologies.

Key responsibilities:

  • Design data engineering tools to manage large volumes of data.
  • Execute the complete data engineering lifecycle including data model development and optimization.
  • Optimize data pipelines using stream and batch processing frameworks.
  • Implement security measures, disaster recovery, and service reliability best practices.
Resource Informatics Group, Inc (SME): https://www.rigusinc.com/
51 - 200 Employees

Job description



Description:



Infosys is looking for self-motivated data engineers to join the Data & Analytics team. As a consultant in Data Science, you will have the opportunity to build data platforms for our global clients and work extensively on AWS and data engineering for our US-based Utilities clients. You'll work on all aspects of big, fast data systems, state-of-the-art cloud infrastructure, analytics, and cutting-edge algorithms.



Required Qualifications:

  • Design and implement modern data engineering tools to manage petabytes of data using offline/online data integrations.
  • Execute the entire data engineering lifecycle across all phases: problem formulation, data acquisition and assessment, data ingestion, feature selection and engineering, data model development and fine-tuning, and performance measurement, right up to delivery of the consumption module (application/dashboard/API).
  • Experience with Data Warehousing, Data Lake, Analytic processes, and methodologies.
  • Proficient in writing and optimizing SQL queries and other procedures/scripts
  • Build highly optimized and scalable data pipelines (ETL) using batch and stream processing frameworks (Spark)
  • Strong knowledge of data integration (ETL/ELT), data quality, and multi-dimensional data modeling
  • Build monitoring and alerting dashboards to identify operational issues in data pipelines
  • Optimize data and compute capacity resources to meet on-demand infrastructure scaling needs, improve cluster utilization and meet application SLAs
  • Work on security compliance and contribute to data and service access controls
  • Implement best practices for disaster recovery and service reliability

Mandatory skills:

  • Amazon Redshift, AWS Glue, AWS CloudTrail, Python, SQL, Amazon Kinesis Data Streams, S3 Standard, Amazon S3 Glacier, AWS Key Management Service, NoSQL





Preferred Qualifications:



  • Bachelor's Degree in Computer Science or a related technical field, and 3 – 10 years of relevant employment experience
  • Prior experience working with AWS cloud infrastructure

  • Work experience with ETL, Data Modeling, and Data Architecture.
  • Expert-level skills in writing and optimizing SQL.
  • Experience with Big Data technologies such as Hadoop/Hive/Spark.

Required profile

Experience

Level of experience: Senior (5-10 years)
