
Lead Software Engineer (Big Data and Azure)

Remote: Full Remote
Experience: Mid-level (2-5 years)

Offer summary

Qualifications:

8-12 years of experience in Big Data and related technologies; hands-on experience with Apache Spark and Azure components; strong programming skills in Python.

Key responsibilities:

  • Design & implement Big Data solutions
  • Create Big Data pipelines with Azure components
PradeepIT Consulting Services Pvt Ltd
51 - 200 Employees

Job description

Accelerate your career with PradeepIT

PradeepIT is one of the largest globally recognized IT consulting firms, connecting India's deeply vetted talent to global customers.

We're headquartered in Bengaluru, the Silicon Valley of India. PradeepIT's customers include SAP Labs, Bosch, Rolls-Royce, Daikin, Daimler, and J&J, along with hundreds of other Fortune 500 companies and fast-growing startups.

Through continuous hard work, and working remotely by choice, PradeepIT is certified as a Great Place to Work! Trusted by leading brands and Fortune 500 companies from around the world, we have achieved:

6+ Years of Experience

580+ Open Source Technology Consultants

120+ SAP Consultants

40+ Salesforce Consultants

60+ Adobe Consultants

100+ Mobility Consultants

890+ Clients in APAC, EMEA & USA

Our Beliefs

PradeepIT believes in connecting people across the globe and providing them the opportunity to work remotely. As a people-first organization, PradeepIT constantly looks for individuals who won't just keep up but will break new ground, work with cutting-edge technology, and ramp up their skills, free of charge, with courses created by our Vertical Heads and Senior Architects through the PradeepIT Academy.

Requirements

  • 8 to 12 years of experience in Big Data and related data technologies
  • Expert-level understanding of distributed computing principles
  • Expert-level knowledge of and experience with Apache Spark
  • Hands-on experience with Azure Databricks, Data Factory, Data Lake Store/Blob Storage, and SQL DB
  • Experience creating Big Data pipelines with Azure components (a minimal sketch follows this list)
  • Hands-on programming with Python
  • Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
  • Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
  • Experience with messaging systems such as Kafka or RabbitMQ
  • Good understanding of Big Data querying tools such as Hive and Impala
  • Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
  • Good understanding of SQL queries, joins, stored procedures, and relational schemas
  • Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
  • Knowledge of ETL techniques and frameworks
  • Experience with performance tuning of Spark jobs
  • Experience designing and implementing Big Data solutions
  • Practitioner of Agile methodology
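
By way of illustration, the following is a minimal sketch of the kind of Azure/Spark pipeline work described above, assuming a Databricks (or any PySpark) environment with access to an ADLS Gen2 account; the storage account, container, and column names (storageacct, raw, curated, event_ts, event_type) are hypothetical placeholders, not part of the role's actual stack.

```python
# Minimal PySpark sketch (illustrative only): read raw JSON events from
# ADLS Gen2, apply a light clean-up, and write a date-partitioned Parquet
# table for downstream querying. All storage and column names are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

raw_path = "abfss://raw@storageacct.dfs.core.windows.net/events/"
curated_path = "abfss://curated@storageacct.dfs.core.windows.net/events/"

events = (
    spark.read.json(raw_path)                         # raw files landed by e.g. Data Factory
    .filter(F.col("event_type").isNotNull())          # drop malformed records
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

(
    events.write
    .mode("overwrite")
    .partitionBy("event_date")                        # partition for efficient querying
    .parquet(curated_path)
)
```

On Databricks the same write would more typically target a Delta table (.format("delta") instead of .parquet(...)), but plain Parquet keeps the sketch dependency-free.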


Technologies

  • Big Data
  • Azure
  • Spark
  • Python
  • Azure Databricks
  • Azure Data Factory

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s):
English
Check out the description to know which languages are mandatory.
