Enterprise Data Architect

Remote: Full Remote

Offer summary

Qualifications:

  • Proficiency in programming languages such as Java or Python and familiarity with Big Data frameworks like Kafka.
  • Experience architecting and developing distributed applications in cloud environments (PaaS/SaaS), including Azure, GCP, and AWS.
  • Expertise in optimizing production-grade data pipelines for data analysis and modeling.
  • Knowledge of SQL and NoSQL databases, along with experience in graph databases.

Key responsibilities:

  • Develop and implement the Data Strategy and Roadmap for clients.
  • Guide the architecture of the Data Catalog and cloud-based CMS.
  • Lead the implementation of security architecture within the data infrastructure.
  • Integrate appropriate graph databases into the existing data systems.

Saviance Technologies Pvt. Ltd. (SME) · https://saviance.com/
51 - 200 Employees

Job description

Role: Enterprise Data Architect
Duration: 6+ Months
Location: Boston, MA (100% Remote)


About BigRio: BigRio is a remote technology consulting firm headquartered in Boston. We deliver a range of solutions, including custom machine learning/AI integrations and data warehousing and processing solutions. Our comprehensive approach serves clients across a variety of industries, thanks to our ability to consistently and quickly deliver cutting-edge, cost-conscious software solutions.

You will join our team as an Enterprise Data Architect working directly with our clients.

Responsibilities:
  • Create the Data Strategy and Roadmap
  • Guide the architecture of the Data Catalog
  • Lead the architecture for the cloud-based CMS
  • Integrate the appropriate graph database into the data infrastructure
  • Lead the security architecture and its implementation

Minimum Job Qualifications:
  • Programming in at least one modern language (Java, Python) and Big Data frameworks (Kafka)
  • Architecting and developing highly reliable, fault-tolerant distributed applications with a focus on performance and scale in the cloud (PaaS/SaaS), including Azure, GCP, and AWS
  • Experience architecting and optimizing production-grade data pipelines (stream processing and batch) to prepare datasets at scale for data analysis, modeling, and optimization
  • Experience working with Big Data components such as Spark, Storm, Flink, Kinesis, Impala, and Hive
  • Experience working with SQL and NoSQL databases
  • Graph database expertise
  • Demonstrated understanding of fast-paced Agile principles, with technical designs, iterative development, and code reviews

Preferred Job Qualifications:
  • Direct experience with Azure and AWS
  • Hands-on experience with web analytics, third-party augmentation data, and A/B testing is a plus
  • Experience hosting, provisioning, and maintaining database servers in a cloud environment
  • Higher education experience

Required profile

Experience

Spoken language(s):
English

Other Skills

  • Teamwork
