Research Engineer - Distributed Training

Remote: Full Remote
Contract:
Salary: 106K yearly
Experience: Mid-level (2-5 years)
Work from: California (USA), United States

Offer summary

Qualifications:

  • Strong background in AI/ML engineering
  • Experience designing and implementing pipelines for large-scale AI models
  • Deep expertise in distributed training techniques and frameworks
  • Understanding of MLOps best practices
  • Passion for advancing decentralized AI model training

Key responsibilities:

  • Lead research to build decentralized training orchestration solutions
  • Optimize performance, cost, and resource utilization of AI workloads
  • Contribute to open-source libraries for distributed model training
  • Publish research at top-tier AI conferences
  • Communicate technical project outcomes through blogs
Prime Intellect (small startup, 2 - 10 employees)
https://www.primeintellect.ai/

Job description

At Prime Intellect, we are on a mission to accelerate open and decentralized AI progress by enabling anyone to contribute compute, code, or capital to train powerful, open models. Our ultimate goal? Openly accessible AGI that benefits everyone. But we can't do it alone, and we want to build it together with you.

We are building the infrastructure for decentralized AI development at scale. We aggregate global compute and enable researchers to collaboratively train state-of-the-art models through distributed training across clusters.

As a Research Engineer working on Distributed Training, you'll play a crucial role in shaping our technological direction, focusing on our decentralized AI training stack. If you love scaling things and maximizing training efficiency, this role is for you.

Responsibilities
  • Lead and participate in novel research to build a massively scalable, highly reliable, and secure decentralized training orchestration solution.

  • Optimize the performance, cost, and resource utilization of AI workloads by leveraging the most recent advances in compute and memory optimization techniques (a brief illustration follows this list).

  • Contribute to the development of our open-source libraries and frameworks for distributed model training.

  • Publish research in top-tier AI conferences such as ICML & NeurIPS.

  • Distill highly technical project outcomes into approachable technical blog posts for our customers and developers.

  • Stay up to date with the latest advancements in AI/ML infrastructure, tooling, and decentralized training research, and proactively identify opportunities to enhance our platform's capabilities and user experience.
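
For illustration only, here is a minimal sketch of two of the compute and memory optimization techniques this kind of work touches on: mixed-precision training and activation checkpointing in PyTorch. The model, data, and hyperparameters are placeholders, not anything specific to Prime Intellect's stack.

    import torch
    from torch.utils.checkpoint import checkpoint

    # Placeholder model and data; mixed precision and activation checkpointing
    # reduce memory pressure and speed up compute on modern GPUs.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
    ).cuda()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    scaler = torch.cuda.amp.GradScaler()      # keeps fp16 gradients numerically stable

    x = torch.randn(32, 1024, device="cuda")
    target = torch.randn(32, 1024, device="cuda")

    with torch.cuda.amp.autocast():           # forward pass in mixed precision
        # Recompute activations during backward instead of storing them,
        # trading extra compute for lower peak memory.
        out = checkpoint(model, x, use_reentrant=False)
        loss = torch.nn.functional.mse_loss(out, target)

    scaler.scale(loss).backward()             # backward on the scaled loss
    scaler.step(optimizer)
    scaler.update()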

Requirements
  • Strong background in AI/ML engineering, with extensive experience in designing and implementing end-to-end pipelines for training and deploying large-scale AI models.

  • Deep expertise in distributed training techniques, frameworks (e.g., PyTorch Distributed, DeepSpeed, MosaicML’s LLM Foundry), and tools (e.g. Ray) for optimizing the performance and scalability of AI workloads.

  • Experience in large-scale model training, including distributed training techniques such as data, tensor, and pipeline parallelism (see the sketch after this list).

  • Solid understanding of MLOps best practices, including model versioning, experiment tracking, and continuous integration/deployment (CI/CD) pipelines.

  • Passion for advancing the state-of-the-art in decentralized AI model training and democratizing access to AI capabilities for researchers, developers, and businesses worldwide.

  • If you're not familiar with these, but feel that you can contribute to our mission and you're a high-energy person, get familiar with these resources (here, here and here) and please reach out!
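
As a small illustration of the data-parallel side of these requirements, below is a minimal PyTorch DistributedDataParallel (DDP) training loop. The model, dataset, and launch settings are placeholders rather than anything from Prime Intellect's codebase; it only shows the shape of the distributed training workflow the role involves.

    # Launch with: torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

    def main():
        dist.init_process_group(backend="nccl")             # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)

        model = torch.nn.Linear(512, 10).cuda(local_rank)   # placeholder model
        model = DDP(model, device_ids=[local_rank])          # all-reduce gradients across ranks
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        dataset = TensorDataset(torch.randn(4096, 512), torch.randint(0, 10, (4096,)))
        sampler = DistributedSampler(dataset)                # shard the data across ranks
        loader = DataLoader(dataset, batch_size=64, sampler=sampler)

        for epoch in range(2):
            sampler.set_epoch(epoch)                          # reshuffle differently each epoch
            for x, y in loader:
                x, y = x.cuda(local_rank), y.cuda(local_rank)
                loss = torch.nn.functional.cross_entropy(model(x), y)
                optimizer.zero_grad()
                loss.backward()                               # DDP synchronizes gradients here
                optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()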

Benefits & Perks
  • Competitive compensation, including equity and token incentives, aligning your success with the growth and impact of Prime Intellect.

  • Flexible work arrangements, with the option to work remotely or in-person at our offices in San Francisco.

  • Visa sponsorship and relocation assistance for international candidates.

  • Quarterly team off-sites, hackathons, conferences and learning opportunities.

  • Opportunity to work with a talented, hard-working and mission-driven team, united by a shared passion for leveraging technology to accelerate science and AI.

We raised a $5.5 million seed round from an incredible group of investors including Clem from HuggingFace and Dylan Patel from SemiAnalysis.

If you're excited about the opportunity to build the foundation for the future of decentralized AI and create a platform that empowers developers and researchers to push the boundaries of what's possible, we'd love to hear from you.

Required profile

Experience

Level of experience: Mid-level (2-5 years)
Spoken language(s): English

Other Skills

  • Energetic
