
LLM Engineer/Researcher

Remote: Full Remote

Offer summary

Qualifications:

Experience with LLMs and training; knowledge of quantization techniques and LLM development.

Key responsibilities:

  • Train and fine-tune LLM models
  • Design infrastructure for LLM deployment
  • Keep up with latest LLM research
Jan · Startup · 11–50 employees · https://jan.ai/

Job description


Jan is a productivity company. We build a cross-platform, local-first, AI-native framework that can be used to build anything. This includes https://jan.ai/, a desktop app that runs AI on your own laptop, 100% offline and private. We support most popular AI models and are actively working on a roadmap that lets users customize and fine-tune these models to meet their specific needs.

 

We are a fully remote, open-source company. We target the global market but operate as a lean, bootstrapped team.


Job Description

Jan is looking for an LLM Engineer/Researcher to continue fine-tuning and training our own models.


Responsibilities

  • Train and fine-tune foundation LLMs (e.g. PEFT, LoRA, QLoRA, and the latest research techniques); a minimal fine-tuning sketch follows this list
  • Build and maintain LLM applications and infrastructure to meet business needs
  • Design LLM inference infrastructure to deploy models at scale within infrastructure constraints
  • Research and use best-in-class tools in the LLM ecosystem (e.g. vector databases, LlamaIndex)
  • Keep up with the latest LLM research (e.g. sparse models, hardware-specific LLMs)
  • Keep up with the latest LLM use cases (e.g. RAG, agents)
  • Collaborate closely with LLM research teams on foundation model research, specifically training productivity-focused LLMs
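
For illustration, here is a minimal QLoRA-style fine-tuning sketch of the kind this role involves, using Hugging Face transformers, peft, and bitsandbytes. The base model, dataset, and hyperparameters are placeholders for illustration only, not Jan's actual training setup.

```python
# Hypothetical QLoRA-style fine-tuning sketch (transformers + peft + bitsandbytes).
# The base model, dataset, and hyperparameters are illustrative placeholders.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token

# Load the frozen base model in 4-bit NF4 so it fits on a single GPU.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

# Attach small trainable LoRA adapters instead of updating all weights.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

# Tiny instruction-tuning subset, tokenized to a fixed length (placeholder dataset).
data = load_dataset("tatsu-lab/alpaca", split="train[:1000]")
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=2,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10, bf16=True),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("out/lora-adapter")  # saves only the LoRA adapter weights
```

Because only the LoRA adapters are trained and saved, checkpoints stay small and task-specific fine-tunes are cheap to ship alongside a single frozen base model.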


Requirements

  • Experience with LLMs, including popular foundation models such as Llama 2 and MPT
  • Experience training and fine-tuning foundation LLMs
  • Experience with quantization techniques and tooling, such as llama.cpp and GPTQ (see the sketch after this list)
  • Experience with LLM-related development, e.g. LlamaIndex, LangChain, vector databases, prompt engineering
  • [Plus, but not required] Experience running LLMs in production (e.g. Triton Inference Server)
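
As a rough illustration of the quantization side, the sketch below loads a 4-bit GGUF checkpoint with llama-cpp-python (the Python bindings for llama.cpp) and runs a prompt entirely locally. The model path, prompt, and parameters are assumptions for illustration.

```python
# Hypothetical sketch: running a 4-bit quantized GGUF checkpoint fully locally with
# llama-cpp-python (Python bindings for llama.cpp). Paths and parameters are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder quantized checkpoint
    n_ctx=2048,      # context window
    n_gpu_layers=0,  # 0 = CPU only; raise to offload transformer layers to a GPU
)

out = llm(
    "Q: In one sentence, what does LoRA fine-tuning do?\nA:",
    max_tokens=128,
    temperature=0.2,
    stop=["Q:", "\n\n"],
)
print(out["choices"][0]["text"].strip())
```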


Benefits

  • We pay an “all-in” salary; you cover your own insurance and medical costs from that amount
  • 14 days of leave (and unlimited sick days)
  • Annual equipment budget (after completing the 2-month probation period)

Compensation: Negotiable

Required profile

Soft Skills

  • Collaboration
  • Analytical Skills
