Senior Machine Learning Engineer

Remote: Full Remote
Contract: 

Offer summary

Qualifications:

  • 4+ years of experience building and operating backend systems in production
  • Strong proficiency with Python, FastAPI, and Pydantic
  • Solid understanding of microservice architecture and scalable distributed systems
  • Hands-on experience with prompt engineering in production

Key responsibilities:

  • Build and own LLM-powered backend services, ensuring scalability and observability
  • Design infrastructure for rapid experimentation with LLMs, including A/B testing and usage analytics
  • Integrate and maintain LLM observability tooling to monitor model performance
  • Collaborate with Data and ML Scientists to productionize workflows and improve experimentation speed

Lokalise · Professional Services · Scaleup · https://lokalise.com/
201 - 500 Employees

Job description

Who we are

At Lokalise, we make it easy and profitable for businesses to expand into new markets. Founded in 2017, our AI-powered translation and localization platform automates workflows, integrates with over 60 tools, and helps product teams launch multilingual products 10x faster and at 80% lower cost. Trusted by thousands of businesses across over 100 countries, Lokalise is empowering more than 25 million people worldwide to use diverse services in their native languages. Backed by a customer-loved support team, our platform seamlessly fits into your design and development processes, helping you scale effortlessly.

Location

While our company operates exclusively on a remote basis, you must reside and have the legal right to work in one of the following countries: the United Kingdom, Latvia, Spain, Germany, Denmark, Poland, Portugal, or Ireland.

 

About

We’re looking for a Senior Machine Learning Engineer to join our growing AI team. You’ll be the technical owner of the systems that power LLM-based localization features — designing reliable, scalable, and observable services from the ground up. You’ll also partner across disciplines to support ML operations and data infrastructure — enabling experimentation and continuous improvement.

This is a role for someone who thrives in complex systems, has a drive for engineering excellence, and enjoys supporting others to succeed.

 
You will
  • Build and own LLM-powered backend services using FastAPI, Pydantic, etc., ensuring they are scalable, observable, and easy to extend
  • Design infrastructure that enables rapid experimentation with LLMs, including A/B testing, feature flagging, and usage analytics
  • Integrate and maintain LLM observability tooling (e.g., LiteLLM, Langfuse) to monitor quality, cost, and performance of model calls
  • Collaborate with Data and ML Scientists to productionize workflows, share feedback, and continuously improve experimentation speed
  • Ensure systems are reliable and deployable with strong CI/CD practices, including instrumentation and alerting
  • Contribute to team culture through pairing, mentoring, and sharing learnings to help others grow
  • Stay connected to the product by understanding our users' needs, localization workflows, and broader industry trends

 

You must have
  • 4+ years of experience building and operating backend systems in production
  • Strong proficiency with Python, FastAPI, and Pydantic
  • Solid understanding of microservice architecture, scalable distributed systems, and observability, plus familiarity with tools such as OpenTelemetry, Grafana, or Datadog for monitoring both general system health and LLM-specific metrics like latency, token usage, and model performance
  • Hands-on experience with prompt engineering in production
  • Familiarity with CI/CD, containerisation, and cloud deployment
  • Strong sense of ownership
 
It will be considered a significant advantage if you bring 

(These are not required but will help you hit the ground running.)

  • Experience with LangGraph and multi-modal LLMs, and familiarity with tools for LLM integration and observability (e.g., LiteLLM, Langfuse, PromptLayer, or WhyLabs)
  • Experience with Lightdash, Snowflake, or modern BI tooling
  • Working knowledge of TypeScript, particularly for building or maintaining backend services (e.g., Node.js or serverless APIs)
  • Understanding of translation quality metrics (BLEU, chrF, COMET, MQM, METEOR)
  • Previous experience with MLOps principles and tooling (e.g. MLflow, Kedro) and/or ML platforms (e.g. SageMaker, Vertex AI)

 

Our Benefits
  • Competitive salary and employee stock options plan
  • Fully remote and flexible working hours 
  • Co-working budget
  • Flexible vacation policy
  • Equipment budget to set up your home office
  • Learning & Development program
  • Health insurance
  • Wellness benefits
  • Great startup atmosphere, team spirit, and team events

 

We are committed to a culture of inclusion and equal opportunities. Therefore, we welcome applications from people of all gender identities, sexual orientations, personal expressions, relationship, marital, or civil partnership statuses, racial identities, national or ethnic origins, religious beliefs, ages, and disability statuses. 

Required profile

Experience

Industry: Professional Services
Spoken language(s): English
