The Opportunity
AI is rapidly transforming the world. Whether it’s developing the next generation of human-level intelligence, enhancing voice assistants, or enabling researchers to analyze genetic markers at scale, AI is increasingly integrated into various aspects of our daily lives.
Arize is the leading AI observability and evaluation platform, helping AI teams discover issues, diagnose problems, and improve the results of their AI applications. We are here to build world-class software that makes AI applications work better.
We’re looking for an Open Source AI Engineer to join our growing OSS team to drive the development of new frameworks, metrics, and tooling that help people build, test, and improve LLM tasks. You’ll play a lead role in shaping how developers measure and understand performance in advanced AI systems, all in the open.
What You’ll Work On
- Build LLM Eval Frameworks: Design, architect, and open-source new libraries, pipelines, and APIs that make it simpler to evaluate LLM output quality, consistency, and reliability at scale.
- Define Metrics and Benchmarks: Curate golden datasets and develop robust benchmarked metrics that guide data scientists and AI practitioners in optimizing their AI tasks.
- Collaborate with the Community: Partner closely with the broader AI open source ecosystem, gather feedback, review pull requests, and steer the direction of the project to address real developer needs.
- Prototype and Iterate Rapidly: Experiment with state-of-the-art LLM techniques, turning research into practical developer tooling.
- Improve Observability and Debugging: Integrate with our existing platform to surface deeper insights on LLM behavior—help teams quickly diagnose and fix issues such as hallucinations or bias.
- Educate and Evangelize: Write blog posts, white papers, tutorials, and documentation to help developers succeed with our open source tools and grow the LLM eval community.
What We’re Looking For
We’re looking for an engineer who’s deeply passionate about AI, loves working in the open, and thrives in a fast-paced environment where “everyone wears multiple hats.” You likely share our core values:
- Open Source Champion: You believe collaboration and community-driven development unlock the best innovations.
- Creative Problem Solver: You enjoy tackling ambiguous challenges and finding elegant technical solutions.
- Data & Metrics Driven: You value empirical results, enjoy creating or refining evaluation metrics, and iterate based on real-world feedback.
- Technically Curious: You’re always learning—exploring new LLM architectures, prompt engineering strategies, or emerging library standards.
- Builder Mindset: You relish the process of taking ideas from initial prototypes to production-ready solutions that delight users.
Desired Skills & Experience
- Hands-on LLM Experience: Familiarity with popular LLM frameworks, prompt engineering techniques, and model fine-tuning.
- Strong Programming Skills: Fluent in Python for AI workflows; bonus if you can navigate TypeScript as well.
- Evaluation Knowledge: Understanding of core NLP evaluation methods and experience applying or extending them for LLM systems.
- Open Source Track Record: Contributions to open source projects, personal GitHub repos with interesting AI demos, or a history of active engagement in developer communities.
- ML Observability & Tools: Familiarity with debugging AI applications, exploring embeddings, or building data-heavy dashboards is a plus.
Why Work With Us
- Shape the Future of AI Evaluation: Be at the forefront of designing new ways to measure and improve next-generation LLMs.
- High Impact, Real Ownership: Join a team that values autonomy and speed. You’ll drive major initiatives from day one and see your work used by developers worldwide.
- Fully Remote, Flexible Environment: We are a fully remote company with offices in the Bay Area and NYC for those who prefer in-person collaboration.
- Cutting-Edge Challenges: Our platform already helps analyze millions of AI predictions daily, giving you the chance to refine your evaluation tooling on real, large-scale production workloads.
- Work With a Talented, Passionate Team: Collaborate closely with top engineers who are dedicated to making AI more transparent, reliable, and impactful.
The estimated annual salary and variable compensation for this role is between $150,000 and $185,000, plus a competitive equity package. Actual compensation is determined based on a variety of job-related factors, which may include transferable work experience, skill sets, and qualifications. Total compensation also includes a comprehensive benefits package: medical, dental, and vision coverage, a 401(k) plan, unlimited paid time off, a generous parental leave plan, and additional mental health and wellness support.
While we are a remote-first company, we have opened offices in New York City and the San Francisco Bay Area as an option for employees in those cities who prefer to work in person. All other employees receive a monthly work-from-home stipend that can be used for co-working spaces.
More About Arize
Arize’s mission is to make the world’s AI work and work for the people. Our founders came together through a common frustration: investments in AI are growing rapidly across businesses and organizations of all types, yet it is incredibly difficult to understand why a machine learning model behaves the way it does after it is deployed into the real world.
Learn more about Arize in an interview with our founders: https://www.forbes.com/sites/frederickdaso/2020/09/01/arize-ai-helps-us-understand-how-ai-works/#322488d7753c
Diversity & Inclusion @ Arize
Our company's mission is to make AI work and make AI work for the people. We hope to make an impact on bias industry-wide, and that's a big motivator for many who work here. We actively encourage individuals to contribute to an inclusive culture:
- We regularly host chats with industry experts, researchers, and ethicists across the ecosystem to advance the use of responsible AI
- We hold culturally conscious events, such as LGBTQ+ trivia during Pride Month
- We have an active Lady Arizers subgroup