Match score not available

AI Security Analyst at Mindgard

Remote: 
Full Remote
Contract: 
Work from: 

Offer summary

Qualifications:

Domain expert in application security., Experienced with security processes and tools., Comfortable writing vulnerability disclosures., Knowledge of AI security is preferred..

Key responsabilities:

  • Adding security intelligence for AI vulnerabilities.
  • Joining customer meetings to address their AI security concerns.
Mindgard logo
Mindgard
11 - 50 Employees
See more Mindgard offers

Job description

About Mindgard

Mindgard is a London-based startup specializing in AI security.

Our mission is to secure the future of AI against cyber attacks. We’ve spun-out from a leading UK university after a decade of R&D, and are among the first few companies globally to offer solutions to this rapidly growing problem.

The Role

We’re seeking an experienced Security Analyst who is passionate about helping security teams and developers keep AI powered systems secure. Prior AI experience is useful, but not essential.

Today’s software often benefits from AI components. AI also introduces new security risks to control. Security teams need visibility, help triaging, and assistance mitigating the new threats introduced by AI.

You will provide actionable analysis for AI vulnerabilities for information security professionals, including: simple explanations, severity analysis, threat models, proofs of concept for use in attacks, and mitigations.

As part of our collaborative and friendly AI Security R&D team, you’ll build the AI security intelligence and detection techniques that power Mindgard’s products. You’ll work closely with experts in AI security vulnerabilities and red teaming techniques.

What you will be doing:
  • Adding security intelligence for the latest AI security vulnerabilities to the Mindgard product.
  • Making cutting-edge AI security research actionable for security teams.
  • Spotting, validating, and triaging new emerging AI security threats from the community.
  • Developing proofs of concept for potential security vulnerabilities.
  • Responsibly disclosing vulnerabilities with AI vendors, builders, and the open source community.
  • Joining customer meetings to understand their AI security concerns.
  • Advising the product engineering team on security teams requirements and the AI security domain.
  • Writing, editing, and presenting content that helps the community respond to AI security threats.
  • Researching new AI security vulnerabilities and attack techniques.
We’re looking for people who are:
  • Kind, to collaborate effectively towards the highest quality outcomes.
  • Passionate about our mission to help security teams with AI security risks.
  • Curious, to deepen your understanding of AI security.
  • Pragmatic, helping our customers make the best security tradeoffs.
You’ll need to be:
  • A domain expert in the application security field, including common vulnerabilities such as XSS, SSRF, RCE, SQL Injection, Deserialization, etc.
  • Experienced with security team processes, practices such as threat modeling, and use of tooling such as SAST/DAST/SCA/CSPM/ASPM.
  • Comfortable writing vulnerabilities disclosures and crafting security exploits.
  • Familiar with responsible disclosure processes.
  • Capable of writing code and configuring systems to automate your work and produce proof of concepts.
You’ll stand out if you have:
  • Expertise in AI or AI security.
  • Worked in a SaaS product startup.

Required profile

Experience

Spoken language(s):
English
Check out the description to know which languages are mandatory.

Other Skills

  • Collaboration
  • Curiosity

Security Analyst Related jobs