Title:
AI Red Teaming Engineer
Job Type:
Contract
Contract Length:
12 Months
Pay Range:
$50/hr – $175/hr
Start Date:
ASAP
Location:
Remote
About the Opportunity:
Our client, a leader in AI testing and Generative AI solutions, is looking for a skilled AI Red Teaming Engineer
to join their team for a 12-month engagement. This project involves performing adversarial attacks and vulnerability testing to identify security risks and logic holes in internal AI models and agent systems. This is a high-impact role that requires a self-motivated professional who can hit the ground running and deliver results quickly.
Key Responsibilities & Deliverables:
This role is focused on the successful completion of specific tasks and deliverables. Your responsibilities will include:
- Performing adversarial attacks (prompt injection, jailbreaking) to identify security vulnerabilities in internal AI models.
- Developing automated probing scripts to test model safety guardrails at scale.
- Mapping and reporting potential "logic holes" where AI agents could be tricked into performing unauthorized actions.
- Collaborating with security teams to implement hardening strategies against data poisoning and model inversion.
- Creating "red team reports" that prioritize risks for engineering teams to remediate.
We are looking for someone with a proven track record of successful contract engagements. The ideal candidate will have:
- 4+ years of experience in Cybersecurity or Penetration Testing.
- Deep expertise in LLM security risks (OWASP Top 10 for LLMs) and prompt engineering. This isn't a learning role—you need to be a subject matter expert.
- Demonstrated ability to work autonomously and manage your own time effectively to meet project goals.
- Experience with Python scripting, Linux, and cloud security protocols.
- Strong communication skills to provide clear and concise status updates to the project team.
#LI-MM
#LI-CR





