Track This Job
Add this job to your tracking list to:
- Monitor application status and updates
- Change status (Applied, Interview, Offer, etc.)
- Add personal notes and comments
- Set reminders for follow-ups
- Track your entire application journey
Save This Job
Add this job to your saved collection to:
- Access easily from your saved jobs dashboard
- Review job details later without searching again
- Compare with other saved opportunities
- Keep a collection of interesting positions
- Receive notifications about saved jobs before they expire
AI-Powered Job Summary
Get a concise overview of key job requirements, responsibilities, and qualifications in seconds.
Pro Tip: Use this feature to quickly decide if a job matches your skills before reading the full description.
We are building a dedicated AI Red Team to rigorously test and harden enterprise-scale AI products.
We are looking for an adversarial machine learning specialist who thinks like an attacker.
This role focuses on identifying vulnerabilities in LLM-driven systems, breaking model guardrails, exploiting data pathways, and stress-testing AI deployments before they reach enterprise customers.
This is a hands-on technical role at the core of AI security.
What You'll Do
- Conduct adversarial testing across LLM and AI-based systems
- Execute real-world attack simulations, including:
- Prompt injection
- Jailbreaking and guardrail bypass
- Data exfiltration attempts
- Model inversion and evasion techniques
- RAG manipulation
- Develop scripts and tooling to automate attack scenarios
- Analyse model behaviour under adversarial pressure
- Identify systemic vulnerabilities in:
- APIs
- Embedding pipelines
- Vector databases
- Fine-tuned model implementations
- Collaborate with engineering teams to validate remediation
- Document findings clearly and concisely
Requirements
What We're Looking For
Core Technical Skills
- Strong experience in adversarial ML or AI security research
- Experience working with LLM-based systems (OpenAI, Anthropic, open-source models, etc.)
- Deep understanding of:
- Prompt injection techniques
- Model jailbreak methodologies
- AI system exploitation vectors
- Strong Python skills
- Experience building custom attack tooling or experimentation frameworks
- Familiarity with:
- RAG architectures
- Vector databases
- Model fine-tuning workflows
- API-based model deployments
- Understanding of model safety mechanisms and guardrails
- Background in cybersecurity or penetration testing
- Familiarity with OWASP LLM Top 10
- Experience working in enterprise environments
- Curious and relentless
- Comfortable thinking like an attacker
- Creative in finding non-obvious vulnerabilities
- Detail-oriented but fast-moving
- Comfortable operating in ambiguity
- Independent but collaborative
Benefits
- Comprehensive Private Medical Coverage
- Support for Mental Health Expenses
- Life Insurance Options
- Attractive Compensation Package
Key Skills
Ranked by relevanceReady to apply?
Join C-Serv and take your career to the next level!
Application takes less than 5 minutes

