Weekday AI (YC W21)
AI Red-Teamer - Adversarial AI Testing English
Weekday AI (YC W21)United Kingdom1 day ago
Part-timeRemote FriendlyOther
This role is for one of our clients

Compensation: $50-$111 per hour

We are seeking AI Red-Teamers to help test and strengthen modern AI systems through adversarial evaluation. In this role, you will challenge AI models with carefully designed inputs to uncover weaknesses, surface vulnerabilities, and generate high-quality data that improves the safety, reliability, and robustness of conversational AI.

This work focuses on proactively identifying potential risks before they appear in real-world use. By systematically probing AI systems, you will help ensure they respond safely, accurately, and responsibly across a wide range of scenarios.

This role may include reviewing AI outputs that reference sensitive topics such as bias, misinformation, or harmful behaviors. All work is text-based, and participation in higher-sensitivity projects is optional and supported with clear guidelines and wellness resources.

Requirements

What You'll Do

  • Red-team AI models and agents by testing jailbreak attempts, prompt injections, misuse scenarios, and exploit strategies
  • Generate high-quality human evaluation data by annotating model failures, classifying vulnerabilities, and identifying systemic risks
  • Apply structured testing methodologies using taxonomies, benchmarks, and playbooks to ensure consistent evaluation
  • Document findings clearly and reproducibly, producing reports, datasets, and adversarial test cases that teams can act upon
  • Work across multiple projects, supporting different AI systems and evaluation objectives

Who You Are

  • You have prior red-teaming experience, such as adversarial AI testing, cybersecurity, or socio-technical risk analysis
  • You naturally think adversarially, exploring ways to push systems to their limits and uncover weaknesses
  • You prefer structured methodologies, using frameworks and benchmarks rather than ad-hoc testing
  • You communicate risks and vulnerabilities clearly to both technical and non-technical audiences
  • You are comfortable working across multiple projects and adapting to new evaluation challenges

Nice-to-Have Specialties

  • Adversarial Machine Learning: jailbreak datasets, prompt injection attacks, RLHF/DPO vulnerabilities, or model extraction techniques
  • Cybersecurity: penetration testing, exploit development, reverse engineering
  • Socio-technical risk analysis: harassment or misinformation testing, abuse pattern analysis
  • Creative adversarial thinking: backgrounds in psychology, acting, writing, or other disciplines that support unconventional attack strategies

What Success Looks Like

  • You uncover vulnerabilities and failure modes that automated tests miss
  • Your work produces reproducible artifacts and datasets that improve AI system resilience
  • Evaluation coverage expands with more realistic adversarial scenarios tested before deployment
  • AI systems become safer and more reliable due to your rigorous testing and insights

Why Join

  • Contribute directly to frontier work in AI safety and adversarial testing
  • Help improve the robustness, safety, and trustworthiness of modern AI systems
  • Gain hands-on experience working with human data-driven AI evaluation methodologies

Compensation may vary depending on the project, customer requirements, level of expertise, and content sensitivity involved in each engagement.

Contract and Payment Terms

  • Engagement will be as an independent contractor
  • This is a fully remote role that can be completed on your own schedule
  • Projects may be extended, shortened, or concluded early depending on project needs and performance
  • Work performed will not involve access to confidential or proprietary information from any employer, client, or institution
  • Payments are issued weekly via Stripe or Wise based on services rendered

Please note: Candidates requiring H1-B or STEM OPT sponsorship cannot be supported for this role at this time.

Key Skills

Ranked by relevance