Crossing Hurdles
AI Red Team Analyst (LLM Safety & Adversarial Testing) | $28.74/hr, Remote
Brazil · Posted 2 days ago

Position: AI Red-Teamer — Adversarial AI Testing (Advanced) | English & Brazilian Portuguese

Type: Hourly contract (Full-time or Part-time)

Compensation: $28.74/hour

Location: Remote


Role Responsibilities

  • Red team conversational AI models and agents (jailbreaks, prompt injections, misuse cases, bias exploitation, multi-turn manipulation).
  • Generate high-quality human data by annotating failures, classifying vulnerabilities, and flagging systemic risks.
  • Apply structured taxonomies, benchmarks, and playbooks to ensure consistent adversarial testing.
  • Produce reproducible reports, datasets, and attack cases customers can act on.
  • Identify vulnerabilities missed by automated evaluation systems.


Requirements

  • Native-level fluency in English and Brazilian Portuguese (required).
  • Prior experience in AI red teaming, adversarial testing, cybersecurity, or socio-technical risk analysis.
  • Strong adversarial mindset with structured, methodical testing approaches.
  • Clear written communication for technical and non-technical stakeholders.
  • Comfortable reviewing sensitive AI-generated content (guidelines and wellness support provided).


Application process (takes about 20 minutes):

  • Upload resume
  • Interview (15 min)
  • Submit form
