The Mindrift platform, launched and powered by Toloka, connects domain experts with cutting-edge AI projects from innovative tech clients. Our mission is to unlock the potential of GenAI by tapping into real-world expertise from across the globe.
We're on the hunt for hands-on Python engineers for a new project focused on developing Model Context Protocol (MCP) servers and internal tools for running and evaluating agent behavior. You'll implement base methods for agent action verification, integrate with internal and client infrastructures, and help fill tooling gaps across the team.
Who we're looking for:
Calling all security researchers, engineers, and penetration testers with a strong foundation in problem-solving, offensive security, and AI-related risk assessment.
We're looking for someone who can bring a hands-on approach to technical challenges, whether breaking into systems to expose weaknesses or building secure tools and processes. We value contributors with a passion for continuous learning, experimentation, and adaptability.
What you'll be doing:
- Developing and maintaining MCP-compatible evaluation servers
- Implementing logic to check agent actions against scenario definitions
- Creating or extending tools that writers and QAs use to test agents
- Working closely with infrastructure engineers to ensure compatibility
- Occasionally helping with test writing or debugging sessions
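For illustration, the action-verification work described above can be sketched as matching the actions an agent emits against the ordered steps a scenario expects. This is only a minimal sketch; the `Scenario` and `verify_actions` names are hypothetical, not part of the project's actual codebase:

```python
# Hypothetical sketch: check an agent's observed actions against a
# scenario definition that lists the tool calls the agent must make, in order.
from dataclasses import dataclass


@dataclass
class Scenario:
    name: str
    expected_actions: list[str]  # ordered tool calls the agent must perform


def verify_actions(scenario: Scenario, observed: list[str]) -> tuple[bool, list[str]]:
    """Return (passed, missing), where `missing` lists the expected
    actions the agent never performed, preserving scenario order.
    Extra actions interleaved between expected ones are tolerated."""
    remaining = list(scenario.expected_actions)
    for action in observed:
        if remaining and action == remaining[0]:
            remaining.pop(0)
    return (not remaining, remaining)


scenario = Scenario("file-edit", ["open_file", "apply_patch", "run_tests"])
passed, missing = verify_actions(scenario, ["open_file", "apply_patch", "run_tests"])
```

In practice the check would run inside an MCP-compatible evaluation server and compare structured tool-call records rather than plain strings, but the core loop is the same.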
Requirements:
- 4+ years of Python development experience, ideally in backend or tools
- Solid experience building APIs, testing frameworks, or protocol-based interfaces
- Understanding of Docker, Linux CLI, and HTTP-based communication
- Ability to integrate new tools into existing infrastructures
- Familiarity with how LLM agents are prompted, executed, and evaluated
- Clear documentation and communication skills - you'll work with QA and writers
- Experience with Model Context Protocol (MCP) or similar structured agent-server interfaces
- Knowledge of FastAPI or similar async web frameworks
- Experience working with LLM logs, scoring functions, or sandbox environments
- Ability to support dev environments (devcontainers, CI configs, linters)
- JavaScript experience
What we offer:
- Get paid for your expertise, with rates of up to $47/hour depending on your skills, experience, and project needs
- Take part in a flexible, remote, freelance project that fits around your primary professional or academic commitments
- Participate in an advanced AI project and gain valuable experience to enhance your portfolio
- Influence how future AI models understand and communicate in your field of expertise
Ready to apply?
Join Mindrift and take your career to the next level!
Application takes less than 5 minutes