YO IT Consulting
Python Developer - AI Training
Argentina
Full-time · Remote Friendly · Engineering, Information Technology
Work Mode: Remote

Engagement Type: Independent Contractor

Schedule: Full-Time or Part-Time Contract

Language Requirement: Fluent English

Role Overview

We partner with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems.

This project focuses specifically on evaluating and improving how AI systems reason about code, generate programming solutions, and explain technical concepts across various complexity levels.

The role involves rigorous technical evaluation of AI-generated responses in coding and software engineering contexts.

What You’ll Do

Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness

Conduct fact-checking using trusted public sources and authoritative references

Conduct accuracy testing by executing code and validating outputs using appropriate tools (see the illustrative sketch after this list)

Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies

Assess code quality, readability, algorithmic soundness, and explanation quality

Ensure model responses align with expected conversational behavior and system guidelines

Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
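As context for the accuracy-testing responsibility above, the following is a minimal, hypothetical Python sketch of how an evaluator might execute a model-generated snippet and validate its outputs against expected results. The function and test cases are illustrative assumptions only, not part of the project's actual tooling.

# Illustrative sketch: run a model-generated function against (input, expected) pairs
# and report any mismatches. Names here are hypothetical examples.

def model_generated_solution(nums):
    # Example of code a model might produce: return the running maximum of a list.
    result, best = [], float("-inf")
    for n in nums:
        best = max(best, n)
        result.append(best)
    return result

def validate(candidate, test_cases):
    # Execute the candidate on each test case and collect failures as
    # (arguments, expected output, actual output) tuples.
    failures = []
    for args, expected in test_cases:
        actual = candidate(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

if __name__ == "__main__":
    cases = [
        (([1, 3, 2, 5],), [1, 3, 3, 5]),
        (([],), []),              # edge case: empty input
        (([-2, -5, -1],), [-2, -2, -1]),
    ]
    print(validate(model_generated_solution, cases) or "all cases passed")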

Who You Are

You hold a BS, MS, or PhD in Computer Science or a closely related field

You have significant real-world experience in software engineering or related technical roles

You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)

You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently

You have experience contributing to well-known open-source projects, including merged pull requests

You have significant experience using LLMs while coding and understand their strengths and failure modes

You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws

Nice-to-Have Specialties

Prior experience with RLHF, model evaluation, or data annotation work

Track record in competitive programming

Experience reviewing code in production environments

Familiarity with multiple programming paradigms or ecosystems

Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like

You identify incorrect logic, inefficiencies, missed edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions

Your feedback improves the correctness, robustness, and clarity of AI coding outputs

You deliver reproducible evaluation artifacts that strengthen model performance
