GM2
AI Engineer
Argentina
Full-time · Information Technology
Responsibilities
  • Integrate Generative AI models, including LLMs, with external APIs, tools, and databases using secure and efficient orchestration patterns.

  • Design, develop, and deploy AI workflows and Agentic AI solutions, orchestrating intelligent agents that plan and perform tasks under autonomous and/or human-in-the-loop paradigms.

  • Implement and optimize multi-agent systems, leveraging standards and protocols such as Model Context Protocol (MCP) and emerging frameworks for agent interoperability.

  • Develop evaluation frameworks, metrics, and checkpoints for agent autonomy, performance, and safety, ensuring compliance with moderation, security, and ethical standards.

  • Ensure robust AI agent operations by applying observability, monitoring, and MLOps best practices, facilitating reliable deployment pipelines and continuous performance optimization.

  • Collaborate closely with data experts, coordinating AI model selection, tuning, and performance validation to meet the needs of specific agent-based applications.

  • Communicate complex AI concepts, systems, and decisions effectively to technical and non-technical stakeholders, promoting transparency and trust in AI delivery.




Requirements

  • Proven experience designing and deploying AI architectures, with expertise in Generative AI, NLP, LLM integration, and software engineering.

  • Strong background in building software platforms (Python/Django, Java/Spring, TypeScript/Express, etc.) capable of API integration and orchestration.

  • Strong understanding of the trade-offs between various generative AI models and the ability to choose the right model for specific use cases.

  • Hands-on experience with function calling and tool integration for LLMs, leveraging standards such as the Model Context Protocol (MCP).

  • Expertise in data embeddings, vector databases, and chunking strategies, with an understanding of the trade-offs between options and how to apply them to optimize data ingestion and application performance.

  • Experience using CI/CD tools (GitHub Actions, Jenkins, AWS CodeDeploy, Azure Pipelines) to streamline development and deployment workflows.

  • Hands-on experience deploying software and AI solutions on leading cloud platforms (AWS, Azure, or GCP) and utilizing managed AI services such as Amazon Bedrock and Azure AI Services.

  • Experience leveraging evaluation frameworks (e.g., RAGAS, OpenAI Evals) and tools (e.g., DeepEval, LangSmith, Braintrust) to assess the business and performance metrics of AI solutions.

  • Understanding of performance optimization, including the use of observability platforms, event tracking, and performance validation.

  • Excellent prompt and context engineering skills, applying the right techniques to meet diverse project requirements.

  • Ability to communicate complex AI solutions and concepts effectively to technical and non-technical stakeholders.
