Machine Learning Engineer
On-site | Abu Dhabi
AppliedAI is a pioneering AI technology company headquartered in Abu Dhabi, committed to innovation and excellence in artificial intelligence solutions for regulated industries such as healthcare, insurance, government, and financial services.
Opus is the world's first Knowledge Work AI platform. Built by AppliedAI, it pioneers Supervised Automation, a human-in-the-loop model in which AI handles repetitive, structured tasks while human experts provide crucial oversight at defined intervals.
The platform uses its proprietary Large Work Model to generate and orchestrate outcome-based workflows, enabling a dramatic reduction in the cost of knowledge work and allowing human talent to focus on high-value, creative, and judgement-intensive activities.
Role Overview:
As an Opus ML Engineer, you will contribute to the development, optimization, and reliability of Opus’s AI infrastructure.
You’ll work in a cross-functional UAE-based team to build, maintain, and improve the systems that power Opus’s AI features, from inference pipelines and workflow orchestration to prompt alignment and agentic integrations.
Your focus will be on implementing clean, efficient, and observable systems that ensure Opus’s AI features deliver consistent, high-quality performance for users.
Key Responsibilities:
- Develop and maintain AI inference and orchestration pipelines used in production.
- Optimize model-serving performance, latency, and reliability across live systems.
- Implement observability, logging, and monitoring for AI components.
- Contribute to prompt and alignment improvements to enhance response quality and consistency.
- Refactor existing code for modularity, clarity, and maintainability.
- Collaborate with senior engineers and product teams to deliver stable, user-facing AI features.
- Participate in design reviews, testing, and rollout of new systems and improvements.
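The responsibilities above centre on keeping production inference observable and reliable. As a purely illustrative sketch (not AppliedAI's actual stack; the function and model names are hypothetical), wrapping a model call with latency measurement and structured logging might look like:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("inference")

def observable_call(model_fn, payload, model_name="example-model"):
    """Invoke a model function, logging status and latency for monitoring."""
    start = time.perf_counter()
    try:
        result = model_fn(payload)
        status = "ok"
        return result
    except Exception:
        status = "error"
        raise
    finally:
        latency_ms = (time.perf_counter() - start) * 1000
        # Structured key=value log line, easy to parse in a log aggregator.
        logger.info("model=%s status=%s latency_ms=%.1f",
                    model_name, status, latency_ms)

# Usage with a stand-in model function:
echo = lambda p: {"output": p.upper()}
print(observable_call(echo, "hello")["output"])  # HELLO
```

In a real deployment the log line would typically be replaced or supplemented by metrics emitted to an observability backend, but the pattern of instrumenting every model call at a single chokepoint is the same.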
Qualifications:
- 2–4 years of experience in ML or backend engineering.
- 1+ years working on LLM prompt design, model behavior shaping, or symbolic product-logic integration into language systems.
- Strong Python programming skills and understanding of production ML systems.
- Familiarity with orchestration tools (Airflow, Celery, Kubernetes) and cloud environments (AWS, GCP, Azure).
- Experience deploying and monitoring inference pipelines (e.g. LLMs, CV models, or similar APIs).
- Knowledge of observability tools and performance debugging.
Why join AppliedAI:
- Opportunity to work at a pioneering AI technology company.
- Collaborative and innovative work environment.
- Growing, entrepreneurial and forward-thinking culture.
- Career growth and professional development opportunities.
- Exposure to a thriving ecosystem working from our Abu Dhabi HQ.
Ready to apply?
Join AppliedAI and take your career to the next level!