The Company
Presight, an ADX-listed public company limited by shares whose majority shareholder is Abu Dhabi company G42, is the region’s leading big data analytics company powered by Artificial Intelligence (“AI”). It combines big data, analytics, and AI expertise to serve every sector, at every scale, creating business value and positive societal impact. With its world-class computer vision, AI, and omni-analytics platform as its engine, Presight excels at all-source data interpretation to support insight-driven decision-making that shapes policy and creates safer, healthier, happier, and more sustainable societies.
The Opportunity
We are seeking a highly skilled LLM Ops Engineer to lead the deployment, scaling, monitoring, and optimization of large language models (LLMs) across diverse environments. This role is critical to ensuring our machine learning systems are production-ready, high-performing, and resilient. The ideal candidate will have deep expertise in Python programming, a comprehensive understanding of LLM internals, and hands-on experience with various agentic frameworks, inference engines, and deployment strategies. This position offers the opportunity to work on cutting-edge AI technologies in a dynamic and collaborative environment.
Responsibilities
- Design, deploy, and scale LLM infrastructure across cloud and on-premises environments, including GPU clusters, containers, and orchestration with Kubernetes, ensuring high performance, reliability, and fault tolerance.
- Build and optimize inference pipelines for low-latency, high-throughput model serving using frameworks such as Triton Inference Server, vLLM, or TensorRT (see the sketch after this list).
- Manage CI/CD pipelines, AI microservices, embeddings storage, and MCP servers, ensuring secure, production-ready deployment of models and tool integrations.
- Deploy and maintain agentic AI frameworks (e.g., Dify, LangFlow) and LLM gateways to manage traffic, enforce audit/compliance controls, and integrate with IAM systems.
- Monitor performance, cost, and resource usage; implement optimization strategies for GPU, CPU, and storage efficiency while maintaining scalability and reliability.
- Conduct hardware sizing and capacity planning to meet current and projected LLM workload requirements.
- Collaborate with data scientists and engineers to operationalize models and workflows into production-grade systems.
- Develop and maintain documentation, runbooks, and deployment playbooks for knowledge sharing and operational consistency.
- Stay current on emerging LLM techniques, including quantization, distillation, distributed inference, and best practices for production deployments.
- Troubleshoot and resolve production issues, continuously improving infrastructure for stability, scalability, and maintainability.
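To make the inference-pipeline responsibility above concrete, here is a minimal sketch of offline batch generation with vLLM, one of the serving frameworks named in the list. The model name, prompts, and sampling parameters are illustrative placeholders, not details drawn from this posting.

```python
# Minimal sketch of offline batch inference with vLLM (assumes a GPU host
# with the `vllm` package installed; the model name is a placeholder).
from vllm import LLM, SamplingParams

# vLLM's continuous batching and paged attention are what make
# low-latency, high-throughput serving practical on a single engine.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # illustrative model

params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

prompts = [
    "Summarize the role of an LLM Ops engineer in one sentence.",
    "List three levers for reducing GPU serving costs.",
]

# generate() batches the prompts and returns one RequestOutput per prompt.
for output in llm.generate(prompts, params):
    print(output.prompt, "->", output.outputs[0].text.strip())
```

In production this path would typically sit behind vLLM's OpenAI-compatible HTTP server (or Triton) and the LLM gateway described above; the snippet shows only the core generation loop.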
Qualifications
- Bachelor’s or Master’s degree in computer science, machine learning, or related field, with 2+ years of experience in ML Ops, DevOps, or ML infrastructure, including production deployment of ML/LLM workloads.
- Strong Python and scripting skills, with experience in containerization, orchestration (Docker, Kubernetes, Helm), CI/CD pipelines, monitoring, and observability for ML systems.
- Expertise in GPU cluster management, distributed inference, high-performance model serving, and scalable, fault-tolerant architectures.
- Proficiency with cloud/hybrid environments (AWS, GCP, Azure, on-prem), and knowledge of security, access control, and compliance requirements.
- Experience deploying and maintaining agentic AI frameworks (e.g., Dify, LangFlow) and MCP servers for LLM-tool integration.
- Familiarity with LLM orchestration, RAG pipelines, API integrations, and distributed inference frameworks (e.g., Ray).
- Expertise in hardware sizing, capacity planning, cost optimization, and infrastructure-as-code tools (Terraform) for large-scale ML/LLM deployments.
- Hands-on experience with LLM optimization techniques (quantization, distillation, compression); a minimal quantization sketch follows this list.
- Understanding of compliance and governance standards (ISO, NIST) for operational AI systems.
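To ground the optimization qualification above, here is a minimal sketch of loading a causal LM with 4-bit quantization via Hugging Face transformers and bitsandbytes. This is one common quantization route (GPTQ and AWQ are alternatives), not a method prescribed by the posting, and the model name is a placeholder.

```python
# Minimal sketch: 4-bit quantized model loading with bitsandbytes via
# transformers (assumes `transformers`, `accelerate`, `bitsandbytes`,
# and a CUDA GPU; the model name is illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights as 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16 for quality
)

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # let accelerate place layers on available GPUs
)

inputs = tokenizer("Why quantize an LLM for serving?", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Quantizing to 4-bit roughly quarters weight memory relative to fp16, which is often the difference between fitting a model on a single GPU and needing tensor parallelism.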
If you are a performance-driven, inquisitive mind with the agility to adapt to ambiguity, you will fit right in. You should be eager to explore opportunities to build meaningful collaborations with stakeholders and aspire to create unique customer-centric solutions. Bias for action and a passion to conquer new frontiers are at the heart of the Presight community.
Ready to apply?
Join Presight and take your career to the next level!

