What You'll Be Doing
- Design, build, and maintain our MLOps infrastructure, establishing best practices for CI/CD for machine learning, including model testing, versioning, and deployment
- Develop and manage scalable and automated pipelines for training, evaluating, and deploying machine learning models, with a specific focus on LLM-based systems
- Implement robust monitoring and logging for models in production to track performance, drift, and data quality, ensuring system reliability and uptime
- Collaborate with Data Scientists to containerize and productionize models and algorithms, including those involving RAG and Graph RAG approaches
- Manage and optimize our cloud infrastructure for ML workloads on platforms like Amazon Bedrock or similar, focusing on performance, cost-effectiveness, and scalability
- Automate the provisioning of ML infrastructure using Infrastructure as Code (IaC) principles and tools
- Work closely with product and engineering teams to integrate ML models into our production environment and ensure seamless operation within the broader product architecture
- Own the operational aspects of the AI lifecycle, from model deployment and A/B testing to incident response and continuous improvement of production systems
- Contribute to our AI strategy and roadmap by providing expertise on the operational feasibility and scalability of proposed AI features
- Collaborate closely with Principal Data Scientists and Principal Engineers to ensure that the MLOps framework supports the full scope of AI workflows and model interaction layers
We've moved past experimentation. We have live AI features and a strong pipeline of customers eager for access to more AI-powered workflows. Our focus is on delivering real, valuable AI-powered features to customers, and doing it responsibly. You'll be part of a team that owns the entire lifecycle of these systems, and your role is critical to ensuring they are not just innovative, but also stable, scalable, and performant in the hands of our users.
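The production-monitoring responsibility above (tracking performance, drift, and data quality) can be illustrated with a minimal, self-contained sketch. The Population Stability Index (PSI) metric, the bin count, and the usual "PSI < 0.1 means no meaningful drift" reading are illustrative assumptions for this example, not a description of Opus 2's actual stack.

```python
# Illustrative only: a toy drift check of the kind an MLOps engineer might
# run over a feature's training vs. live distributions. Real systems would
# typically use a monitoring platform rather than hand-rolled PSI.
import math

def population_stability_index(expected, actual, bins=10):
    """Compare two samples of a numeric feature via PSI.
    By a common rule of thumb, PSI < 0.1 suggests no meaningful drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    psi = 0.0
    for i in range(bins):
        left = lo + i * width
        # Nudge the last bin's edge so the maximum value is included.
        right = left + width if i < bins - 1 else hi + 1e-9
        # Clamp empty bins to a tiny fraction to keep the log finite.
        e = sum(left <= x < right for x in expected) / len(expected) or 1e-6
        a = sum(left <= x < right for x in actual) / len(actual) or 1e-6
        psi += (a - e) * math.log(a / e)
    return psi
```

Identical distributions score zero; a shifted live distribution scores well above the 0.1 alerting threshold, which is the signal a production monitor would page on.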
Requirements
What we're looking for in you
- You are a practical and automation-driven engineer. You think in terms of reliability, scalability, and efficiency
- You have hands-on experience building and managing CI/CD pipelines for machine learning
- You're comfortable writing production-quality code, reviewing PRs, and are dedicated to delivering a reliable and observable production environment
- You are passionate about MLOps and have a proven track record of implementing MLOps best practices in a production setting
- You're curious about the unique operational challenges of LLMs and want to build robust systems to support them
- Experience with model lifecycle management and experiment tracking
- Ability to reason about and implement infrastructure for complex AI systems, including those leveraging vector stores and graph databases
- Proven ability to ensure the performance and reliability of systems over time
- 3+ years of experience in an MLOps, DevOps, or Software Engineering role with a focus on machine learning infrastructure
- Proficiency in Python, with experience in building and maintaining infrastructure and automation, not just analyses
- Experience working in Java or TypeScript environments is beneficial
- Deep experience with at least one major cloud provider (AWS, GCP, Azure) and their ML services (e.g., SageMaker, Vertex AI). Experience with Amazon Bedrock is a significant plus
- Strong familiarity with containerization (Docker) and orchestration (Kubernetes)
- Experience with Infrastructure as Code (e.g., Terraform, CloudFormation)
- Experience in deploying and managing LLM-powered features in production environments
- Bonus: experience with monitoring tools (e.g., Prometheus, Grafana), agent orchestration, or legaltech domain knowledge
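Several of the requirements above touch on model lifecycle management and versioning. As a rough sketch of what that involves, here is a toy in-memory registry with staging/production/archived stages; in practice this would be a managed service (e.g. SageMaker Model Registry or MLflow), and all class and field names here are assumptions for illustration.

```python
# Illustrative sketch of model lifecycle states: register new versions,
# promote one to production, and archive whatever it replaces (which also
# gives you a rollback path). Not a real registry implementation.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class ModelVersion:
    name: str
    version: int
    metrics: dict
    stage: str = "staging"   # staging -> production -> archived

class ModelRegistry:
    def __init__(self):
        self._versions: dict[str, list[ModelVersion]] = {}

    def register(self, name: str, metrics: dict) -> ModelVersion:
        """Record a new version of a model, starting in staging."""
        versions = self._versions.setdefault(name, [])
        mv = ModelVersion(name, len(versions) + 1, metrics)
        versions.append(mv)
        return mv

    def promote(self, name: str, version: int) -> None:
        """Move one version to production, archiving the previous one."""
        for mv in self._versions[name]:
            if mv.stage == "production":
                mv.stage = "archived"
        self._versions[name][version - 1].stage = "production"

    def production(self, name: str) -> ModelVersion | None:
        """Return the version currently serving traffic, if any."""
        return next((m for m in self._versions[name]
                     if m.stage == "production"), None)
```

Rolling back is just promoting an earlier version again; the registry keeps the full history, which is what makes auditing and incident response tractable.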
Working for Opus 2
Opus 2 is a global leader in legal software and services, and a trusted partner of the world's leading legal teams. Our achievements are underpinned by a unique culture in which our people are our most valuable asset. Working at Opus 2, you'll receive:
- Contributory pension plan
- 26 days' annual holiday, hybrid working, and additional entitlement based on length of service
- Health Insurance
- Loyalty Share Scheme
- Enhanced Maternity and Paternity
- Employee Assistance Programme
- Electric Vehicle Salary Sacrifice
- Cycle to Work Scheme
- Calm and Mindfulness sessions
- A day of leave to volunteer for charity or dependent cover
- Accessible and modern office space and regular company social events
Ready to apply?
Join Opus 2 and take your career to the next level!
Application takes less than 5 minutes