Cast AI is the leading Application Performance Automation (APA) platform, enabling customers to cut cloud costs, improve performance, and boost productivity – automatically.
Built originally for Kubernetes, Cast AI goes beyond cost and observability by delivering real-time, autonomous optimization across any cloud environment. The platform continuously analyzes workloads, rightsizes resources, and rebalances clusters without manual intervention, ensuring applications run faster, more reliably, and more efficiently.
Headquartered in Miami, Florida, Cast AI has employees in more than 32 countries and supports some of the world’s most innovative teams running their applications on all major cloud, hybrid, and on-premises environments. Over 2,100 companies already rely on Cast AI - from BMW and Akamai to Hugging Face and NielsenIQ.
What’s next? Backed by our $108M Series C, we’re doubling down on making APA the new standard for DevOps and MLOps, and everything in between.
About the role
In the AI Enabler team, our days are full of R&D challenges. Have you ever needed to expand your AI infrastructure so that applications can automatically pick the large language models (LLMs) that are both more cost-efficient and better performing? Most of us have by now, or at least understand the complexity of making such decisions while keeping an eye on the cloud budget.
Responsibilities
One of the team's responsibilities is ensuring that whenever a customer makes AI-related decisions about their K8s infrastructure, those decisions are implemented automatically, without unnecessary cost or hassle. This is just one small piece of a bigger puzzle. For a more detailed perspective, ask yourself the following questions:
- How often do you use LLMs?
- What is the least expensive LLM you can pick for a given prompt without degrading the quality of the response?
- How much do your applications cost per 1 million tokens, and how can you improve that? (See the cost sketch after this list.)
- Which API keys have the biggest waste?
- How can you improve your frequently running prompt to use fewer tokens?
- What is fine-tuning, and how do you do it efficiently?
- What is a transformer?
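To make the cost-per-token question above concrete, here is a minimal Python sketch of the per-request arithmetic. The prices and token counts are made up for illustration and are not Cast AI figures.

```python
def request_cost(prompt_tokens: int, completion_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """Cost of one LLM call given per-million-token prices (illustrative)."""
    return (prompt_tokens * in_price_per_m
            + completion_tokens * out_price_per_m) / 1_000_000

# Hypothetical prices: $0.50 per 1M input tokens, $1.50 per 1M output tokens.
print(f"${request_cost(1200, 300, 0.50, 1.50):.6f}")  # -> $0.001050
```

Choosing the least expensive LLM that still answers well is essentially this arithmetic applied across candidate models.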
Being part of this team involves end-to-end design and decision-making in collaboration with colleagues from other teams. Cast AI, being a technical product, encourages you not only to code what's written in the JIRA ticket but also to come up with new features and potential solutions to customers' problems. Since the team is working on a technical greenfield project, you will have the opportunity to positively shape it in many ways.
Here are some of the tools we use daily
- Python
- vLLM, SGLang, TensorRT, PyTorch (see the minimal vLLM sketch after this list)
- ClickHouse and PostgreSQL for persistence
- GCP Pub/Sub for messaging
- gRPC for internal communication
- REST for public APIs
- Kubernetes, which our product revolves around
- AWS, GCP, and Azure, the cloud providers our platform currently supports
- GitLab CI, with ArgoCD as our GitOps CD engine
- Prometheus, Grafana, Loki, and Tempo for observability
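As a flavor of the stack, here is a minimal offline-inference sketch using vLLM's public API; the model name is a small placeholder, not necessarily one the team runs.

```python
from vllm import LLM, SamplingParams

# Placeholder model, chosen only because it is small; swap in any HF model.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate a completion for a single prompt.
outputs = llm.generate(["Explain Kubernetes in one sentence."], params)
print(outputs[0].outputs[0].text)
```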
Requirements
- 5+ years of hands-on experience in Data Science and Machine Learning, with a proven track record demonstrated through a robust portfolio of projects.
- Strong software engineering skills in Python.
- Ability to move fast in an environment where things are sometimes loosely defined and priorities or deadlines may compete.
- Expertise in ML inference optimizations (see the quantization sketch after this list), including techniques such as:
  - Reducing initialization time and memory requirements;
  - Utilizing reduced precision and weight quantization;
  - Inference engine tuning (vLLM, SGLang, TensorRT).
- Knowledge of network optimization for distributed ML training and inference.
- Understanding of distributed training patterns and checkpointing strategies (a checkpointing sketch appears further below).
- You must be physically located in a European country within GMT 0 to GMT+3.
- Strong English skills.
- Strong verbal and written communication skills.
- Ability to work independently and collaborate in a group.
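For the quantization bullet above, here is a minimal sketch of post-training dynamic quantization in PyTorch, with a toy model standing in for a real network:

```python
import torch
import torch.nn as nn

# Toy model used purely for illustration.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))

# Dynamic quantization: Linear weights are stored as int8 and
# dequantized on the fly at inference time, cutting memory use
# and often latency on CPU.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
print(quantized(x).shape)  # torch.Size([1, 10])
```

Inference engines such as vLLM and TensorRT expose analogous knobs (reduced-precision dtypes, quantized weight formats) at serving scale.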
What you'll do
- Evaluate and analyze LLM performance.
- Architect and build inference and training pipelines, contributing hands-on to design, model training pipelines, and deployment strategies.
- Stay up to date with industry trends.
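And for the checkpointing strategies mentioned in the requirements, a minimal save/resume sketch in PyTorch; the checkpoint field names are illustrative, not a fixed format:

```python
import torch

def save_checkpoint(model, optimizer, step, path):
    # Persist model and optimizer state together so training
    # can resume exactly where it stopped.
    torch.save({
        "step": step,
        "model_state": model.state_dict(),
        "optimizer_state": optimizer.state_dict(),
    }, path)

def load_checkpoint(model, optimizer, path):
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model_state"])
    optimizer.load_state_dict(ckpt["optimizer_state"])
    return ckpt["step"]
```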
What we offer
- Competitive salary (€6,500 - €9,000 gross, depending on experience).
- Enjoy a flexible, remote-first global environment.
- Collaborate with a global team of cloud experts and innovators passionate about pushing the boundaries of Kubernetes technology.
- Equity options.
- Private health insurance.
- Get quick feedback with a fast-paced workflow. Most feature projects are completed in 1 to 4 weeks.
- Spend 10% of your work time on personal projects or self-improvement.
- Learning budget for professional and personal development - including access to international conferences and courses that elevate your skills.
- Annual hackathon to spark new ideas and strengthen team bonds.
- Team-building budget and company events to connect with your colleagues.
- Equipment budget to ensure you have everything you need.
- Extra days off to help maintain a healthy work-life balance.
Ready to apply?
Join Cast AI and take your career to the next level!