Company Details
FlexAI is a Paris-based AI infrastructure company revolutionizing how artificial intelligence workloads are built, deployed, and scaled. Founded by industry veterans from Apple, NVIDIA, Intel, and Tesla, FlexAI aims to eliminate the infrastructure challenges faced by AI teams, allowing them to focus on innovation instead of managing compute resources.
The company provides a Workload-as-a-Service (WaaS) platform that enables seamless deployment and scaling of AI workloads across any cloud, any architecture, and any hardware. By abstracting away the complexity of infrastructure, FlexAI ensures faster job launches, higher GPU utilization (90%+), and significant cost savings compared to traditional cloud setups.
With its heterogeneous compute orchestration layer, FlexAI empowers teams to train, fine-tune, and run models without worrying about compatibility or vendor lock-in. The platform delivers enterprise-grade performance with the flexibility to move workloads between providers and hardware effortlessly.
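FlexAI's own interfaces are not shown in this posting, but the idea of running the same model code across different accelerators can be illustrated with a minimal, framework-level PyTorch sketch. Everything below (the toy model, data, and device choices) is a placeholder assumption, not FlexAI's API:

```python
# Minimal, hypothetical sketch (not FlexAI's API): the same PyTorch training
# step runs unchanged on whichever accelerator the host provides.
import torch
import torch.nn as nn

def pick_device() -> torch.device:
    """Select the best available backend without hard-coding a vendor."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()

# Placeholder model and batch; a real workload would load its own checkpoint.
model = nn.Linear(128, 10).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 128, device=device)
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"device={device}, loss={loss.item():.4f}")
```

The point of the sketch is that the training step itself never names a vendor; the orchestration layer, not the model code, decides where the workload runs.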
Backed by $30 million in seed funding from leading investors like Alpha Intelligence Capital, Elaia Partners, and Heartcore Capital, FlexAI is rapidly building the foundation for the next generation of universal AI compute.
In a world where AI innovation often outpaces infrastructure capability, FlexAI bridges the gap — offering a unified, scalable, and efficient way to power the future of artificial intelligence.
Job Roles & Responsibilities
- Manage and optimize AI/ML workloads using TensorFlow and PyTorch.
- Ensure efficient deployment on FlexAI's Workload-as-a-Service platform.
- Oversee runtime performance to achieve high GPU utilization (see the monitoring sketch after this list).
- Collaborate with teams to resolve infrastructure challenges.
- Monitor and maintain AI workload scalability across architectures.
- Adapt AI models to different hardware environments seamlessly.
- Implement enterprise-grade performance solutions for AI applications.
- Facilitate smooth workload transitions between cloud providers.
- Identify and mitigate risks related to AI infrastructure management.
- Provide technical expertise in AI workload optimization and deployment strategies.
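As a rough illustration of the GPU-utilization responsibility above, here is a small monitoring sketch. FlexAI's internal tooling is not described in this posting, so the example uses the generic NVIDIA Management Library bindings (pynvml, an assumed dependency) to sample per-GPU compute and memory utilization:

```python
# Hypothetical monitoring sketch (pynvml is an assumed dependency, not part of
# this posting): samples per-GPU compute and memory utilization once per second.
import time
import pynvml

pynvml.nvmlInit()
try:
    handles = [
        pynvml.nvmlDeviceGetHandleByIndex(i)
        for i in range(pynvml.nvmlDeviceGetCount())
    ]
    for _ in range(5):  # five one-second samples
        for i, handle in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            print(
                f"gpu{i}: compute {util.gpu}% | "
                f"memory {mem.used / mem.total:.0%} used"
            )
        time.sleep(1.0)
finally:
    pynvml.nvmlShutdown()
```

In practice, samples like these would feed a scheduler or dashboard so that underutilized accelerators can be repacked with additional workloads.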
Ideal Candidate Profile
- Collaborate openly on innovative TensorFlow-based solutions.
- Approach PyTorch challenges with a growth mindset.
- Communicate complex technical ideas with clarity and efficiency.
- Stay curious and explore diverse approaches and methods.

