Machine Learning Engineer
As an ML Engineer at Pluralis, you'll implement and optimize low-bandwidth model-parallel training systems that enable truly distributed language model development. Your work will directly contribute to creating a more open AI ecosystem where anyone can participate in frontier model development, not just large corporations with massive compute resources.
Key Responsibilities
- Distributed Training Implementation: Build and optimize systems for training large models across heterogeneous hardware connected by low-bandwidth networks.
- Performance Optimization: Implement techniques to reduce communication overhead while maintaining model convergence in challenging network environments (an illustrative sketch follows this list).
- Training Infrastructure: Design and develop robust training pipelines that can recover from node failures and network disruptions (see the second sketch below).
- Model Serving: Create efficient systems for deploying sharded models in a protocol-locked environment.
- Metrics & Monitoring: Develop tools to track training progress, evaluate model quality, and identify bottlenecks in distributed environments.
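The communication-reduction work mentioned above is typically approached with techniques such as gradient sparsification with error feedback, a standard method for low-bandwidth distributed training. The sketch below is purely illustrative of that general idea; the function and class names are our own assumptions, not Pluralis's actual implementation. Only the top-k gradient entries are transmitted each step, and the dropped remainder is folded back in on later steps to help preserve convergence.

```python
import torch


def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """Keep only the largest-magnitude fraction of gradient entries.

    Only (values, indices) need to cross the network; dropped entries
    are treated as zero by the receiver.
    """
    flat = grad.flatten()
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices


def topk_decompress(values: torch.Tensor, indices: torch.Tensor, shape) -> torch.Tensor:
    """Rebuild a dense gradient from the sparse (values, indices) pair."""
    flat = torch.zeros(int(torch.Size(shape).numel()), device=values.device, dtype=values.dtype)
    flat[indices] = values
    return flat.view(shape)


class ErrorFeedback:
    """Accumulate the gradient mass dropped by compression so it is
    eventually transmitted on later steps (hypothetical helper)."""

    def __init__(self):
        self._residual = {}

    def compress(self, name: str, grad: torch.Tensor, ratio: float = 0.01):
        buf = self._residual.get(name, torch.zeros_like(grad))
        corrected = grad + buf
        values, indices = topk_compress(corrected, ratio)
        sent = topk_decompress(values, indices, corrected.shape)
        self._residual[name] = corrected - sent  # carry dropped entries forward
        return values, indices
```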
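The fault-tolerance bullet is usually handled with periodic checkpointing plus a resume path. Below is a minimal, generic sketch under our own assumptions (the helper names and checkpoint layout are not Pluralis's actual pipeline): checkpoints are written to a temporary file and atomically renamed, so an interrupted node can always restart from the last complete snapshot.

```python
import os

import torch


def save_checkpoint(path: str, model, optimizer, step: int) -> None:
    """Write a checkpoint via temp file + atomic rename, so a crash or
    preemption mid-write never leaves a corrupt checkpoint behind."""
    tmp_path = path + ".tmp"
    torch.save(
        {"model": model.state_dict(),
         "optimizer": optimizer.state_dict(),
         "step": step},
        tmp_path,
    )
    os.replace(tmp_path, path)  # atomic on POSIX filesystems


def resume_if_possible(path: str, model, optimizer) -> int:
    """Load the latest checkpoint if one exists; return the step to resume from."""
    if not os.path.exists(path):
        return 0
    state = torch.load(path, map_location="cpu")
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    return state["step"]
```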
Requirements
- Technical Excellence: Master's degree in Computer Science or a related field, or equivalent experience. Several years of hands-on ML engineering experience.
- ML Systems Knowledge: Strong understanding of model parallelism techniques, distributed training architectures, and optimization methods.
- Programming Proficiency: Expert-level skills in PyTorch or similar frameworks, with experience scaling models across multiple devices.
- Systems Understanding: Familiarity with networking concepts, distributed computing principles, and performance optimization.
- Bonus: Experience with large language models, high-performance computing, or network-constrained environments.
What We Offer
- Equity-Heavy Package: We offer meaningful ownership or token allocations for key technical contributors.
- Competitive Base: A competitive base salary; Pluralis is hiring the best.
- Visa Sponsorship: Optional full visa sponsorship and relocation support to either the US or Australia.
- Remote-First Culture: Flexible work environment with team members distributed globally. We have two hubs, New York and Melbourne, with optional hybrid work if desired.
- Cutting-Edge Domain: Work at the intersection of AI and decentralised systems, tackling some of the most challenging engineering problems in what is shaping up to be one of the largest convergences of two previously separate fields.
Ready to apply?
Join Pluralis Research and take your career to the next level!
Application takes less than 5 minutes