This is a great fit for someone who has worked with LiDAR, cameras, radar, IMUs, or multimodal pipelines, and who enjoys taking ML systems from prototype to field-tested reality.
🧑🏻‍💻 Key Responsibilities
Perception & Sensor Fusion
- Develop ML pipelines for multimodal sensor data (LiDAR, cameras, radar, IMU, etc.).
- Implement or support sensor fusion approaches (classical or ML-based).
- Build models and processing steps for perception tasks such as detection, tracking, mapping, or scene understanding.
- Work closely with robotics engineers, software teams, and simulation teams to ensure seamless integration of ML perception modules.
- Contribute to design discussions involving sensing hardware, data capture strategies, and operational requirements.
- Train, evaluate, and optimize ML models for robotics perception under real-world constraints.
- Deploy ML components to diverse environments (edge devices, robotics stacks, cloud backends).
- Collaborate on performance tuning, latency improvements, and reliability enhancements.
Requirements
- Experience working with robotics or autonomous systems.
- Hands-on work with LiDAR, cameras, radar, or IMU pipelines.
- Strong Python and ML fundamentals, with experience in at least one major framework (PyTorch, TensorFlow).
- Experience designing or maintaining sensor-based ML systems, including data preparation and evaluation.
- Understanding of model deployment in real systems (edge devices, robotics stacks, embedded platforms, or cloud).
- Experience with sensor fusion frameworks, classical or ML-based.
- Familiarity with robotics middleware (ROS/ROS2), mapping, SLAM, or navigation stacks.
- Exposure to simulation tools (Isaac Sim, Gazebo, Unity, Webots).
- Experience improving performance of models under real-time constraints.
- Background working with safety, reliability, or high-availability systems.
Ready to apply? Join Marvik and take your career to the next level!

