Andromeda
Senior Machine Learning Engineer (Applied Research)
Andromeda · Australia · 3 days ago
Full-time · Engineering
Andromeda builds personalised robot companions. We are pioneering the future of human-robot interaction, where no playbook exists. We are backed by investors including San Francisco-based Forerunner Ventures, Rethink Impact, and Main Sequence Ventures. We're working towards robots that feel genuinely alive, with Disney-level emotional connection. We have real customers and real end-users, and we have the runway to investigate and demonstrate technical breakthroughs.

As a Senior Machine Learning (ML) Engineer in Applied Research, you'll work at the intersection of machine learning, human-robot interaction, animation, and health. You'll be a founding member of the ML team with a primary focus on applied research - we have dedicated teams for robotics engineering, software development, and developer velocity to support the production side. You'll have significant autonomy to shape your focus areas. You'll help build our ML applied research culture, practices, and infrastructure.

We're looking for engineers who thrive in ambiguous, high-impact environments where your decisions shape the company's technical direction. You'll own a large part of the applied research roadmap. Andromeda's ML research agenda spans real-time conversational AI, biometric user recognition, emotional and social awareness, and expressive gesture generation. You'll drive a specific research stream while collaborating with future team members who'll tackle the others.

We follow a rapid iteration approach: leverage open source foundations, extend with novel research, and build proprietary breakthroughs.

Responsibilities

  • Define and execute 6-12 month roadmaps for your applied research streams that contribute to product improvements
  • Build experimentation pipelines to rapidly prototype and test new ideas to characterise their feasibility
  • Collect and curate large datasets ethically and responsibly, and set up data pipelines and baselines
  • Train, debug and test machine learning models
  • Design evaluation frameworks appropriate for experiential outcomes - combining quantitative metrics where possible with qualitative user studies, expert assessments, and iterative product integration
  • Collaborate with product and engineering teams to understand requirements and integration pathways for your research
  • Create test and simulation tooling

About You

  • You're excited about the potential of foundation models and understand how to work with large datasets to create meaningful improvements in human-robot interaction
  • You're a builder who likes to get your hands dirty
  • You excel at applied research, enjoy translating product objectives into real-world outcomes, and collaborate effectively across the ML lifecycle
  • You are practised in designing and executing evaluations against baselines
  • You are inquisitive. You can go deep into datasets, model internals, training methodologies, and debugging. You care about how users actually use your work
  • Your idea of fun is to read and implement papers, experiment with open source implementations, and ask "what if?"
  • You want to work in and contribute to our culture of knowledge sharing. We encourage conference submissions where intellectual property considerations allow

Requirements

  • 4+ years of machine learning experience in research or production environments
  • Experience solving complex, practical machine learning problems
  • Comprehensive knowledge of machine learning frameworks and libraries
  • Bachelor's, Master's, or Ph.D. in Computer Science, Robotics, or equivalent experience
  • Melbourne-based role requiring five days on-site

Preferred Experience

Because this is an emerging field, we value any combination of the following skills:

  • Experience in applied research in an early stage product company or R&D division of a large company
  • Experience with large datasets in conversational AI, computer vision, or robotics domains
  • Multimodal systems combining vision, speech, and text for conversational applications
  • Gesture generation, motion synthesis, or character animation using machine learning
  • Computer vision for human and scene understanding (emotion recognition, spatial reasoning, environmental awareness)
  • Experience training and fine-tuning models (LLMs, vision-language models, or multimodal models preferred) for specialised applications
  • Embodied AI at the intersection of ML and control systems
