Senior Mobile Engineer – iOS/Android, Real-Time AI (apps for Ray-Ban Meta smart glasses) – EU
About the role
We’re hiring a senior engineer to build production mobile apps and services that integrate with Meta’s Glasses SDK for Ray-Ban Meta smart glasses. You’ll design and ship hands-free, voice-first, vision-enabled experiences that span the glasses, a companion iOS/Android app, and cloud AI. This is a highly cross-functional role at the intersection of mobile, real-time audio/video, Bluetooth/Wi-Fi transport, and multimodal AI.
What you’ll do
- Own end-to-end development of a companion mobile app that interfaces with Ray-Ban Meta smart glasses via Meta’s Glasses SDK.
- Implement reliable capture and streaming pipelines for camera preview frames, stills, and multi-channel audio, with strict attention to latency, battery, and privacy indicators (camera LED, permissions).
- Build voice-first UX: wake-word handoff, push-to-talk flows, VAD/ASR/TTS, earcons, and low-latency audio playback on open-ear speakers.
- Integrate on-phone computer vision and speech models (Core ML, MediaPipe, ONNX Runtime, NNAPI) and orchestrate cloud inference for multimodal LLMs (e.g., Llama 3-family vision/voice) via streaming APIs.
- Handle transport and connectivity: Bluetooth LE control channels, Wi-Fi/Wi-Fi Direct media streaming, reconnect logic, and state machines for device pairing and session lifecycles (see the reconnect sketch after this list).
- Design resilient, observable pipelines with backpressure, retries, offline fallbacks, and graceful degradation when thermals, bandwidth, or permissions change.
- Collaborate with product/design on voice-first interaction patterns; run user tests; instrument metrics for latency, accuracy, and task completion.
- Establish mobile CI/CD, automated testing (unit/integration/BT device-in-the-loop), crash/error analytics, and release processes.
- Champion privacy-by-design and compliance with platform and bystander-safety policies.
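To make the transport/connectivity bullet concrete, here is a minimal Kotlin sketch of a pairing/reconnect state machine with capped exponential backoff. GlassesSession, SessionManager, and the tryConnect hook are hypothetical names for illustration, not the actual Glasses SDK API.

```kotlin
import kotlinx.coroutines.*
import kotlinx.coroutines.flow.*

// Hypothetical session states; the real Glasses SDK surface will differ.
sealed interface GlassesSession {
    data object Disconnected : GlassesSession
    data class Connected(val deviceId: String) : GlassesSession
    data class Reconnecting(val deviceId: String, val attempt: Int) : GlassesSession
}

class SessionManager(
    private val scope: CoroutineScope,
    // Stand-in for the actual BLE connect call; returns true on success.
    private val tryConnect: suspend (String) -> Boolean,
) {
    private val _state = MutableStateFlow<GlassesSession>(GlassesSession.Disconnected)
    val state: StateFlow<GlassesSession> = _state.asStateFlow()

    // Reconnect with capped exponential backoff when the link drops.
    fun onLinkLost(deviceId: String) {
        scope.launch {
            var attempt = 1
            while (true) {
                _state.value = GlassesSession.Reconnecting(deviceId, attempt)
                if (tryConnect(deviceId)) {
                    _state.value = GlassesSession.Connected(deviceId)
                    return@launch
                }
                // 1s, 2s, 4s ... capped at 30s between attempts.
                delay(minOf(30_000L, 1_000L shl minOf(attempt - 1, 5)))
                attempt++
            }
        }
    }
}
```

Modeling the session as a StateFlow of a sealed type keeps every consumer (UI, media pipeline, telemetry) in sync with a single source of truth for the device lifecycle.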
Minimum qualifications
- 5+ years professional mobile engineering experience, shipping native apps at scale on iOS or Android (preferably both).
- Deep proficiency in:
- iOS: Swift, SwiftUI/UIKit, AVFoundation, CoreBluetooth, CoreMotion, background modes, concurrency (GCD/async-await), Audio Units.
- Android: Kotlin, Jetpack Compose/Views, Bluetooth/BLE, Camera/Media, Foreground services, Coroutines/Flow.
- Hands-on with real-time media pipelines and streaming: audio capture/playback, echo cancellation, noise suppression, lip-sync/latency budgeting, WebSockets/gRPC/WebRTC, codecs (Opus/AAC/PCM).
- Solid networking and systems skills: state machines, threading, buffering, backpressure, power/thermal profiling, and debugging on constrained devices.
- Experience integrating cloud AI services (LLM/ASR/TTS) and handling streaming inference results in the UI (see the streaming sketch after this list).
- Strong product sense for voice-first UX and accessibility in eyes-up, hands-free contexts.
- Excellent communication; comfortable working with early/preview SDKs and ambiguous requirements.
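As a flavor of “handling streaming inference results in the UI” from the list above, a small Kotlin/Flow sketch; Partial and transcripts are illustrative names, not tied to any specific ASR or LLM SDK. conflate() is what keeps the UI on the newest partial when the recognizer emits faster than the UI can render.

```kotlin
import kotlinx.coroutines.flow.*

// Illustrative partial-transcript type; not from any particular ASR SDK.
data class Partial(val text: String, val isFinal: Boolean)

// conflate() drops stale intermediates under backpressure, and
// distinctUntilChanged() avoids redundant re-renders of identical text.
fun transcripts(raw: Flow<Partial>): Flow<String> =
    raw.conflate()
        .map { it.text }
        .distinctUntilChanged()

suspend fun main() {
    val raw = flowOf(
        Partial("turn", false),
        Partial("turn on", false),
        Partial("turn on captions", true),
    )
    transcripts(raw).collect(::println)
}
```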
Preferred qualifications
- Prior work with Meta’s Glasses SDK (Ray-Ban Meta) or similar wearables (Apple Watch/visionOS audio, Snap Spectacles, Bose Frames) – not required, provided you’re willing to ramp up on the Meta Glasses SDK quickly.
- On-device ML on the phone: Core ML/Metal/Accelerate (iOS), MediaPipe/TFLite/NNAPI/GPU delegates (Android), ONNX Runtime Mobile.
- Multimodal AI integration: experience with Llama-family models, Whisper/Seamless/other ASR, TTS providers, prompt and latency optimization, partial results/streaming UX.
- BLE expertise: GATT design, connection strategies, MTU/throughput tuning (see the chunking sketch after this list), coexistence with Wi-Fi transport, device provisioning and firmware update flows.
- WebRTC for low-latency A/V; adaptive bitrate, jitter buffers, AEC/VAD tuning.
- Backend exposure sufficient to move fast with AI:
- Python or Node.js for inference gateways (FastAPI/Express), WebSocket servers, request fan-out, token streaming.
- Deploying model servers (Triton, vLLM) and vector/RAG stacks (FAISS/Pinecone), observability (OpenTelemetry), and autoscaling on AWS/GCP/Azure.
- Security and privacy: keychain/keystore, secure BLE pairing, PII handling, consent UX, regional data controls.
- QA for hardware-in-the-loop: writing automated tests that exercise glasses events (connect, capture, LED state, battery, IMU), and performance tests for end-to-end latency.
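To illustrate the MTU/throughput tuning point above, a minimal Kotlin sketch of MTU-aware chunking for GATT writes. chunkForGatt is a hypothetical helper; the 3-byte offset is the standard ATT write header, so a negotiated MTU of 517 (Android’s maximum) leaves 514 bytes of payload per write.

```kotlin
// Hypothetical framing helper for a BLE control channel. A single GATT
// write carries at most (MTU - 3) bytes of payload, because the ATT
// write header costs 3 bytes; larger messages must be chunked on the
// sender and reassembled on the receiver.
fun chunkForGatt(payload: ByteArray, mtu: Int): List<ByteArray> {
    val maxWrite = mtu - 3
    require(maxWrite > 0) { "negotiated MTU too small: $mtu" }
    return (payload.indices step maxWrite).map { start ->
        payload.copyOfRange(start, minOf(start + maxWrite, payload.size))
    }
}

fun main() {
    val chunks = chunkForGatt(ByteArray(1200), mtu = 517)
    println(chunks.map { it.size })  // [514, 514, 172]
}
```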
Start date: Within 2 weeks to 1 month
Remote vs Onsite: Remote
US hours overlap needed?: Yes – core hours of 2–6 pm CET to ensure sufficient overlap with US time zones
Ready to apply?
Join Cavendish Professionals and take your career to the next level!