About
Our mission is to develop forward-looking solutions such as model protection, privacy-preserving ML, security for agentic AI, and anomaly detection, which will later be integrated into our Edge products. This requires high-level innovation skills combined with a hands-on mindset.
If you are passionate about building secure AI systems, exploring new ideas, and turning concepts into prototypes, this role is for you:
Develop security tools and frameworks for Bring Your Own Model (BYOM) workflows and perform threat modeling for ML pipelines. Ensure proactive detection of vulnerabilities and compliance with emerging ML security standards.
Responsibilities
- Build security scanning tools for ML artifacts and deployment workflows (a minimal integrity-check sketch follows this list).
- Design secure APIs for model integration on embedded platforms.
- Perform threat modeling for ML systems (poisoning, evasion, prompt injection).
- Implement monitoring solutions for model integrity and anomaly detection.
- Ensure compliance with NIST AI Risk Management Framework and similar standards.
- Collaborate with internal teams to integrate security checks into development pipelines.
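To make the scanning responsibility above concrete, here is a minimal, illustrative sketch of the kind of check such tooling might perform: verifying ML artifacts against a trusted hash manifest before a model is accepted into a BYOM workflow. The manifest format, file paths, and the `verify_artifacts` helper are assumptions for illustration only, not part of NXP's actual tools.

```python
"""Minimal sketch of a model-artifact integrity check (illustrative only)."""
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifacts(manifest_path: Path) -> list[str]:
    """Compare each artifact's hash against a trusted manifest.

    The manifest is assumed (for this sketch) to be JSON mapping relative
    file names to expected SHA-256 digests, e.g. {"model.onnx": "ab12..."}.
    Returns a list of human-readable findings; an empty list means clean.
    """
    manifest = json.loads(manifest_path.read_text())
    findings = []
    for name, expected in manifest.items():
        artifact = manifest_path.parent / name
        if not artifact.exists():
            findings.append(f"MISSING: {name}")
            continue
        actual = sha256_of(artifact)
        if actual != expected:
            findings.append(
                f"TAMPERED: {name} (expected {expected[:12]}..., got {actual[:12]}...)"
            )
    return findings


if __name__ == "__main__":
    # Hypothetical layout: manifest.json sits next to the model files it covers.
    for finding in verify_artifacts(Path("artifacts/manifest.json")):
        print(finding)
```

In practice such a check would typically be wired into the deployment pipeline (for example as a CI gate), with signed manifests rather than plain JSON, but the hash comparison above captures the core idea.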
Qualifications
- A background in Computer Science, Cybersecurity, or Cryptography and a strong interest in applied ML, OR
- A background in Machine Learning and an interest in cybersecurity.
- Strong Python development skills for automation and tooling.
- Experience with threat modeling methodologies adapted for ML systems.
- Knowledge of adversarial ML attacks and defenses.
- Familiarity with secure API design and integration.
- Understanding of compliance frameworks (NIST AI RMF, ISO/IEC AI security standards).
Ready to apply?
Join NXP Semiconductors and take your career to the next level!
Application takes less than 5 minutes

