Track This Job
Add this job to your tracking list to:
- Monitor application status and updates
- Change status (Applied, Interview, Offer, etc.)
- Add personal notes and comments
- Set reminders for follow-ups
- Track your entire application journey
Save This Job
Add this job to your saved collection to:
- Access easily from your saved jobs dashboard
- Review job details later without searching again
- Compare with other saved opportunities
- Keep a collection of interesting positions
- Receive notifications about saved jobs before they expire
AI-Powered Job Summary
Get a concise overview of key job requirements, responsibilities, and qualifications in seconds.
Pro Tip: Use this feature to quickly decide if a job matches your skills before reading the full description.
- You develop, implement, and maintain security policies and controls for Generative AI solutions, ensuring secure deployment of internal and client-facing AI agents and platforms.
- You assess the risks related to AI-generated content, model hallucination, prompt injection, data leakage, and adversarial inputs, and define appropriate mitigation strategies.
- You oversee and monitor the secure usage of foundation models, APIs (e.g., OpenAI, Azure OpenAI), and locally deployed LLMs in alignment with corporate cybersecurity and data governance policies.
- You collaborate with DevOps and AI engineers to implement security gates in AI model training, deployment pipelines, and runtime environments.
- You ensure secure integration of GenAI platforms (e.g., AI copilots, chatbots, agentic systems) into enterprise systems and protect access to sensitive or regulated data.
- You perform regular threat modelling and security assessments of AI-based architectures and ensure that AI security requirements are addressed early in the design process.
- You contribute to establishing an internal AI usage governance framework including role-based access control, data classification enforcement, and ethical use policies.
- You monitor evolving regulatory landscapes (e.g., EU AI Act, GDPR in AI use cases) and advise on necessary compliance actions.
- You act as the subject matter expert for GenAI-related security incidents, supporting detection, response, and forensics.
- You raise awareness among employees on secure usage of GenAI technologies through internal trainings and guidelines.
- You hold a university degree in IT, Cybersecurity, or a related field, or have equivalent professional qualifications.
- You have at least 2 years of experience working in cybersecurity, preferably with exposure to AI and ML systems or advanced data analytics environments.
- You possess strong knowledge of AI/ML security concepts, including model threats, data poisoning, and LLM misuse scenarios.
- You are familiar with AI development and deployment workflows (e.g., LangChain, RAG architectures, MLOps pipelines).
- You have experience with security tools and frameworks for cloud-native and GenAI environments (e.g., AWS Bedrock Security, Azure AI Security, Guardrails, prompt filters).
- You are proactive in staying updated on emerging AI security threats, open research, and policy developments.
- You are comfortable working closely with developers, architects, data scientists, and legal/compliance teams.
- You have excellent written and verbal communication skills in English; knowledge of French or German is considered an asset.
- Certifications in cybersecurity (e.g., CISSP, CISM), cloud security (e.g., CCSP, AWS Security Specialty), or AI ethics/security (e.g., MIT AI Ethics MicroMasters, NIST AI RMF) are a strong asset.
Key Skills
Ranked by relevanceReady to apply?
Join Enovos Luxembourg and take your career to the next level!
Application takes less than 5 minutes

