Role Summary
You will lead the security architecture, governance, and assurance of our AI and GenAI platforms across the organization. This includes classical ML systems and modern LLM/agentic architectures. You will define how models are designed, deployed, monitored, and defended; ensure they are robust, explainable, privacy-preserving, and compliant; and act as the go-to expert for AI security across products, data, cloud, and security teams.
Key Responsibilities
• AI Security Architecture
o Design and review secure architectures for ML/LLM workloads (training, fine-tuning, inference, RAG, agents, plugins, tool calling, APIs).
o Define reference architectures for on-prem, hybrid, and cloud AI platforms (Azure OpenAI, AWS Bedrock, GCP Vertex, self-hosted models, etc.).
• Threat Modeling & Risk Management
o Perform AI-specific threat modeling (e.g., data poisoning, model theft, prompt injection, jailbreaks, supply-chain compromise, inference attacks) using MAESTRO or a similar framework.
o Align controls with leading frameworks: NIST AI RMF, ISO/IEC 27001, ISO/IEC 27090, ISO/IEC 42001, OWASP GenAI/LLM Top 10, CSA guidance, and MITRE ATLAS.
• Security Control Design & Implementation
o Define and oversee implementation of controls for:
▪ Training & data pipelines (data quality, provenance, labeling, PII protection).
▪ Model & artifact integrity (signing, SBOMs, secure registries).
▪ Access control, isolation, rate limiting, and abuse detection.
▪ Secure prompt engineering and guardrail policies.
▪ Prompt monitoring to detect ongoing attacks.
▪ Data security: classification, masking, and DLP.
▪ MCP server hardening.
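As an illustration of the guardrail and prompt-monitoring controls above, a minimal heuristic filter might look like the following sketch (the patterns and function name are illustrative assumptions; a production deployment would use a dedicated guardrail service rather than regex heuristics):

```python
import re

# Hypothetical heuristics for known prompt-injection phrasings
# (illustrative only -- not an exhaustive or production rule set).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
    r"disregard (the )?guardrails",
]

def flag_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known injection heuristic."""
    lowered = prompt.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

Flagged prompts would typically be logged for the attack-detection monitoring described above rather than silently dropped.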
• Reliability & Trustworthiness
o Partner with engineering and data science to embed robustness, observability, fallback strategies, and evaluation pipelines (safety, bias, toxicity, and hallucination monitoring).
o Contribute to SLOs/SLAs for AI systems, including security and reliability KPIs.
• Secure SDLC for AI
o Embed AI security into CI/CD: scanning, dependency checks, policy-as-code, and red-teaming of AI components pre- and post-release.
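One way to picture the policy-as-code idea from the CI/CD bullet is a gate that fails the pipeline when dependencies are unpinned (a minimal sketch under assumed conventions; real pipelines would typically use OPA/Conftest or dedicated SCA tooling):

```python
# Illustrative policy-as-code gate: collect requirement lines that lack an
# exact '==' version pin, so the CI job can fail before deployment.
def unpinned_requirements(requirements_text: str) -> list[str]:
    """Return requirement lines without an exact '==' version pin."""
    return [
        line.strip()
        for line in requirements_text.splitlines()
        if line.strip()                       # skip blank lines
        and not line.strip().startswith("#")  # skip comments
        and "==" not in line                  # keep only unpinned entries
    ]
```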
• Incident Response & Red Teaming
o Detect and assess LLM bias.
o Perform red teaming to probe for abuse paths and to deliberately induce hallucinations.
o Develop and maintain AI-specific playbooks (prompt abuse, model exfiltration, data leakage, compromised agents).
o Lead or support AI red/blue/purple teaming exercises using frameworks like MITRE ATLAS.
• Governance, Compliance & Policy
o Advise on alignment with emerging AI regulations and standards (e.g., EU AI Act, regional laws, internal AI use policies).
o Define internal policies on responsible AI, data usage, model lifecycle, and third-party AI risk management.
• Stakeholder Leadership
o Run workshops, training, and awareness sessions for engineering, security, and business teams.
Required Qualifications & Experience
• 8–12+ years in cybersecurity, with 3–5+ years focused on AI/ML or data platforms (the periods may overlap).
• Hands-on experience with:
o Cloud platforms (Azure, AWS, GCP) and their AI services.
o At least one agentic/GenAI stack (e.g., Transformers, LangChain/LlamaIndex, vector DBs, model gateways, MLOps platforms).
• Proven track record designing or reviewing secure architectures for:
o ML pipelines, LLM/RAG systems, or agentic/automation platforms.
• Strong understanding of:
o Cryptography, identity & access management, network & application security.
o Data protection & privacy (PII, PHI, DPIA concepts).
• Experience working with or mapping to frameworks/standards such as:
o NIST AI RMF, ISO/IEC 27001, ISO/IEC 42001, SOC 2, OWASP Top 10 & OWASP GenAI/LLM Top 10, MITRE ATT&CK/ATLAS, CSA guidance.
• Excellent communication skills: able to translate complex AI risks into clear business and technical requirements.
Core Technical & Domain Skills
• AI/ML & GenAI fundamentals:
o Model types (LLMs, encoders, diffusion, classical ML), training/inference flows.
o Data pipelines, feature stores, embeddings, vector databases.
• AI penetration testing:
o Prompt injection, model tampering, data poisoning, output manipulation, exfiltration, shadow AI, insecure plugins/integrations.
• Platform & Infra:
o Kubernetes, containers, API gateways, secrets management, zero trust.
• Observability:
o Logging, tracing, safety & quality metrics for AI systems.
o Health and latency monitoring.
• Strong scripting/automation (Python preferred) for security tooling and assessments.
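The model & artifact integrity controls mentioned earlier often reduce to small scripts of exactly this kind. A minimal sketch, assuming a pinned SHA-256 digest is distributed alongside each model artifact (the function name and digest-handling convention are assumptions, not a specific tool's API):

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Return True if the file's SHA-256 digest matches the pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256
```

A check like this would run before a model is loaded into a registry or serving endpoint, complementing cryptographic signing and SBOM tracking.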
Certifications & Training (Required / Highly Desirable)
Essential:
• One or more core security certifications:
o CISSP, CISM, CCSP, ISO 27001 Lead Implementer/Lead Auditor.
• One or more cloud security certifications:
o CCSK (Cloud Security Alliance), AWS Security Specialty, Azure Security Engineer, Google Professional Cloud Security Engineer.
• AI & AI security–specific training/certs (or commitment to obtain within 6–12 months):
o NIST AI RMF or ISO/IEC 42001–focused training.
o Certified AI Security Professional (CAISP or similar offerings covering LLM/GenAI threats, MITRE ATLAS, OWASP LLM Top 10).
o OffSec or similar LLM & AI red teaming / GenAI security courses.
o Vendor AI/ML certifications (Azure AI Engineer, AWS Machine Learning Specialty, GCP ML Engineer) with a demonstrated security emphasis.
Additional desirable:
• GIAC/GWAPT/GXPN/GCLD or similar offensive / cloud / AppSec certifications.
• Formal training in:
o Secure MLOps & ML supply-chain security.
o Privacy engineering & data protection (e.g., CIPT, CDPSE).
o OWASP GenAI/LLM Top 10, MITRE ATLAS, CSA MAESTRO, and other AI risk frameworks.
Ready to apply?
Join malomatia and take your career to the next level!