Qualifications
- Bachelor's degree or equivalent practical experience.
- 2 years of experience with security assessments, security design reviews, or threat modeling.
- 2 years of experience with security engineering, computer and network security, and security protocols.
- 2 years of coding experience in one or more general purpose languages.
- Experience in detection, investigations, and incident response.
- Experience writing production code in Python/Go.
- Experience in analyzing systems and identifying security and abuse problems, threat modeling, and remediation.
- Knowledge of security principles.
About the job
Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.
The Cloud AI Protection (CAIP) team's mission is to enable the rapid growth of Google Cloud Platform (GCP) and Workspace AI businesses by curbing the associated safety and security risks. CAIP supports GCP and Workspace AI products throughout their life cycle by advancing safety protection mechanisms from the earliest stages of product design. Specifically, CAIP's service portfolio includes both pre- and post-launch capabilities.
As an AI Security Engineer, you will ensure that our AI products are not only powerful but also safe, secure, and aligned with our AI principles. You will help ensure every AI product is as resilient as it can be by designing and building an industry-leading AI agent system to protect Google AI from misuse. Your deep technical skills, understanding of potential security and safety risks, and passion for diving into abuser Tactics, Techniques, and Procedures (TTPs) will help teams solve challenging classes of problems in AI safety and misuse at Google scale.
Responsibilities
- Design and build anti-abuse detection and action systems, including detection and enforcement AI agents, to protect Google Cloud and Workspace AI products at scale. Investigate leads and incidents to calibrate AI agents and improve their performance.
- Drive enterprise focused security improvements to Google products and services.
- Respond to AI abuse and misuse incidents: rapidly investigate, communicate, and take action.
- Review and develop secure operational practices, and provide security guidance for Engineers and Analysts.
- Communicate with Product and Customer teams on incidents and threat assessment outcomes to identify solutions to mitigate classes of attacks.
- Collaborate with other Google teams to ensure that the issues are understood and solutions are adopted.