The Role
Detecting attackers in real time requires a high-performance technology stack that enables machine learning and statistical techniques, which in turn means managing considerable volumes of data. We are looking for an innovative and resourceful Software Engineer to join our growing team. Our data science and analytical capabilities rely on fast, efficient data flow. Working with our existing Data Engineering and Data Science teams, you will help optimise these flows and build new data pipelines to support our growing product portfolio.
At a Glance
As a Senior Software Engineer in this role, you will create, test, and maintain large-scale distributed systems and tools.
- Design, build, and operate large-scale, distributed ingestion services that collect and process terabytes of data daily across multiple regions
- Develop and maintain a scalable offering for cloud and SaaS connectors, integrating with platforms such as Azure control plane, Entra ID, Microsoft 365, AWS, GCP, and OCI
- Architect and evolve event-driven systems using Python, Kubernetes, serverless technologies (e.g., Lambda), and infrastructure as code (Terraform)
- Ensure high availability, fault tolerance, and horizontal scalability of ingestion pipelines in multi-region cloud environments
- Optimize throughput, reliability, and cost efficiency of data ingestion workloads at scale
- Define and enforce best practices around observability, resiliency, and operational excellence for mission-critical services
- Troubleshoot complex, distributed-system issues in production and drive root-cause analysis through to durable solutions
- Collaborate with Product, Security Research, Data Science, and Platform teams to enable secure, real-time cloud visibility for customers
- Contribute to architectural decisions and mentor other engineers in building robust, scalable distributed systems
What Will Impress Us
- 6+ years in software development or equivalent experience
- Strong communication & collaboration skills
- Solid programming knowledge in Python
- Experience with infrastructure as code (Terraform), automated testing, and CI/CD
- Experience with log/metadata ingestion from cloud providers
- Experience building and deploying distributed services to any cloud provider (e.g. AWS, Azure, GCP)
- Production experience with distributed processing technologies (e.g., Kubernetes, Kafka, NATS)
- Knowledge of tools like Git/Jira
- A drive to get things done, learn new things, take initiative, and challenge existing assumptions and conventions
- Knowledge of software design principles and leading software development practices
- B.S., M.S., or Ph.D. in Computer Science (or equivalent experience)
Ready to apply?
Join Vectra AI and take your career to the next level!

