Role Overview
We are seeking a Senior DevOps Engineer with strong hands-on expertise in Kubernetes-based cloud deployments, Infrastructure as Code, and CI/CD automation. You will be responsible for building and managing scalable, secure, and automated infrastructure for modern data and platform engineering teams.
Key Responsibilities
- Design, deploy, and manage Kubernetes clusters (EKS / GKE / AKS) using Helm and IaC frameworks like Terraform.
- Implement CI/CD pipelines using GitHub Actions, GitLab CI/CD, Jenkins, or ArgoCD (GitOps model preferred).
- Manage Docker containers, container registries, and artifact repositories (Nexus / Artifactory).
- Automate infrastructure workflows using Bash and/or Python scripting.
- Work with distributed systems technologies like Airflow, Spark, Kafka.
- Monitor infrastructure with tools like Prometheus, Grafana, ELK, Datadog, or CloudWatch.
- Set up alerting, logging, and distributed tracing in data-driven environments.
- Manage secrets securely via Vault, AWS Secrets Manager, or Kubernetes Secrets.
- Advantage: Experience with data lakehouse architectures (e.g., S3 + Hive, Delta Lake, Iceberg).
Required Skills
- Strong hands-on experience with Kubernetes (EKS/GKE/AKS) and Helm
- Advanced knowledge of Terraform, plus Bash and/or Python scripting
- Practical CI/CD pipeline expertise (GitOps model preferred)
- Experience with Docker, container registries, and artifact management
- Exposure to Airflow, Spark, Kafka, and distributed compute systems
- Monitoring & observability with Prometheus, Grafana, ELK, or Datadog
- Secure infrastructure practices (Vault, AWS Secrets Manager, Kubernetes Secrets)
Ready to apply?
Join Gazelle Global and take your career to the next level!
The application takes less than 5 minutes.