What challenges you can expect
- Working in a distributed DevOps team across multiple locations using agile practices
- Supporting the design and operation of core infrastructure on GCP for data science and Python-based ML/AI workloads
- Maintaining and improving Kubernetes environments with Helm and ArgoCD
- Developing and reviewing Terraform modules and cloud configurations
- Managing GitLab CI/CD pipelines across applications, infrastructure, and ML projects
- Implementing and optimizing autoscaling strategies for workloads and clusters
- Ensuring robust observability across monitoring, logging, and tracing using Grafana, Prometheus, Loki, and Tempo
- Troubleshooting issues across cloud, container, networking, and application layers
- Enhancing platform reliability, cost efficiency, and developer experience
- Educating developers and data scientists on best practices and relevant parts of the technology stack
What we expect from you
- Fluent English; German is a plus
- Minimum 4 years of professional experience in DevOps or related roles
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- Strong expertise with at least one major cloud provider (GCP preferred; AWS/Azure acceptable)
- Extensive experience with Docker, Kubernetes, Helm charts, and container-based production systems
- Understanding of basic security concepts and the ability to identify and mitigate common vulnerabilities
- Solid hands-on proficiency with Terraform and GitOps tools such as ArgoCD
- Strong experience building and maintaining CI/CD pipelines, ideally with GitLab CI
- Proficiency in Unix/Linux administration, shell scripting, and practical networking fundamentals
- Experience with monitoring and observability solutions (Grafana stack; ELK also relevant)
- Familiarity with autoscaling concepts (HPA/VPA, cluster autoscaler)
- Experience with relational and document databases, such as PostgreSQL and MongoDB, is a plus
- Experience with OpenTelemetry instrumentation is a plus
DTSE-RO will not tolerate discrimination or harassment based on any protected characteristic.
By applying for this job you accept the DT privacy statement:
To process your online application, we collect, process, and use your personal data. We will treat your data as strictly confidential in accordance with statutory provisions.
By submitting your application, you consent to your data being processed electronically, including by third parties. Data is only passed on to HR service providers that have been carefully selected by Deutsche Telekom AG.
For detailed information, read the local data protection notice for applying to a job position at Deutsche Telekom Group.
Ready to apply?
Join Deutsche Telekom Services Europe Romania (DTSE Romania) and take your career to the next level!
Application takes less than 5 minutes

