Data Engineer - PySpark - Finland
This role focuses on developing and maintaining robust data pipelines that support critical telecommunications systems. You will ensure reliable data processing and data integrity, enabling seamless connectivity and actionable insights for business operations. Your work will directly contribute to data-driven solutions in a fast-paced industry.
Key Responsibilities
- Build and optimise ETL processes using PySpark and Apache Airflow to support efficient data workflows.
- Troubleshoot and resolve issues in real-time and batch data pipelines to maintain system reliability.
- Collaborate with cross-functional teams to address data pipeline challenges and meet project requirements.
- Uphold data quality and governance by applying best practices in data management.
- Maintain and troubleshoot CI/CD pipelines using GitHub Actions and JFrog to ensure smooth deployments.
Essential Qualifications
- Experience in data engineering or a related field.
- Proficiency in PySpark, Apache Airflow, Unix, Docker, and Kubernetes.
- Knowledge of CI/CD tools, including GitHub Actions and JFrog.
- Equivalent experience or a degree in computer science, engineering, or a related discipline.
Desirable Qualifications
- Familiarity with cloud platforms (e.g., AWS, GCP, Azure).
- Experience with data governance frameworks in telecommunications.
- Posted: Jun 12, 2025
- Type: Contract
- Level: Not Applicable
- Location: Finland
- Company: Empiric