About the Role
We are looking for an experienced Senior Data Engineer to design, build, and optimize large‑scale data pipelines and robust data models across enterprise platforms. You will work with both structured and unstructured data, modernize data flows, strengthen governance and data quality, and support cloud‑driven initiatives for analytics, BI, and regulatory use cases.
Key Responsibilities
- Develop, maintain, and optimize scalable ETL/ELT pipelines for structured and unstructured data across banking data domains.
- Build and manage data integrations across internal systems and third‑party platforms.
- Contribute to the development of optimized data models.
- Ensure data quality, governance and security standards are consistently met.
- Monitor and improve pipeline performance, reliability and cost efficiency.
- Maintain proper documentation of pipelines.
- Collaborate with cross‑functional teams, including Data Analysts, BI Developers, and the centralized Data Management team, to understand data needs and deliver robust solutions.
- Automate manual data processes to improve efficiency.
- Manage deployments using Airflow and GitHub (see the orchestration sketch after this list).
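To illustrate the orchestration work named above, here is a minimal sketch of a daily ETL DAG, assuming Airflow 2.4+; the DAG id, task ids, and the extract/transform/load callables are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch of a daily ETL DAG (assumes Airflow 2.4+).
# All names below are hypothetical illustrations.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Placeholder: pull raw records from a source system."""


def transform():
    """Placeholder: cleanse and model the extracted data."""


def load():
    """Placeholder: load curated data into the warehouse."""


with DAG(
    dag_id="banking_daily_etl",      # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```

In practice such DAG definitions live in a GitHub repository and are promoted to the Airflow environment through the team's CI/CD process.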
Required Qualifications
- Strong experience with SQL/PL/SQL, Oracle Data Integrator (ODI), and advanced query optimization.
- Solid background in Data Warehouse design and development and enterprise data architectures.
- Strong understanding of data modeling, distributed systems, and batch/stream processing.
- Hands‑on experience with modern pipeline/orchestration tools (Airflow, ADF, Glue, dbt, etc.).
- Experience working in a cloud environment (Azure, AWS, or GCP).
- Experience in banking, financial services, or regulatory reporting (preferred).
- Experience with Databricks/Snowflake (PySpark, Delta Lake, data warehousing, modeling, pipelines); see the sketch after this list.
- Experience with event‑driven/streaming technologies (e.g., Kafka, Kinesis).
- Experience with CI/CD tooling and development automation.
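As a concrete example of the Databricks/Delta Lake skills listed above, here is a minimal PySpark sketch of a batch pipeline writing a curated Delta table, assuming a cluster where Delta Lake is configured; the source path, column names, and partition key are hypothetical.

```python
# Minimal sketch of a batch pipeline writing a Delta table.
# Assumes Delta Lake is available on the Spark cluster;
# paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("transactions_pipeline").getOrCreate()

# Read raw structured data (hypothetical source path).
raw = spark.read.parquet("/data/raw/transactions")

# Basic quality gate: drop rows missing key identifiers,
# then stamp each record with its ingestion time.
clean = (
    raw.dropna(subset=["transaction_id", "account_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Persist as a Delta table, partitioned by a business date
# column assumed to exist in the source data.
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("business_date")
      .save("/data/curated/transactions"))
```

Partitioning by a business date is a common choice here, since it keeps incremental reloads and regulatory lookbacks inexpensive.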
Ready to apply?
Join Banca Transilvania and take your career to the next level!
Applying takes less than 5 minutes.

