Our client is a young tech startup building a next-generation digital real estate data platform that helps users make data-driven decisions before buying, selling, or developing property. If you’re looking for a role where your work has a real impact and you can shape a product that changes how people interact with real estate data, we’d love to meet you.
Senior Data Engineer
Job description:
- Design, build, and maintain end-to-end data pipelines and the data platform.
- Lead technical decisions around data architecture, tools, and pipelines, guiding best practices across the team.
- Develop and maintain orchestration workflows to automate data movement, scheduling, and monitoring.
- Ensure data quality, reliability, and lineage across all datasets.
- Optimize ETL/ELT pipelines, making data accessible and ready to use.
- Continuously improve infrastructure, implementing automation, testing, and monitoring for scalable operations.
- Collaborate with data science, analytics, product management, and engineering teams to develop new products and features.
- Mentor and guide junior team members.
Requirements:
- Python proficiency for data engineering use cases, including API integrations, data transformations, and workflow automation (e.g., Pandas, PySpark).
- Experience with containerization and/or orchestration tools (Docker, Kubernetes) for deploying scalable data pipelines.
- Experience with data pipeline orchestration (e.g., Apache Airflow, Dagster).
- Hands-on experience with at least one major cloud provider (AWS, Azure, or GCP).
- Experience with modern data platforms (Snowflake or Databricks).
- Proven ability to design, build, and maintain production-grade ETL/ELT pipelines that are scalable, reliable, and performant.
- Understanding of data governance, data quality frameworks, and documentation best practices.
- Familiarity with CI/CD pipelines for data workflows.
- Solid grasp of the software development lifecycle (SDLC), including code reviews, testing, deployment, and monitoring.
Company offers:
- Flexible working hours – we trust you to manage your time.
- Learning & development budget – we invest in your growth.
- A fast-moving startup environment that encourages experimentation, learning, and collaboration.
- Remote or hybrid working model – if you’re based in Klaipėda, you can work from the office or choose a hybrid setup; if you’re located in another city, you can work remotely with occasional agreed-upon visits to the Klaipėda office.
Salary: €3,000–€5,000 net (€4,959–€8,264 gross).
Ready to apply?
Join Alliance for Recruitment and take your career to the next level!
Application takes less than 5 minutes.

