Emergn is a technology and management consultancy with a mission to set our clients free. No dependency. No short-term fix. Just teaching them the real cure to their problems, so they can stay ahead of their competitors now and forever.
A different kind of consultancy like this needs a different kind of team. One you might be just the right fit for.
This is a place for people who want to:
- Make a real difference. Here, you solve problems for real. No endless engagements or running up the clock. You get to see how change really works for some of the world's greatest businesses.
- Never stop growing. We turn our expertise back on ourselves. Here, you’re always learning, stretching your thinking at the cutting edge of how organizations grow, which means staying at the cutting edge of your own growth too.
- Belong everywhere. We’re a global community of experts who work without borders. Friendly, supportive, and genuinely collaborative. Because when you’re trying to change an industry, you need people who have each other’s backs.
As a Senior Data Engineer at Emergn, you will help shape Emergn's exciting future and play an important role in our growth.
We want you to bring:
- Full proficiency in Microsoft SQL Server and T-SQL: stored procedures, views, and user-defined functions (mandatory).
- Full proficiency in Power BI (mandatory).
- Experience in creating reports using SQL Server Reporting Services (SSRS). If you lack this skill but excel in the previous two, we can provide SSRS training.
In this role, you will:
- Design, build, and maintain scalable data pipelines and workflows using Microsoft Fabric, including Data Factory, Lakehouse, and Synapse Pipelines.
- Use Apache Spark in Microsoft Fabric Notebooks for large-scale data processing, cleansing, and transformation tasks.
- Develop efficient SQL-based solutions for data modeling, data warehousing, and analytics layers.
- Leverage Python and PySpark to automate data flows, integrate sources, and apply advanced data logic.
- Collaborate with analysts, engineers, and stakeholders to deliver clean, trustworthy datasets to reporting and ML pipelines.
- Assist in establishing data quality, data lineage, and governance processes across the data stack.
- Act as a subject matter expert on data workflows within the Microsoft ecosystem, helping to guide best practices across teams.
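The data-quality and governance work described above can be sketched in plain Python. This is a hypothetical illustration only, not Emergn's tooling: in practice, checks like these would typically run as PySpark logic inside a Microsoft Fabric notebook before a dataset is published to a reporting layer, but the core idea (required columns present, business keys unique) is the same. All names here (`check_quality`, the sample rows) are invented for the example.

```python
# Hypothetical sketch of a lightweight data-quality gate, of the kind a
# pipeline might run before publishing a dataset to a reporting layer.

def check_quality(rows, key="order_id", required=("order_id", "amount")):
    """Return a list of human-readable issues found in `rows`."""
    issues = []
    seen = set()
    for i, row in enumerate(rows):
        # Every required column must be present and non-null.
        for col in required:
            if row.get(col) is None:
                issues.append(f"row {i}: missing required column '{col}'")
        # The business key must be unique across the batch.
        k = row.get(key)
        if k is not None:
            if k in seen:
                issues.append(f"row {i}: duplicate key {key}={k!r}")
            seen.add(k)
    return issues

sample = [
    {"order_id": 1, "amount": 9.99},
    {"order_id": 1, "amount": 5.00},   # duplicate business key
    {"order_id": 2, "amount": None},   # missing required value
]
print(check_quality(sample))
```

A real pipeline would usually route failing rows to a quarantine table and record the results for lineage, rather than just printing them.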
This job might be for you if you have:
- Hands-on experience with Microsoft Fabric components including Spark Notebooks, Data Factory, and Synapse.
- Strong understanding of Apache Spark (especially via PySpark) for distributed data processing.
- Proficiency in SQL for data manipulation and optimization.
- Solid Python skills for scripting, automation, and transformation logic.
- Experience with cloud-native data solutions—preferably on Microsoft Azure.
- Understanding of data warehouse design, dimensional modeling, and Lakehouse patterns.
- Familiarity with CI/CD and version control tools (e.g., Git, Azure DevOps).
- Comfortable working in agile, iterative data development cycles.
- Excellent communication and stakeholder collaboration skills.
Nice to Have:
- Familiarity with OneLake architecture and Delta Lake implementation in Fabric.
- Knowledge of Power BI data modeling and how backend data impacts reports.
- Experience with streaming data ingestion (e.g., Azure Event Hubs, Kafka, Fabric Real-Time Analytics).
- Exposure to notebook-based development workflows in Jupyter or Databricks.
- Awareness of data privacy, security best practices, and compliance (e.g., GDPR, DLP tools).
- Previous experience with other Spark platforms like Databricks or HDInsight is a plus.
What we offer:
- Salary of 4750-5000 EUR gross/month.
- Work within a dynamic international team of experts.
- Excellent opportunities for personal and professional development.
- Flexible work model and the freedom to choose the tools that suit you best, Mac or Windows.
- Opportunity to work with modern technologies.
- Extensive catalogue of educational programs, with training and certification at the company's expense.
- 20 working days of vacation per year.
- Life and disability insurance.
- Health insurance.
- Free on-site parking.
- Birthday gift.
Work where you'll make an impact every day.
Join us today!
Ready to apply?
Join Emergn and take your career to the next level!
Application takes less than 5 minutes

