We are looking for a versatile Software Engineer to join our Data Innovation Team. In this role, you won't just be managing data; you will be building the "brain" of our organization. You will own the entire product lifecycle—architecting high-performance data pipelines, implementing AI-driven logic, and developing the full-stack applications that bring these insights to life. If you are a builder who thrives at the intersection of Big Data and system engineering, we want you to help us turn complex data signals into our most competitive assets.
Key Responsibilities
- Solution Architecture: Translate high-level business requirements into technical specifications, utilizing Lakehouse architecture to ensure scalability and performance.
- End-to-End Software Development: Build and maintain full-stack applications, ensuring seamless integration between backend services and frontend components.
- Frontend Development: Design and develop responsive, intuitive web interfaces and data visualizations that translate complex analytics into actionable user experiences.
- Data & AI Pipelines: Build and optimize declarative pipelines using Delta Live Tables (DLT) and orchestrate complex workflows to serve Machine Learning models.
- Operational Excellence: Manage the full application lifecycle, including governance via Unity Catalog, model tracking via MLflow, and CI/CD for both code and data.
- Cross-Functional Collaboration: Work closely with stakeholders to ensure successful project delivery, focusing on code modularity and maintainability within the Databricks ecosystem.
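The declarative pipeline work described above can be sketched roughly. This is plain Python standing in for the medallion (bronze/silver/gold) pattern that Delta Live Tables expresses declaratively on Databricks; the layer names are standard, but the fields and data are hypothetical, not from this posting:

```python
# Minimal sketch of a bronze -> silver -> gold medallion flow in plain Python,
# standing in for a declarative Delta Live Tables pipeline. Fields are hypothetical.

def bronze_events(raw_rows):
    """Ingest raw records as-is (bronze layer)."""
    return list(raw_rows)

def silver_events(bronze):
    """Clean and validate (silver layer): drop rows missing a user id."""
    return [r for r in bronze if r.get("user_id") is not None]

def gold_event_counts(silver):
    """Aggregate for consumption (gold layer): events per user."""
    counts = {}
    for r in silver:
        counts[r["user_id"]] = counts.get(r["user_id"], 0) + 1
    return counts

raw = [{"user_id": 1}, {"user_id": None}, {"user_id": 1}, {"user_id": 2}]
print(gold_event_counts(silver_events(bronze_events(raw))))  # {1: 2, 2: 1}
```

In an actual DLT pipeline each layer would be a table function decorated with `@dlt.table`, and the framework, not the caller, would manage dependencies, incremental processing, and data-quality expectations.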
Technical Requirements
The Essentials:
- Education: Degree in Computer Science, Software Engineering, or a related field.
- Software Engineering: Strong command of Python (PySpark), Java, or C++, with an equally solid grasp of data-oriented design and OOP.
- Full-Stack Mindset: Comfort moving from backend data logic to frontend API consumption.
The "Lakehouse" Stack (Highly Desired):
- Databricks Ecosystem: Hands-on experience with Delta Lake, Spark SQL, and Databricks Workflows.
- AI Operations: Experience using MLflow or Mosaic AI to deploy and monitor models in production.
- DevOps for Data: Familiarity with Databricks Asset Bundles (DABs), Git integration, and containerization (Docker).
Ready to apply?
Join Merquri and take your career to the next level!
Application takes less than 5 minutes

