We are seeking a Lakehouse Platform Engineer to serve as the lead architect and custodian of our enterprise data backbone. In this pivotal consultancy role, you will be responsible for the health, scalability, and evolution of our entire technology stack, ensuring that our Lakehouse architecture is not only reliable but optimized for high-performance AI and analytics. You will own the lifecycle of AWS Glue jobs, manage the intricacies of Apache Iceberg table registries, and operationalize industry-leading tools like OpenMetadata for governance and Soda Core for data quality. From designing robust Disaster Recovery (DR) scenarios to automating infrastructure via Terraform, your work will provide the foundation upon which our Data Factory squads build the future of intelligent enterprise systems.
As a consultant within our specialist firm, your technical prowess is matched by your ability to drive adoption. You will act as a bridge between platform engineering and business-critical operations, "selling" the value of self-service tooling and automated lineage to senior stakeholders. We are looking for a master problem-solver with 8+ years of experience—ideally within financial services or banking—who thrives in fast-paced, Agile environments. If you are passionate about decommissioning legacy technical debt while building state-of-the-art, automated data platforms that meet aggressive migration targets, you will find your home at DeepLight AI.
Your responsibilities as the AWS Lakehouse Platform Engineer:
- Platform Management & Optimization
  - Manage and maintain Lakehouse components:
    - Storage: S3 bucket configurations, lifecycle policies, storage optimization
    - Compute: AWS Glue job management, optimization, DPU allocation
    - Catalog: AWS Glue Data Catalog and Iceberg table registry
    - Semantic Layer: Deploy and integrate with our semantic layer
    - Governance: Deploy, configure, and upgrade OpenMetadata
    - Quality: Maintain Soda Core infrastructure and integration
  - Apply Iceberg best practices for table optimization and maintenance
  - Implement automated table maintenance processes
- Self-Service & Automation
  - Deploy self-service tooling for Data Factory squads
  - Implement automated lineage capture from Glue jobs
  - Configure audit logging (CloudTrail, S3 access logs)
- Disaster Recovery & Reliability
  - Design and implement Disaster Recovery (DR) scenarios, including failover, backup management, and runbooks
  - Execute annual DR tests for the entire data landscape
  - Ensure all critical platform components are monitored, with alerting and active follow-up
- Migration & Decommissioning
  - Execute the decommissioning roadmap
- Collaboration
  - Work closely with platform engineers and architects to ensure alignment on optimization and tooling
  - Partner with operational teams on monitoring and alerting
While technical mastery is the foundation of what we do, the ability to bridge the gap between complex data science and actionable business value is what defines your success with DeepLight AI.
We're looking for individuals who are not only world-class in their fields of specialism, but also compelling communicators and persuasive advocates for their own skills.
You will be the face of our firm, tasked with building trust, articulating the "why" behind your technical decisions, and effectively "selling" your vision to high-level stakeholders.
If you thrive on the challenge of presenting cutting-edge solutions as much as you do on building them, you will fit right in.
Requirements
You will have experience in:
- data platform engineering or related roles (ideally 8+ years of experience)
- managing Lakehouse platforms on AWS
- AWS services: S3, Glue, CloudTrail, Athena
- Apache Iceberg, OpenMetadata, and Soda Core
- disaster recovery planning and execution
- infrastructure automation using Terraform and Git
- Kafka (MSK) and OpenSearch
- identifying ways to automate your own work and repetitive tasks
- problem-solving and troubleshooting
- working cross-functionally and managing complex platform operations
- working in a fast-paced environment and delivering against aggressive migration targets
- collaborating and communicating effectively
- working with Jira and agile ways of working
Benefits
Benefits & Growth Opportunities:
- Competitive salary and performance bonuses
- Comprehensive health insurance
- Professional development and certification support
- Opportunity to work on cutting-edge AI projects
- Flexible working arrangements
- Career advancement opportunities in a rapidly growing AI company
At DeepLight AI, we recognise that diversity drives innovation. We are committed to fostering an inclusive environment where individuals with different thinking styles can thrive and contribute their unique strengths to our specialised AI and data solutions.
Our goal is to ensure our application and interview process is accessible, predictable, and fair for all candidates.
If you require any adjustments to the application process, or reasonable adjustments should you progress to the interview stage, please let us know. This information will be kept strictly confidential and will not impact hiring decisions.
Join DeepLight AI and take your career to the next level!