Salv builds technology that helps financial institutions detect and prevent financial crime. The systems you support are used every day to identify suspicious behaviour, stop bad actors, and help regulated companies meet strict compliance expectations.
As product usage has grown, so has the volume and importance of our data. Our teams rely on this data for analytics, product decisions, customer insights, detection quality, and operational workflows. Today, the product generates far more data than we can comfortably analyse or use.
Your mission is to build the data foundations that let the company use this information effectively. You will help internal analytics move quickly, support product teams with reliable insights, and unlock new data-driven value for customers.
This involves delivering improvements using the current stack while shaping a long-term data platform that scales with increasing usage and business demands.
Your role
As a Data Engineer, you will:
- Design, build, and maintain data pipelines (today we use S3, Glue, and Redshift).
- Improve the reliability, speed, and quality of data used by analytics, product, and operational teams.
- Work with existing tooling to deliver value quickly, while identifying longer-term improvements to the data platform.
- Develop data models, storage patterns, and processing flows that support both internal analysis and customer-facing features.
- Collaborate with analytics and data science colleagues to understand data needs and remove bottlenecks.
- Partner with Product and Engineering to ensure the data platform supports product insights, reporting, and new capabilities.
- Introduce technologies, patterns, and approaches when they provide clear value and support future scale.
- Improve data discoverability, documentation, schema consistency, and internal tooling around data access.
- Build systems that support both day-to-day operational needs and long-term strategic goals.
Over time, and if the impact is clear, this role may expand into shaping or leading a dedicated data engineering function.
Must-have
- 5+ years of experience as a Data Engineer or in a similar role.
- Strong experience with AWS data services (S3, Glue, and Redshift) or an equivalent stack.
- Experience designing and maintaining data pipelines for analytics or operational use.
- Solid SQL skills and experience with data modelling and performance optimisation.
- Ability to work with structured and semi-structured datasets.
- Practical approach to data quality, lineage, and maintainability.
- Clear communication skills and the ability to work with analytics, engineering, and product teams.
- Comfortable balancing quick improvements with long-term platform design.
- Comfortable working both independently and as part of a team.
Nice to have
- Experience with Athena.
- Familiarity with tools such as dbt, Airflow, Dagster, or similar platforms that may be relevant as the data stack evolves.
- Experience building internal data frameworks or self-service tooling.
- Background supporting customer-facing reporting or analytics features.
- Understanding of event-driven or streaming systems.
- Awareness of ML pipelines or feature engineering.
Our stack
- Storage: AWS S3 (primary data lake)
- Data processing: AWS Glue
- Warehouse: Amazon Redshift
- Orchestration: AWS-native workflows (with future evolution expected)
- Analytics: Internal dashboards, SQL-based workflows
- Tooling: Python, SQL, internal scripts
- Future potential: Platforms such as dbt, Airflow, or Dagster may be considered as the data platform evolves
What success looks like
- Improve the internal analytics workflow by making key datasets easier and faster to access.
- Deliver the first reliable set of pipelines and models that reduce manual data preparation and improve consistency.
- Establish clearer structure, naming, and quality standards for core datasets used across the organisation.
- Identify the major bottlenecks in the current data platform and propose a realistic roadmap for addressing them.
- Produce early foundations of a scalable data platform (e.g. first redesigns, improved data models, clearer ingestion patterns).
- Improve data availability for several high-value product or customer reporting areas.
- Provide early product usage insights that help product and leadership teams understand behaviours and prioritise improvements.
- Build strong working relationships with analytics, product, and engineering teams so data work flows smoothly.
You will thrive in this role if you:
- Enjoy building systems with both immediate and long-term impact.
- Balance fast delivery with thoughtful, maintainable design.
- Communicate clearly and work well with different disciplines.
- Take ownership and can drive improvements without close supervision.
- Prefer simple, practical solutions over unnecessary complexity.
Our mission is to beat financial crime: we help our clients prevent money laundering, terrorist financing, and fraud, and we are expanding our business across Europe.

