Project Overview:
Botsford Associates is supporting the Chief Data Office of a leading financial institution on a strategic initiative to develop a centralized information management platform. This platform will enhance the bank's ability to "Know Your Data, Govern Your Data, and Use Your Data" by establishing a common business language, clarifying ownership and accountability, and enabling greater transparency and trust in enterprise data.
Role Summary:
We are seeking a Senior Data Engineer to design and implement scalable, reusable data pipelines and foundational frameworks that will power this enterprise-wide transformation. This individual will work across business and technology stakeholders to ensure clean, trusted, and ready-to-use data is ingested and maintained within the new platform, while adhering to data governance standards and technical best practices.
Key Responsibilities:
- Develop scalable ETL pipelines using Python to support ingestion, transformation, standardization, and delivery of clean data to the centralized platform.
- Integrate data from multiple sources, including S3-based storage systems and external/internal REST APIs.
- Build reusable components for data validation, control logging, exception handling, and quality checks.
- Design service layers to enable seamless data consumption by downstream analytics and business systems.
- Collaborate with data stewards and business analysts to understand critical data elements and ensure proper lineage and metadata integration.
- Optimize data structures (e.g., Parquet) and pipeline performance across large-scale datasets.
- Contribute to the onboarding and configuration of enterprise data tooling (e.g., data catalogs, lineage tracking, data quality monitoring).
- Ensure compliance with enterprise architecture and data governance policies throughout the development lifecycle.
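To make the "reusable components for data validation, control logging, and exception handling" responsibility concrete, here is a minimal sketch of what such a component might look like. The record schema, field names, and `ValidationResult` structure are illustrative assumptions, not details from the posting:

```python
# Hypothetical sketch of a reusable ETL validation step.
# Field names and record shapes below are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    clean: list = field(default_factory=list)      # records that passed all checks
    rejected: list = field(default_factory=list)   # (record, reason) pairs for the control log

def validate_records(records, required_fields):
    """Split a batch into clean and rejected records, recording the reason for each rejection."""
    result = ValidationResult()
    for rec in records:
        missing = [f for f in required_fields if not rec.get(f)]
        if missing:
            result.rejected.append((rec, f"missing fields: {missing}"))
        else:
            result.clean.append(rec)
    return result

# Usage: downstream transformation steps consume result.clean,
# while result.rejected feeds exception handling and control logging.
batch = [
    {"account_id": "A1", "balance": 100.0},
    {"account_id": "", "balance": 50.0},   # fails validation: empty account_id
]
result = validate_records(batch, required_fields=["account_id", "balance"])
```

Keeping validation as a standalone, schema-driven function like this is one way such logic stays reusable across pipelines rather than being re-implemented per ingestion source.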
Required Skills & Experience:
- Expert-level experience in core Python development, particularly for back-end and data engineering use cases.
- Proficiency with data ingestion and integration, including extracting data from S3 storage systems and REST APIs.
- Solid experience in ETL pipeline design for data transformation, harmonization, and standardization.
- Background in building service layers/APIs to expose data to consumer applications.
- Familiarity with Parquet format and optimizing it for performance in large-scale data environments.
- Knowledge of metadata integration, data lineage, and governance frameworks is a strong asset.
Ready to apply?
Join Botsford Associates and take your career to the next level!
Application takes less than 5 minutes

