- We are looking for a Poland-based candidate.
Our secret?
A culture that’s fast, flexible, and fiercely entrepreneurial. We move quickly, think creatively, and always put our people first.
We’re passionate about growth — both for our clients and ourselves — and that means attracting the very best talent to join us on this exciting journey.
We’re Proud To Be
- Trailblazers in banking, payments, capital markets, wealth, and asset management
- Champions of an agile, nimble, and innovative work environment
- Dedicated to building a team of top-notch professionals who share our drive and vision
Responsibilities
- Developing and maintaining data pipelines using Python and SQL.
- Building new integrations with external provider APIs as well as internal systems.
- Developing and managing data storage solutions, including data warehouses and relational databases.
- Performing initial data analysis and data quality checks; ensuring data quality and integrity.
- Monitoring data infrastructure for performance, security, and reliability.
- Collaborating with data scientists and various IT teams to ensure business-aligned, compliant, and secure integration with existing systems.
- Streamlining and automating the data science product lifecycle, from development to deployment and monitoring.
- Designing, developing, and maintaining tools that enable reproducible experimentation, model versioning, automated deployment, serving, etc.
- Testing, refactoring, optimizing, and packaging code developed by data scientists; improving, deploying, monitoring, and maintaining the resulting models.
Requirements
- Strong Python developer.
- 3+ years of experience developing ETL pipelines using Python and SQL.
- Proficiency in data modelling and database management; experience managing an RDBMS (preferably PostgreSQL).
- Proficiency with Linux (especially Red Hat distributions) and version control (Git).
- Familiarity with DevOps pipelines (especially Azure DevOps Services).
- Familiarity with containerization technologies (Docker/Podman).
- Eagerness to learn.
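To give candidates a feel for the ETL work described above, here is a minimal sketch of an extract-transform-load pipeline in Python and SQL. It uses the standard-library SQLite driver as a stand-in for the actual data warehouse, and all table and field names are hypothetical, not taken from Capco's systems:

```python
import sqlite3

def extract(raw_rows):
    # Extract: in the real role this would pull from an external
    # provider API or internal system; a list stands in here.
    return raw_rows

def transform(rows):
    # Transform: a basic data-quality check -- drop records with
    # missing names or amounts before loading.
    return [r for r in rows if r["name"] and r["amount"] is not None]

def load(conn, rows):
    # Load: write validated records into a relational store.
    conn.execute("CREATE TABLE IF NOT EXISTS payments (name TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO payments (name, amount) VALUES (:name, :amount)", rows
    )
    conn.commit()

def run_pipeline(raw_rows):
    conn = sqlite3.connect(":memory:")
    load(conn, transform(extract(raw_rows)))
    # Return the number of rows that survived the quality checks.
    return conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]

if __name__ == "__main__":
    raw = [
        {"name": "acme", "amount": 10.0},
        {"name": "", "amount": 5.0},        # fails quality check
        {"name": "globex", "amount": None},  # fails quality check
    ]
    print(run_pipeline(raw))  # only the valid record is loaded
```

In production the same shape typically runs against PostgreSQL with scheduled orchestration, but the extract/transform/load separation and the quality gate before the load step are the core of the pattern.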
Recruitment Process
- Screening call with the Recruiter
- Hiring Manager Technical Interview
- Client Interview
- Feedback/Offer
We have been informed of several recruitment scams targeting the public. We strongly advise you to verify identities before engaging in any recruitment-related communication. All official Capco communication will be conducted via a Capco recruiter.

