In this role, you will work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we bring deep technical and industry expertise to public and private sector clients around the world. You will be part of a team that delivers high‑impact solutions and drives adoption of modern data and cloud technologies.
Your Role And Responsibilities
In this role, you will design, build, and maintain scalable, reliable data pipelines and platforms used across analytics, AI, and business systems. You will collaborate with cross-functional teams to ensure data is accessible, high-quality, and aligned with business goals. Responsibilities include:
- Designing, developing, and optimizing data processing systems, including ETL/ELT pipelines and data orchestration workflows
- Building and maintaining data pipelines for batch and real-time (streaming) use cases
- Working with data scientists, software engineers, and business stakeholders to deliver high‑quality, production‑ready data solutions
- Implementing and enforcing data quality, validation, and governance practices
- Ensuring compliance with data security standards, access controls, and regulatory requirements
- Monitoring data platform performance and implementing improvements to ensure reliability, scalability, and cost effectiveness
- Contributing to standardization, automation, and best practices across data engineering teams
Education: Bachelor's Degree
Required Technical And Professional Expertise
- Strong Python skills for data processing, pipeline development, and automation
- Hands‑on experience with Apache Spark / PySpark for large‑scale distributed data processing
- Experience with Databricks and cloud platforms (AWS or Azure), including Delta Lake and related data management tools
- Proven experience designing, developing, and maintaining scalable ETL/ELT pipelines and data platform components
- Familiarity with building both batch and real‑time (streaming) data workflows, preferably with technologies like Kafka, Event Hubs, or Kinesis
- Experience with DevOps practices and Infrastructure as Code (Terraform preferred)
- Understanding of data modeling, data warehousing concepts, and modern data architectures (e.g., Lakehouse)
- Experience building or integrating with LLM‑powered or AI‑driven solutions
- Familiarity with FastAPI and Pydantic for service development
- Certification in AWS, Azure, or Databricks
- Knowledge of CI/CD pipelines for data workloads