We are tech transformation specialists, uniting human expertise with AI to create scalable tech solutions.
With over 8,000 CI&Ters around the world, we’ve built partnerships with more than 1,000 clients during our 30 years of history. Artificial Intelligence is our reality.
The Junior Data Engineer will be the primary technical resource supporting the data, analytics, and AI roadmap for 2026 and beyond. The role spans data engineering, analytics engineering, and machine learning deployment, ensuring reliable pipelines, curated data models, and smooth operationalization of ML/AI solutions.
Key Responsibilities
Maintain & Operate Data Pipelines
Monitor, maintain, and troubleshoot pipelines running in BigQuery, Google Cloud Storage (GCS), and other cloud systems.
Ensure accuracy, reliability, and timeliness of data for reporting and analytics.
Investigate issues quickly and escalate where needed.
Build & Optimize Data Pipelines
Develop new ingestion and transformation pipelines to support reporting, analysis, forecasting, and AI workloads.
Optimize storage, compute, and transformation logic to manage and reduce cloud costs.
Improve testing, observability, and documentation across pipelines.
Support Data Modeling & Analytics Engineering
Assist in building curated models in Dataform to power reporting and dashboards.
Implement tests, data quality checks, and pipeline automation.
Collaborate with business teams to translate requirements into scalable data models.
Ensure underlying datasets are accurate and up to date.
Machine Learning & AI Deployment
Partner with Data Science to deploy forecasting, pricing, and AI models to production.
Build and maintain feature pipelines for ML models.
Support scheduling, monitoring, and alerting for deployed models.
Strategic Project Contributions
Revenue Management & Pricing: Build/maintain pricing pipelines and support automation for pricing optimization and approvals.
Forecasting (Wholesale & M2M Deployment): Build feature pipelines and assist in deploying/monitoring forecasting solutions for supply chain and planning.
AI & Agent Solutions: Support development/deployment of AI models and agent-based workflows; integrate outputs into business systems and dashboards.
Operations & Procurement: Develop/maintain datasets and pipelines supporting procurement, operations, supplier performance, and stock management; improve data visibility and quality.
Training, Learning & Global Collaboration
UK Team Enablement: Contribute to training on BigQuery and engineering toolsets; document processes, datasets, and pipelines for scalability.
Global Alignment: Collaborate with global teams on data definitions, modeling methods, and shared datasets; follow global standards to ensure scalability across markets.
Must-Have Skills
Hands-on experience with Google BigQuery and Google Cloud Storage (GCS).
Solid fundamentals in SQL and data transformation.
Experience building/maintaining batch pipelines; familiarity with scheduling/monitoring (e.g., Cloud Scheduler, Airflow equivalents, or similar).
Basic understanding of Dataform (or similar tools like dbt) for curated data modeling.
Knowledge of data quality practices: testing, validation, observability.
Exposure to ML deployment concepts and feature pipelines (partnering with Data Science).
Strong troubleshooting, documentation, and collaboration skills.
Nice-to-Have Skills
Cost optimization in cloud environments (compute/storage tuning).
Experience with CI/CD for data pipelines.
Familiarity with GCP services beyond BigQuery/GCS (e.g., Pub/Sub, Cloud Functions, Cloud Composer).
Experience integrating AI outputs into dashboards and business systems.
Basic understanding of pricing optimization and forecasting workflows.
Our benefits:
- Health and dental insurance
- Meal and food allowance
- Childcare assistance
- Extended paternity leave
- Partnership with gyms and health and wellness professionals via Wellhub (Gympass) and TotalPass
- Profit Sharing and Results Participation (PLR)
- Life insurance
- Continuous learning platform (CI&T University)
- Discount club
- Free online platform dedicated to physical, mental, and overall well-being
- Pregnancy and responsible parenting course
- Partnerships with online learning platforms
- Language learning platform
And many more!
More details about our benefits here: https://ciandt.com/br/pt-br/carreiras
At CI&T, inclusion starts at the first contact. If you are a person with a disability, it is important to present your assessment during the selection process. See which data needs to be included in the report by clicking here. This way, we can ensure the support and accommodations you deserve. If you do not yet have the assessment, don't worry: we can support you in obtaining it. We have a dedicated Health and Well-being team, inclusion specialists, and affinity groups who will be with you at every stage. Count on us to walk this journey side by side with you.