Why Join Exadel
We’re an AI-first global tech company with 25+ years of engineering leadership, 2,000+ team members, and 500+ active projects powering Fortune 500 clients, including HBO, Microsoft, Google, and Starbucks.
From AI platforms to digital transformation, we partner with enterprise leaders to build what’s next.
What powers it all? Our people are ambitious, collaborative, and constantly evolving.
About The Client
Our client is one of the Big Four accounting firms and the world’s largest professional services network. Headquartered in London, they operate in 150+ countries with 460,000+ professionals delivering excellence in audit, tax, consulting, and advisory.
About The Project
As a Data Engineer, you'll join a cross-functional development team working on GenAI solutions for digital transformation across Enterprise Products.
The team you'll be working with is responsible for the design, development, and deployment of innovative enterprise technology, tools, and standard processes that support the delivery of tax services. It focuses on delivering comprehensive, value-added, and efficient tax services to clients. It is a dynamic team of professionals with varied backgrounds: tax, technology development, change management, and project management. The team consults and executes on a wide range of initiatives involving process and tool development and implementation, including training development, engagement management, and tool design.
Project Tech Stack
Azure Cloud, Microservices Architecture, .NET 8, ASP.NET Core services, Python, MongoDB, Azure SQL, Angular 18, Kendo, GitHub Enterprise with Copilot
What You’ll Do
- Build, deploy, and maintain mission-critical analytics solutions that process terabytes of data at big-data scale
- Prepare and process complex datasets for use by Data Science
- Process data for vector DBs
- Ensure real-time monitoring for data pipeline health and quality
- Implement AI-specific security protocols
- Collaborate with Data Science, DevOps and Security teams
Requirements
- 6+ years of experience in data engineering
- Experience coding in SQL/Python, with solid CS fundamentals including data structure and algorithm design
- Hands-on implementation experience with a combination of the following technologies: Hadoop, MapReduce, Kafka, Hive, Spark, and SQL and NoSQL data warehouses
- Skills in working with vector databases (Milvus, Postgres, etc.)
- Knowledge of Data Warehousing, design, implementation and optimization
- Practice in Data Quality testing, automation and results visualization
- Competency in Azure cloud data platform
- Knowledge of embedding models and retrieval-augmented generation (RAG) architectures
- Strong analytical and problem-solving abilities with a detail-oriented mindset
- Pragmatic approach to balancing process against flexibility in achieving objectives
- Excellent organizational skills, including the capacity to self-manage, structure work, set priorities, and work to deadlines
- Excellent troubleshooting and communication skills
- Understanding of LLM pipelines, including data preprocessing for GenAI models
- Expertise in deploying data pipelines for AI/ML workloads, ensuring scalability and efficiency
- Familiarity with model monitoring, feature stores (Feast, Vertex AI Feature Store), and data versioning
- Experience with CI/CD for ML pipelines (Kubeflow, MLflow, Airflow, SageMaker Pipelines)
- Understanding of real-time streaming for ML model inference (Kafka, Spark Streaming)
- Experience supporting data scientists and complex statistical use cases is highly desirable
English level: Intermediate+
Legal & Hiring Information
- Exadel is proud to be an Equal Opportunity Employer committed to inclusion across minority, gender identity, sexual orientation, disability, age, and more
- Reasonable accommodations are available to enable individuals with disabilities to perform essential functions
- Please note: this job description is not exhaustive. Duties and responsibilities may evolve based on business needs
Ready to apply?
Join Exadel and take your career to the next level!
The application takes less than 5 minutes

