Data Engineer
We’re seeking a Data Engineer to join our growing AI team. You’ll be responsible for building and maintaining data pipelines, optimizing data flow, and ensuring high data quality across multiple sources and systems. You’ll work closely with engineering and AI teams to deliver reliable datasets that enable advanced machine learning, analytics, and automation.
Key Responsibilities:
- Design, build, and maintain scalable data pipelines for batch and real-time processing.
- Develop robust ETL workflows that process, clean, and transform data from multiple sources.
- Integrate APIs and external data services to expand our data ecosystem.
- Collaborate with machine learning and software engineering teams to ensure smooth data delivery for model training and application development.
- Implement data monitoring, testing, and alerting to guarantee data integrity and performance.
- Develop and optimize database schemas, warehousing solutions, and caching strategies.
- Participate in continuous improvement by identifying and resolving data bottlenecks and performance issues.
- Stay up to date with modern data engineering practices, tools, and technologies.
Qualifications:
- Strong experience building and managing data pipelines in Python, using orchestration frameworks such as Airflow.
- Proficiency with SQL and NoSQL databases, including data modeling and query optimization.
- Hands-on experience with cloud-based data tools and storage (AWS Glue, GCP BigQuery, Azure Data Factory, or similar).
- Familiarity with streaming technologies such as Kafka or Spark Streaming.
- Experience integrating with APIs and building data ingestion frameworks.
- Knowledge of containerization (Docker, Kubernetes) and infrastructure as code (Terraform, Ansible).
- Understanding of data governance, security, and compliance best practices.
- Bonus: experience collaborating with ML engineers or supporting machine learning pipelines.
What We Offer:
- Meaningful Work: Build the data foundations powering real-world AI solutions.
- Professional Growth: Work with a cutting-edge stack and gain exposure to AI and automation systems.
- Flexible Working: Remote or hybrid arrangements available.
- Collaborative Culture: Join a multidisciplinary team building technology that makes an impact.
- Compensation: Competitive salary and equity.
Why DeepQuery:
At DeepQuery, we are redefining how AI transforms business operations. Our intelligent systems automate workflows, streamline decision-making, and empower organizations to scale faster. This role gives you a front-row seat to shaping our data infrastructure and driving the success of our AI platform from the ground up.
Ready to apply?
Join DeepQuery and take your career to the next level!