Accountabilities
- Design, build, and optimize ETL/ELT workflows using Databricks, SQL, and Python/PySpark (an illustrative sketch follows this list).
- Develop and maintain robust, scalable, and efficient data pipelines for processing large datasets.
- Work on cloud platforms (Azure, AWS) to build and manage data lakes and scalable architectures.
- Utilize cloud services like Azure Data Factory and AWS Glue for data processing.
- Use Databricks for big data processing and analytics.
- Leverage Apache Spark for distributed computing and data transformations.
- Create and manage SQL-based data solutions ensuring scalability and performance.
- Develop and enforce data quality checks and validations.
- Collaborate with cross-functional teams to deliver impactful data solutions.
- Leverage CI/CD pipelines to streamline development and deployment of workflows.
- Maintain clear documentation for data workflows and optimize data systems.
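For a concrete sense of the day-to-day work, here is a minimal PySpark sketch of an ETL pipeline with a basic data quality gate, of the kind described above. The paths, dataset, and column names (order_id, order_ts, amount) are hypothetical examples for illustration, not details from this role.

```python
# Minimal PySpark ETL sketch: read raw CSV, clean and validate, write partitioned Parquet.
# All paths, columns, and thresholds below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV files landed in the data lake (hypothetical path).
raw = spark.read.option("header", True).csv("s3://example-lake/raw/orders/")

# Transform: cast types, deduplicate on the business key, derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Data quality check: fail fast if required fields contain nulls.
null_count = orders.filter(
    F.col("order_id").isNull() | F.col("amount").isNull()
).count()
if null_count > 0:
    raise ValueError(f"Quality check failed: {null_count} rows with null order_id/amount")

# Load: write partitioned Parquet for downstream analytics.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-lake/curated/orders/"
)
```

On Databricks, logic like this would typically target Delta tables rather than plain Parquet and be scheduled as a job or triggered from an orchestrator such as Azure Data Factory or AWS Glue.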
Requirements
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
- 3–6 years of experience in Data Engineering or related roles.
- Hands-on experience with big data processing frameworks and data lakes.
- Proficiency in Python, SQL, and PySpark for data manipulation.
- Experience with Databricks and Apache Spark.
- Knowledge of cloud platforms like Azure and AWS.
- Familiarity with ETL tools (Alteryx is a plus).
- Strong understanding of distributed systems and big data technologies.
- Basic understanding of DevOps principles and CI/CD pipelines.
- Hands-on experience with Git, Jenkins, or Azure DevOps.
Benefits
- Flexible remote working conditions.
- Opportunities for professional growth and training.
- Collaborative and inclusive company culture.
- Access to modern technologies and tools.
- Health and wellness benefits.
- Work-life balance.
- Participation in innovative projects.
- Dynamic and fast-paced working environment.
We use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team.
We appreciate your interest and wish you the best!
Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre-contractual measures under applicable data protection laws (including GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.