About the Role:
We are looking for a passionate Data Engineer to join our growing data team. You will transform raw data into meaningful, reliable data products, play a key part in modernizing traditional ETL processes, and help architect next-generation data platforms built on big data technologies such as Spark and Flink.
Responsibilities:
- Design, develop, and manage end-to-end, efficient, and scalable data pipelines using big data processing technologies like Spark and Flink, as well as industry-standard ETL tools
- Create flexible data models and data warehousing solutions to support analytics and reporting processes that meet business needs
- Analyze existing ETL/ELT processes and SQL queries to implement improvements, optimize resource consumption, and enhance performance
- Collaborate closely with data scientists, analysts, and business units to understand data requirements and deliver high-quality data products
- Ensure the implementation of data governance and security standards, including data quality, data lineage, and reliability
- Stay current with new technologies and approaches in the data field and proactively recommend improvements to the existing infrastructure
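To give a feel for the pipeline work described above, here is a deliberately minimal extract-transform-load sketch in plain Python. It is illustrative only: a production pipeline of the kind this role involves would run on Spark or Flink against real sources and sinks, and the record fields below are invented for the example.

```python
# Minimal ETL sketch. Amounts are kept in integer cents to avoid
# floating-point drift; every name here is hypothetical.

def extract():
    # Stand-in for reading raw records from a source system
    # (files, a message queue, an operational database).
    return [
        {"user_id": 1, "amount_cents": 1999},
        {"user_id": 2, "amount_cents": 500},
        {"user_id": 1, "amount_cents": 350},
    ]

def transform(rows):
    # Aggregate spend per user -- the "meaningful data product".
    totals = {}
    for row in rows:
        totals[row["user_id"]] = totals.get(row["user_id"], 0) + row["amount_cents"]
    return totals

def load(totals, warehouse):
    # Stand-in for writing to a warehouse table.
    warehouse.update(totals)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {1: 2349, 2: 500}
```

The same extract/transform/load shape carries over to Spark jobs; only the engine and the I/O connectors change.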
Required Qualifications:
- Bachelor's degree in Computer Science, Management Information Systems (MIS), Mathematics, or a related field
- 3+ years of hands-on experience in Data Engineering or Data Warehousing
- Proven experience in developing large-scale data pipelines and ETL/ELT workflows using Python and Spark
- Hands-on experience with workflow scheduling platforms such as Airflow, Dagster, or similar technologies
- Advanced proficiency in SQL and experience with procedural SQL languages such as Oracle PL/SQL
- Experience working with structured and semi-structured data formats like Parquet, Avro, and JSON
- In-depth knowledge of modern data architectures such as data lakes, data lakehouses, and core data modeling principles
- Experience with at least one industry-standard ETL tool such as ODI, Informatica, or Talend
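As a small illustration of the semi-structured data handling listed above, the sketch below flattens a nested JSON event into a flat, warehouse-friendly record using only the standard library. The event shape is invented for the example; Parquet and Avro handling would require libraries such as pyarrow or fastavro and is omitted here.

```python
import json

# Flatten a nested JSON document into dotted column names -- a common
# step when landing semi-structured events into a tabular schema.

def flatten(obj, prefix=""):
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            # Recurse into nested objects, extending the column prefix.
            flat.update(flatten(value, f"{name}."))
        else:
            flat[name] = value
    return flat

event = json.loads('{"id": 7, "user": {"id": 42, "geo": {"country": "TR"}}}')
print(flatten(event))
# {'id': 7, 'user.id': 42, 'user.geo.country': 'TR'}
```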
Preferred Qualifications (Nice to have):
- Experience with data processing and optimization in cloud environments (AWS, Azure, GCP), with a preference for GCP
- Experience with real-time data processing and streaming technologies, such as Kafka
- Knowledge of containerization technologies like Docker and Kubernetes, and CI/CD processes
- Familiarity with BI tools such as Power BI
- Experience in the banking or finance industry is a significant advantage
Personal Attributes:
- Excellent communication skills in English, both written and verbal
- Strong analytical thinking, problem-solving skills, and a results-oriented mindset
- A team player with the ability to communicate effectively with stakeholders at all technical levels
- Detail-oriented with a commitment to delivering high-quality work
Follow us:
- LinkedIn: https://www.linkedin.com/company/innovance-consultancy
- LinkedIn: https://www.linkedin.com/company/dataspecta
- Instagram: https://www.instagram.com/innovanceconsultancy
- Instagram: https://www.instagram.com/dataspecta
Ready to apply?
Join Innovance Consultancy and take your career to the next level!
Application takes less than 5 minutes.

