We are seeking highly skilled and motivated Data Engineers to join our growing data team. The ideal candidates will be responsible for building and maintaining scalable data pipelines, managing data architecture, and enabling data-driven decision-making across the organization. The roles require hands-on experience with cloud platforms, specifically AWS and/or Azure, including proficiency in their respective data and analytics services as follows:
Amazon Web Services (AWS):
- Experience with AWS Glue for ETL/ELT processes
- Familiarity with Amazon Redshift, Athena, S3, and Lake Formation
- Use of AWS Lambda, Step Functions, and CloudWatch for data pipeline orchestration and monitoring
- Exposure to Amazon Kinesis or Kafka on AWS for real-time data streaming
- Knowledge of IAM, VPC, and security practices in AWS data environments
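As a flavour of the Lambda-based orchestration listed above, here is a minimal handler sketch. It is illustrative only, not part of the posting: the event shape follows the standard S3 notification format, and the downstream job it alludes to is hypothetical.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler sketch for a pipeline step.

    Extracts the S3 object keys from an S3 event notification so a
    downstream step (e.g. a Glue job run or a Step Functions state)
    could process only the newly arrived files.
    """
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records]
    # A real handler would go on to start a Glue job or publish the
    # keys to Kinesis; here it simply returns them.
    return {"statusCode": 200, "body": json.dumps({"keys": keys})}
```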
Microsoft Azure:
- Experience with Azure Data Factory (ADF)/Synapse for data integration and orchestration
- Familiarity with Azure Synapse Analytics, Azure Data Lake Storage (ADLS), and Azure SQL Database
- Hands-on with Databricks on Azure and Apache Spark for data processing and analytics
- Exposure to Azure Event Hubs, Azure Functions, and Logic Apps
- Understanding of Azure Monitor, Log Analytics, and role-based access control
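To give a flavour of the Spark-on-Databricks processing mentioned above, here is the kind of per-record cleaning step such a pipeline might apply. It is written in plain Python rather than PySpark so it stands alone, and every field name is invented for illustration.

```python
def clean_events(rows):
    """Normalise a batch of raw event records -- the sort of
    map/filter step you would express with DataFrame operations in
    Spark on Azure Databricks. Field names here are hypothetical."""
    cleaned = []
    for row in rows:
        if not row.get("user_id"):  # drop records with no user id
            continue
        cleaned.append({
            "user_id": str(row["user_id"]).strip(),
            "event": row.get("event", "unknown").lower(),
            "value": float(row.get("value", 0.0)),
        })
    return cleaned
```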
Contract type: Employment or collaboration contract
Responsibilities
- Design, develop, and maintain robust and scalable data pipelines to ingest, transform, and store data from diverse sources
- Optimize data systems for performance, scalability, and reliability in a cloud-native environment
- Work closely with data analysts, data scientists, and other stakeholders to ensure high data quality and availability
- Develop and manage data models using DBT, ensuring modular, testable, and well-documented transformation layers
- Implement and enforce data governance, security, and privacy standards
- Manage and optimize cloud data warehouses, especially Snowflake, for performance, cost-efficiency, and scalability
- Monitor, troubleshoot, and improve data workflows and ETL/ELT processes
- Collaborate in the design and deployment of data lakes, warehouses, and lakehouse architectures
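The responsibilities above emphasise robust, re-runnable pipelines. One property that makes retries safe is idempotent loading, sketched here with an in-memory stand-in for a target table; all names are hypothetical.

```python
def upsert(target, batch, key="id"):
    """Idempotent load step: merge a batch of records into a target
    table (modelled as a dict keyed by primary key). Re-running the
    same batch leaves the target unchanged, which is what makes
    pipeline retries safe."""
    for record in batch:
        target[record[key]] = record  # last write wins per key
    return target
```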
Requirements
- 3+ years of experience as a Data Engineer or in a similar role
- Strong proficiency in SQL and Python
- Solid understanding of data modeling, ETL/ELT processes, and pipeline orchestration
- Experience working in DevOps environments using CI/CD tools (e.g., GitHub Actions, Azure DevOps)
- Knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes, Airflow)
- Familiarity with data cataloging tools like AWS Glue Data Catalog or Azure Purview
- Strong interpersonal and communication skills; able to collaborate with cross-functional teams and external clients
- Adaptability in fast-paced environments with shifting client needs and priorities
- Analytical mindset with attention to detail and a commitment to delivering quality results
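For the SQL and Python proficiency noted above, here is a self-contained sketch using Python's built-in sqlite3 module, a miniature stand-in for the warehouse queries a Snowflake or Redshift engineer writes daily. The table and column names are invented for illustration.

```python
import sqlite3

def top_sources(rows):
    """Load (source, amount) rows into an in-memory SQLite table and
    aggregate them with SQL, ordered by total descending."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE ingest (source TEXT, amount REAL)")
    conn.executemany("INSERT INTO ingest VALUES (?, ?)", rows)
    result = conn.execute(
        "SELECT source, SUM(amount) AS total "
        "FROM ingest GROUP BY source ORDER BY total DESC"
    ).fetchall()
    conn.close()
    return result
```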
Ready to apply?
Join Tecknoworks and take your career to the next level!