Responsibilities
- Contribute to the design and implementation of a data mesh architecture using GraphQL APIs to expose domain-owned data products.
- Build and maintain a modern AWS-based data lake using S3, Glue, Lake Formation, Athena, and Redshift.
- Develop and optimize ETL/ELT pipelines using AWS Glue and PySpark to support batch and streaming data workloads (see the illustrative sketch after this list).
- Implement AWS DMS pipelines to replicate data into Aurora PostgreSQL for near real-time analytics and reporting.
- Support data governance, quality, observability, and API design best practices.
- Collaborate with product, engineering, and analytics teams to deliver robust, reusable data solutions.
- Contribute to automation and CI/CD practices for data infrastructure and pipelines.
- Stay current with emerging technologies and industry trends to help evolve the platform.
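For a concrete sense of the Glue/PySpark pipeline work described above, here is a minimal sketch of a Glue batch job. It is illustrative only: the database (raw_zone), table (orders), columns, and bucket (example-curated-bucket) are placeholder assumptions, not Suvoda's actual resources.

```python
# Minimal AWS Glue PySpark batch job: read raw data registered in the Glue
# Data Catalog, apply a simple transform, and write partitioned Parquet
# back to the S3 data lake for querying via Athena or Redshift Spectrum.
# All resource names below are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw dataset from the Glue Data Catalog (placeholder names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
).toDF()

# Example transform: normalize a timestamp and derive a partition column.
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write curated Parquet back to S3, partitioned by date for efficient scans.
(curated.write.mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```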
Requirements
- Bachelor’s degree in a technical field such as Computer Science or Mathematics.
- At least 4 years of experience in data engineering, with demonstrated ownership of complex data systems.
- Solid experience with AWS data lake technologies (S3, Glue, Lake Formation, Athena, Redshift).
- Understanding of data mesh principles and decentralized data architecture.
- Proficiency in Python and SQL.
- Experience with data modeling, orchestration tools (e.g., Airflow), and CI/CD pipelines.
- Strong communication and collaboration skills.
Preferred Qualifications
- Master’s degree, ideally with a focus on data engineering, distributed systems, or cloud architecture.
- Hands-on experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Expertise in AWS Glue and PySpark for scalable ETL/ELT development.
- Experience with event-driven architectures (e.g., Kafka, Kinesis).
- Familiarity with data cataloging and metadata management tools.
- Knowledge of data privacy and compliance standards (e.g., GDPR, HIPAA).
- Background in agile development and DevOps practices.
As set forth in Suvoda’s Equal Employment Opportunity policy, we do not discriminate on the basis of any protected group status under any applicable law.
If you are based in California, we encourage you to read this important information for California residents linked here.