WHY JOIN US
If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!
ABOUT THE ROLE
We are looking for a Senior Data Engineer to take ownership of our data infrastructure, designing and optimizing high-performance, scalable solutions. You’ll work with AWS and big data frameworks like Hadoop and Spark to drive impactful data initiatives across the company.
WHAT YOU WILL DO
- Design, build, and maintain large-scale data pipelines and data processing systems in AWS;
- Develop and optimize distributed data workflows using Hadoop, Spark, and related technologies (a brief PySpark sketch follows this list);
- Collaborate with data scientists, analysts, and product teams to deliver reliable and efficient data solutions;
- Implement best practices for data governance, security, and compliance;
- Monitor, troubleshoot, and improve the performance of data systems and pipelines;
- Mentor junior engineers and contribute to building a culture of technical excellence;
- Evaluate and recommend new tools, frameworks, and approaches for data engineering.
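For a sense of what these workflows can look like in practice, here is a minimal, hypothetical PySpark sketch of a daily rollup job: it reads raw JSON events from S3, aggregates them per user and event type, and writes partitioned Parquet back to S3. The bucket paths and column names are assumptions made for illustration, not details of this role's actual pipelines.

    # Minimal PySpark sketch of a daily event rollup (illustrative only).
    # Bucket paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

    # Read raw JSON events landed in S3
    events = spark.read.json("s3://example-raw-bucket/events/2024-01-01/")

    # Count events per day, user, and event type
    daily_counts = (
        events
        .withColumn("event_date", F.to_date("timestamp"))
        .groupBy("event_date", "user_id", "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Write partitioned Parquet for downstream consumers (e.g. Redshift Spectrum)
    (daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/daily_event_counts/"))

    spark.stop()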
MUST HAVES
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field;
- 5+ years of experience in data engineering, software engineering, or related roles;
- Strong hands-on expertise with AWS services (S3, EMR, Glue, Lambda, Redshift, etc.);
- Deep knowledge of big data ecosystems, including Hadoop (HDFS, Hive, MapReduce) and Apache Spark (PySpark, Spark SQL, streaming);
- Strong SQL skills and experience with relational and NoSQL databases;
- Proficiency in Python, Java, or Scala for data processing and automation;
- Experience with workflow orchestration tools (Airflow, Step Functions, etc.), as sketched after this list;
- Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts;
- Excellent problem-solving skills and ability to work in fast-paced environments;
- Ability to work German time zone hours (6-7 am to 2-3 pm Brazil/ART time);
- Upper-Intermediate English level.
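As an illustration of the orchestration experience listed above, the sketch below shows a minimal, hypothetical Airflow DAG that chains an extract task with a Spark rollup task; the DAG id, task ids, and callables are placeholders invented for this example, not part of the job description.

    # Minimal Airflow 2.x DAG sketch: a daily extract -> transform pipeline.
    # All ids and callables are hypothetical placeholders.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_events(**context):
        # Placeholder: pull raw events into an S3 staging area
        print("extracting events for", context["ds"])

    def run_spark_rollup(**context):
        # Placeholder: in practice this might submit an EMR or Spark job
        print("aggregating events for", context["ds"])

    with DAG(
        dag_id="daily_event_rollup",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_events",
                                 python_callable=extract_events)
        transform = PythonOperator(task_id="run_spark_rollup",
                                   python_callable=run_spark_rollup)

        extract >> transform  # rollup runs only after a successful extract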
NICE TO HAVES
- Experience with real-time data streaming platforms (Kafka, Kinesis, Flink);
- Knowledge of containerization and orchestration (Docker, Kubernetes);
- Familiarity with data governance, lineage, and catalog tools;
- Previous leadership or mentoring experience.
PERKS AND BENEFITS
- Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
- Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
- A selection of exciting projects: Join projects that develop modern solutions for top-tier clients, including Fortune 500 enterprises and leading product brands.
- Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or from the office, whichever makes you happiest and most productive.
Ready to apply?
Join AgileEngine and take your career to the next level!
Application takes less than 5 minutes