You will collaborate closely with cross-functional teams, including data scientists, analysts, and software engineers, ensuring smooth data flow and optimizing our services.
Requirements
- Experience using GenAI/LLM-based tools for development (e.g., Windsurf, Cursor, Copilot)
- Bachelor's degree in Computer Science, Engineering, or a related field
- 4+ years of professional experience as a Backend Engineer
- Full proficiency in Java (Spring Boot) and Kafka, with a proven track record of working with microservices architecture and streaming solutions
- Proficiency in AWS services for data storage, processing, and analytics
- Familiarity with Apache Spark
- Experience with databases designed for big data and large-scale systems (e.g., Snowflake, Cassandra, or similar technologies)
- Hands-on experience working with massive datasets
- Strong experience in designing and constructing ETL processes for data transformation and integration (an advantage)
- Demonstrated ability to challenge decisions and work independently
- Strong problem-solving skills and meticulous attention to detail
- Familiarity with Python or Node.js
Responsibilities
- Utilize Java, including Spring Boot, to build robust and high-performance data processing services within our data platform
- Implement real-time data streaming solutions using Kafka, ensuring timely data ingestion and availability
- Collaborate closely with cross-functional teams to comprehend data requirements, identify opportunities for data optimization, and support data-driven initiatives
- Lead the design, development, and maintenance of efficient and scalable data pipelines, facilitating data collection, processing, and transformation from diverse sources
- Leverage AWS services for data storage, processing, and analytics, adhering to security and performance best practices
- Monitor and troubleshoot service performance, proactively identifying bottlenecks and implementing optimizations
- Uphold data integrity, reliability, and availability by implementing effective ETL processes and conducting data quality checks
What We Offer
- Direct cooperation on an established, long-term, and growing project
- Truly competitive salary
- Help and support from our caring HR team
Ready to apply?
Join Globaldev Group and take your career to the next level!
Applying takes less than 5 minutes.