Backend Engineer (Java + Go + Microservices + LLM Infra) | Experience: 6+ Years | Immediate Joiners - 30 Days (Official)
We are hiring a Backend Engineer with strong hands-on Go experience and a track record of deploying LLMs in the cloud.
🔧 Must-Haves (Primary Focus)
- Strong Golang backend engineering experience in high-scale, production microservices environments
- Deep hands-on knowledge of Go concurrency (goroutines, channels, worker pools, context, synchronization primitives)
- Proven experience designing and operating microservices architectures (service boundaries, APIs, resilience, observability)
- Cloud-native development on AWS (preferred); GCP/Azure acceptable
- Kubernetes expertise including containerization, deployment, autoscaling (HPA), and service orchestration
- gRPC and/or REST for inter-service communication
- Kafka (or equivalent) for event-driven systems
- Redis (or similar) for caching, rate limiting, and performance optimization
- LLM deployment exposure (Hugging Face, self-hosted, or managed LLM APIs)
🔧 Also Required (Secondary)
- Strong Java backend experience in product-based or high-scale systems
- Experience building Java microservices using modern frameworks (Spring Boot, Micronaut, etc.)
- Understanding of concurrency, JVM performance, and memory management fundamentals
🎯 Good-to-Have / Bonus
- Go side projects or open-source contributions on GitHub
- Experience with AI/LLM cost optimization (GPU/CPU trade-offs, batching, inference efficiency)
- Knowledge of distributed caching strategies, traffic shaping, and rate control
- Exposure to scaling AI inference services in production
Ready to apply?
Join Recro and take your career to the next level!
Application takes less than 5 minutes

