About the Company
- ✨ Strong work-life balance
- ✨ Stable and low-turnover team
- ✨ Positive team culture
About the Role
This role sits on a newly established team within our Big Data organization, driving multiple greenfield initiatives. The primary focus is building a data fabric platform that delivers unified API access across disparate data stores, supporting high-concurrency workloads with sub-second latency requirements at scale.
Responsibilities
- Architect and implement the core routing and query execution engine for the data fabric platform
- Develop robust connection management infrastructure spanning heterogeneous data stores (Kafka, MySQL, Flink, Delta/Iceberg)
- Engineer end-to-end request pipelines optimized for P99 latency targets and high-throughput requirements
- Design and implement fault-tolerant systems including error handling, retry mechanisms, and circuit breaker patterns
- Establish comprehensive observability framework incorporating structured logging, distributed tracing, and metrics collection
- Drive API design decisions and define service contracts in collaboration with cross-functional stakeholders
Qualifications
- Minimum 3 years of professional experience developing low-latency, high-throughput services using C++, Go, or Rust in production environments
- Demonstrated expertise in concurrency patterns, asynchronous I/O, connection pool management, and backpressure handling mechanisms
- Production-level experience with Kafka, MySQL, Apache Flink, and Delta Lake/Iceberg
- Proven track record optimizing systems to meet P99 latency targets
- Extensive experience designing and implementing services using gRPC, Protocol Buffers, or equivalent RPC frameworks at scale
- Demonstrated experience deploying and operating services managing 100K+ concurrent connections with comprehensive observability infrastructure
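The backpressure handling mentioned above can be sketched with Go's bounded channels: a buffered channel caps in-flight work, and a non-blocking send sheds load when the queue is full. The function name `submit` and the queue capacity are illustrative assumptions, not part of any real API.

```go
package main

import "fmt"

// submit tries to enqueue a job onto a bounded queue and rejects it
// when the queue is full, so producers feel backpressure instead of
// letting work pile up without limit. Illustrative sketch only.
func submit(queue chan int, job int) bool {
	select {
	case queue <- job:
		return true
	default: // queue full: shed load rather than block
		return false
	}
}

func main() {
	queue := make(chan int, 2) // capacity bounds in-flight work

	fmt.Println(submit(queue, 1)) // true
	fmt.Println(submit(queue, 2)) // true
	fmt.Println(submit(queue, 3)) // false: queue is full
}
```

Bounding the queue is what turns overload into an explicit, observable rejection rather than unbounded memory growth and latency collapse.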
Preferred Skills
- Experience with query parsing, optimization, and abstract syntax tree (AST) manipulation
- Implementation experience with adaptive rate limiting or circuit breaker patterns
- Knowledge of zero-copy techniques, memory-mapped I/O, or other advanced performance optimization strategies
- Background in stream processing frameworks and real-time data pipeline architectures
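The rate limiting listed above is commonly built on a token bucket; the sketch below shows the non-adaptive core in Go. The names (`bucket`, `allow`, `refill`) are illustrative assumptions; an adaptive limiter would additionally resize the capacity or refill rate from observed latency or error signals.

```go
package main

import "fmt"

// bucket is a minimal token bucket: each admitted request spends one
// token, and refill adds tokens back up to the burst capacity.
// Illustrative sketch, not a specific library's API.
type bucket struct {
	tokens   int
	capacity int
}

// allow admits a request if a token is available.
func (b *bucket) allow() bool {
	if b.tokens == 0 {
		return false
	}
	b.tokens--
	return true
}

// refill adds n tokens, clamped to the burst capacity.
func (b *bucket) refill(n int) {
	b.tokens += n
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
}

func main() {
	b := &bucket{tokens: 2, capacity: 2}

	fmt.Println(b.allow()) // true
	fmt.Println(b.allow()) // true
	fmt.Println(b.allow()) // false: out of tokens
	b.refill(1)
	fmt.Println(b.allow()) // true again after refill
}
```

In a real service the refill would be driven by elapsed time (tokens per second) rather than an explicit call, which is the design used by standard limiter packages.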
Ready to apply?
Join Trulyyy and take your career to the next level!
Application takes less than 5 minutes

