Come work for the team that brought you NCCL, NVSHMEM & GPUDirect. Our GPU communication libraries are crucial for scaling Deep Learning and HPC applications! We are looking for a motivated performance engineer to influence the roadmap of our communication libraries. Today's DL and HPC applications have enormous compute demands and run at scales of up to tens of thousands of GPUs. The GPUs are connected with high-speed interconnects (e.g., NVLink, PCIe) within a node and with high-speed networking (e.g., InfiniBand, Ethernet) across nodes. Communication performance between GPUs has a direct impact on end-to-end application performance, and the stakes are even higher at large scale! This is an outstanding opportunity for someone with an HPC and performance background to advance the state of the art in this space. Are you ready to contribute to the development of innovative technologies and help realize NVIDIA's vision?
What You Will Be Doing
- Conduct in-depth performance characterization and analysis on large multi-GPU and multi-node clusters (see the sketch after this list)
- Study the interaction of our libraries with all HW (GPU, CPU, networking) and SW components in the stack
- Evaluate proofs of concept and conduct trade-off analyses when multiple solutions are available
- Triage and root-cause performance issues reported by our customers
- Collect large volumes of performance data; build tools and infrastructure to visualize and analyze it
- Collaborate with a dynamic team across multiple time zones
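To give a concrete sense of what performance characterization of a communication library can look like, here is a minimal, hypothetical sketch (not NVIDIA's internal tooling) that times a single-node NCCL all-reduce across all visible GPUs from one process and reports algorithm and bus bandwidth. The 256 MB buffer, 20 iterations, and the single-process ncclCommInitAll setup are illustrative assumptions; the 2*(n-1)/n bus-bandwidth factor for all-reduce follows the public nccl-tests methodology, and error checking is omitted for brevity.

```c
/*
 * Hypothetical sketch (not NVIDIA's internal tooling): time a single-node
 * NCCL all-reduce across all visible GPUs and report algorithm bandwidth
 * and bus bandwidth. Buffers are left uninitialized; only timing matters.
 */
#include <cuda_runtime.h>
#include <nccl.h>
#include <stdio.h>

#define MAX_GPUS 16

int main(void) {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev > MAX_GPUS) ndev = MAX_GPUS;

    const size_t count = 64 * 1024 * 1024;        /* 64M floats = 256 MB per GPU */
    const size_t bytes = count * sizeof(float);
    const int iters = 20;

    ncclComm_t comms[MAX_GPUS];
    cudaStream_t streams[MAX_GPUS];
    float *sendbuf[MAX_GPUS], *recvbuf[MAX_GPUS];
    int devs[MAX_GPUS];

    for (int i = 0; i < ndev; ++i) devs[i] = i;
    ncclCommInitAll(comms, ndev, devs);           /* one communicator per local GPU */

    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamCreate(&streams[i]);
        cudaMalloc((void **)&sendbuf[i], bytes);
        cudaMalloc((void **)&recvbuf[i], bytes);
    }

    /* Record events on GPU 0's stream; every iteration needs all GPUs to
     * participate, so this approximates the total collective time. */
    cudaEvent_t start, stop;
    cudaSetDevice(0);
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, streams[0]);
    for (int it = 0; it < iters; ++it) {
        ncclGroupStart();                         /* launch on all GPUs before waiting */
        for (int i = 0; i < ndev; ++i)
            ncclAllReduce(sendbuf[i], recvbuf[i], count, ncclFloat, ncclSum,
                          comms[i], streams[i]);
        ncclGroupEnd();
    }
    cudaEventRecord(stop, streams[0]);
    for (int i = 0; i < ndev; ++i) {
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
    }

    float ms = 0.0f;
    cudaSetDevice(0);
    cudaEventElapsedTime(&ms, start, stop);
    double timePerIter = ms / 1e3 / iters;                 /* seconds per all-reduce */
    double algBw = (double)bytes / timePerIter / 1e9;      /* GB/s seen by the app */
    double busBw = algBw * 2.0 * (ndev - 1) / ndev;        /* all-reduce bus factor */
    printf("%d GPUs, %zu bytes: algBw %.2f GB/s, busBw %.2f GB/s\n",
           ndev, bytes, algBw, busBw);

    for (int i = 0; i < ndev; ++i) ncclCommDestroy(comms[i]);
    return 0;
}
```

In practice, a measurement like this would be swept across message sizes, GPU counts, and interconnect paths (NVLink vs. PCIe), and correlated with profiler output and network counters.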
What We Need To See
- M.S. (or equivalent experience) or Ph.D. in Computer Science or a related field, with relevant performance engineering and HPC experience
- 3+ years of experience with parallel programming and at least one communication runtime (MPI, NCCL, UCX, NVSHMEM)
- Experience conducting performance benchmarking and triage on large-scale HPC clusters
- Good understanding of computer system architecture, HW/SW interactions, and operating system principles (i.e., systems software fundamentals)
- Ability to implement micro-benchmarks in C/C++ and to read and modify the code base when required (see the sketch after this list)
- Ability to debug performance issues across the entire HW/SW stack; proficiency in a scripting language, preferably Python
- Familiarity with containers, cloud provisioning, and scheduling tools (Kubernetes, SLURM, Ansible, Docker)
- Adaptability and a passion for learning new areas and tools; flexibility to work and communicate effectively across different teams and time zones
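As an illustration of the micro-benchmarking skill above, the following is a minimal, hypothetical MPI ping-pong sketch that measures point-to-point latency and bandwidth between two ranks. The message-size sweep (1 B to 4 MB) and the 1000-iteration count are arbitrary choices for the example; a real benchmark would add warm-up iterations, error checking, and GPU-resident buffers with a CUDA-aware MPI.

```c
/*
 * Hypothetical sketch: a minimal MPI ping-pong micro-benchmark measuring
 * point-to-point latency and bandwidth between ranks 0 and 1.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    const int iters = 1000;
    const size_t max_bytes = 1 << 22;             /* sweep up to 4 MB messages */
    char *buf = malloc(max_bytes);

    for (size_t bytes = 1; bytes <= max_bytes; bytes *= 2) {
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {            /* ping */
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {     /* pong */
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double dt = MPI_Wtime() - t0;
        if (rank == 0) {
            double lat_us = dt / iters / 2.0 * 1e6;          /* one-way latency */
            double bw = 2.0 * bytes * iters / dt / 1e9;      /* GB/s across the link */
            printf("%8zu B  %8.2f us  %7.2f GB/s\n", bytes, lat_us, bw);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}
```

A sweep like this is typically the first step when isolating whether a regression comes from the network, the PCIe/NVLink path, or the library itself.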
Ways To Stand Out From The Crowd
- Practical experience with InfiniBand/Ethernet networks in areas like RDMA, topologies, and congestion control
- Experience debugging network issues in large-scale deployments
- Familiarity with CUDA programming and/or GPUs
- Experience with deep learning frameworks such as PyTorch and TensorFlow
JR2001201

