
[Remote] Principal Software Engineer – Large-Scale LLM Memory and Storage Systems

NVIDIA

Note: This is a remote position open to candidates in the USA. NVIDIA is a leading technology company known for its innovative solutions in AI and machine learning. It is seeking a Principal Software Engineer to define the vision and roadmap for memory management in large-scale LLM and storage systems, focusing on designing and implementing high-performance memory solutions for AI applications.

Responsibilities

  • Design and evolve a unified memory layer that spans GPU memory, pinned host memory, RDMA-accessible memory, SSD tiers, and remote file/object/cloud storage to support large-scale LLM inference
  • Architect and implement deep integrations with leading LLM serving engines (such as vLLM, SGLang, TensorRT-LLM), with a focus on KV-cache offload, reuse, and remote sharing across heterogeneous and disaggregated clusters
  • Co-design interfaces and protocols that enable disaggregated prefill, peer-to-peer KV-cache sharing, and multi-tier KV-cache storage (GPU, CPU, local disk, and remote memory) for high-throughput, low-latency inference (a simplified sketch of such a tiered interface follows this list)
  • Partner closely with GPU architecture, networking, and platform teams to exploit GPUDirect, RDMA, NVLink, and similar technologies for low-latency KV-cache access and sharing across heterogeneous accelerators and memory pools
  • Mentor senior and junior engineers, set technical direction for memory and storage subsystems, and represent the team in internal reviews and external forums (open source, conferences, and customer-facing technical deep dives)
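To make the multi-tier KV-cache responsibility above concrete, here is a minimal, hypothetical Python sketch of how a tiered lookup-and-promote path might work. All class and method names are illustrative assumptions for this posting's context; they are not NVIDIA's internal design nor the actual API of vLLM, SGLang, or TensorRT-LLM.

```python
# Hypothetical sketch of a multi-tier KV-cache lookup path.
# Tier names, classes, and methods are illustrative only; they do not
# reflect NVIDIA's internal design or any serving engine's real API.
from collections import OrderedDict
from typing import List, Optional


class CacheTier:
    """One storage tier (e.g. GPU HBM, host DRAM, local SSD) with LRU eviction."""

    def __init__(self, name: str, capacity_blocks: int):
        self.name = name
        self.capacity = capacity_blocks
        self.blocks: "OrderedDict[str, bytes]" = OrderedDict()

    def get(self, key: str) -> Optional[bytes]:
        if key in self.blocks:
            self.blocks.move_to_end(key)  # mark as recently used
            return self.blocks[key]
        return None

    def put(self, key: str, value: bytes) -> None:
        self.blocks[key] = value
        self.blocks.move_to_end(key)
        while len(self.blocks) > self.capacity:
            # A real system would demote evicted blocks to a slower tier
            # instead of dropping them.
            self.blocks.popitem(last=False)


class TieredKVCache:
    """Looks up KV blocks fastest-tier-first and promotes hits upward."""

    def __init__(self, tiers: List[CacheTier]):
        self.tiers = tiers  # ordered fastest (GPU) to slowest (disk/remote)

    def get(self, key: str) -> Optional[bytes]:
        for i, tier in enumerate(self.tiers):
            value = tier.get(key)
            if value is not None:
                # Promote into all faster tiers so the next access is cheaper.
                for faster in self.tiers[:i]:
                    faster.put(key, value)
                return value
        return None

    def put(self, key: str, value: bytes) -> None:
        self.tiers[0].put(key, value)  # hottest data lands in the fastest tier


cache = TieredKVCache([
    CacheTier("gpu_hbm", capacity_blocks=2),
    CacheTier("host_dram", capacity_blocks=8),
    CacheTier("local_ssd", capacity_blocks=64),
])
cache.put("prompt-prefix-hash", b"kv-block-bytes")
assert cache.get("prompt-prefix-hash") == b"kv-block-bytes"
```

A production design, which is the scope this role describes, would additionally handle asynchronous demotion on eviction, RDMA/GPUDirect transfer paths between tiers, and peer-to-peer sharing across nodes.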

Skills

  • Master's or PhD degree, or equivalent experience
  • 15+ years of experience building large-scale distributed systems, high-performance storage, or ML systems infrastructure in C/C++ and Python, with a track record of delivering production services
  • Deep understanding of memory hierarchies (GPU HBM, host DRAM, SSD, and remote/object storage) and experience designing systems that span multiple tiers for performance and cost efficiency
  • Experience with distributed caching or key-value systems, especially designs optimized for low latency and high concurrency
  • Hands-on experience with networked I/O and RDMA/NVMe-oF/NVLink-style technologies, and familiarity with concepts like disaggregated and aggregated deployments for AI clusters
  • Strong skills in profiling and optimizing systems across CPU, GPU, memory, and network, using metrics to drive architectural decisions and validate improvements in time-to-first-token (TTFT) and throughput
  • Excellent communication skills and prior experience leading cross-functional efforts with research, product, and customer teams
  • Prior contributions to open-source LLM serving or systems projects focused on KV-cache optimization, compression, streaming, or reuse
  • Experience designing unified memory or storage layers that expose a single logical KV or object model across GPU, host, SSD, and cloud tiers, especially in enterprise or hyperscale environments
  • Publications or patents in areas such as LLM systems, memory-disaggregated architectures, RDMA/NVLink-based data planes, or KV-cache/CDN-like systems for ML

Benefits

  • Equity
  • Benefits package

Company Overview

  • NVIDIA is a computing platform company operating at the intersection of graphics, HPC, and AI. Founded in 1993, it is headquartered in Santa Clara, California, USA, and employs more than 10,000 people. Its website is https://www.nvidia.com.

Company H-1B Sponsorship

  • NVIDIA has a track record of offering H-1B sponsorships: 1,418 in 2025, 1,356 in 2024, 976 in 2023, 835 in 2022, 601 in 2021, and 529 in 2020. Please note that this does not guarantee sponsorship for this specific role.

Job Details

  • Job Type: Full Time
  • Location: United States
