NVIDIA Corporation

Pinned repositories

  1. cuopt Public

    GPU accelerated decision optimization

    CUDA · 781 stars · 150 forks

  2. cuopt-examples Public

    NVIDIA cuOpt examples for decision optimization

    Jupyter Notebook · 424 stars · 73 forks

  3. open-gpu-kernel-modules Public

    NVIDIA Linux open GPU kernel module source

    C · 16.8k stars · 1.6k forks

  4. aistore Public

    AIStore: scalable storage for AI applications

    Go · 1.8k stars · 243 forks

  5. nvidia-container-toolkit Public

    Build and run containers leveraging NVIDIA GPUs

    Go · 4.2k stars · 497 forks
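The toolkit's typical workflow can be sketched in two steps: register the NVIDIA runtime with Docker, then request GPUs when running a container. The commands below are a sketch assuming Docker and the toolkit are already installed; the CUDA image tag is illustrative.

```shell
# Register the NVIDIA container runtime with Docker
# (nvidia-ctk ships with the NVIDIA Container Toolkit).
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker

# Run a container with all GPUs visible; nvidia-smi should list them.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

The `--gpus` flag also accepts a count or specific device IDs (e.g. `--gpus 2` or `--gpus '"device=0,1"'`) when a container should see only a subset of the host's GPUs.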

  6. GenerativeAIExamples Public

    Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

    Jupyter Notebook · 3.9k stars · 1k forks

Repositories

Showing 10 of 704 repositories
  • cuda-quantum Public

    C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows

    C++ · 967 stars · Apache-2.0 · 352 forks · 431 open issues (16 need help) · 106 open PRs · Updated Mar 24, 2026
  • Megatron-LM Public

    Ongoing research on training transformer models at scale

    Python · 15,781 stars · 3,744 forks · 329 open issues (1 needs help) · 342 open PRs · Updated Mar 24, 2026
  • cloudai Public

    CloudAI Benchmark Framework

    Python · 89 stars · Apache-2.0 · 43 forks · 8 open issues · 4 open PRs · Updated Mar 24, 2026
  • NeMo-Retriever Public

    NeMo Retriever is a scalable, performance-oriented library for extracting document content and metadata. It uses specialized NVIDIA NIM microservices to find, contextualize, and extract text, tables, charts, and images for use in downstream generative applications.

    Python · 2,885 stars · Apache-2.0 · 309 forks · 117 open issues (1 needs help) · 69 open PRs · Updated Mar 24, 2026
  • Model-Optimizer Public

    A unified library of state-of-the-art model optimization techniques, including quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks such as TensorRT-LLM, TensorRT, and vLLM to optimize inference speed.

    Python · 2,233 stars · Apache-2.0 · 314 forks · 69 open issues · 117 open PRs · Updated Mar 24, 2026
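As a rough illustration of the simplest technique on that list, here is a minimal, dependency-free sketch of symmetric int8 post-training quantization in pure Python. The function names and sample values are illustrative only, not Model-Optimizer's API.

```python
def quantize_int8(weights):
    """Map float weights to int8 using a single symmetric scale.

    The largest absolute weight is mapped to +/-127; everything else
    is rounded to the nearest integer step and clamped to int8 range.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

weights = [0.02, -1.27, 0.64, 0.0]
q, scale = quantize_int8(weights)      # q == [2, -127, 64, 0]
restored = dequantize(q, scale)        # close to the original weights
```

Real toolchains layer per-channel scales, calibration data, and hardware-aware formats on top of this idea, but the core transform (scale, round, clamp, and later rescale) is the same.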
  • aistore Public

    AIStore: scalable storage for AI applications

    Go · 1,796 stars · MIT · 243 forks · 0 open issues · 0 open PRs · Updated Mar 24, 2026
  • IsaacTeleop Public

    A unified framework for simulated and real robot teleoperation

    Python · 66 stars · Apache-2.0 · 7 forks · 17 open issues · 19 open PRs · Updated Mar 24, 2026
  • TensorRT-LLM Public

    TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT LLM also contains components to create Python and C++ runtimes that orchestrate the inference execution in a performant way.

    Python · 13,180 stars · 2,212 forks · 557 open issues · 619 open PRs · Updated Mar 24, 2026
  • cccl Public

    CUDA Core Compute Libraries

    C++ · 2,239 stars · 367 forks · 1,282 open issues (6 need help) · 230 open PRs · Updated Mar 24, 2026
  • k8s-nim-operator Public

    An Operator for deployment and maintenance of NVIDIA NIMs and NeMo microservices in a Kubernetes environment.

    Go · 152 stars · Apache-2.0 · 43 forks · 8 open issues · 12 open PRs · Updated Mar 24, 2026