NVIDIA Corporation (@NVIDIA)

Pinned

  1. cuopt

    GPU accelerated decision optimization

    CUDA · 713 stars · 125 forks

  2. cuopt-examples

    NVIDIA cuOpt examples for decision optimization

    Jupyter Notebook · 410 stars · 65 forks

  3. open-gpu-kernel-modules

    NVIDIA Linux open GPU kernel module source

    C · 16.7k stars · 1.6k forks

  4. aistore

    AIStore: scalable storage for AI applications

    Go · 1.8k stars · 235 forks

  5. nvidia-container-toolkit

    Build and run containers leveraging NVIDIA GPUs

    Go · 4.1k stars · 479 forks

  6. GenerativeAIExamples

    Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

    Jupyter Notebook · 3.8k stars · 980 forks

Repositories

Showing 10 of 674 repositories
  • cuda-quantum

    C++ and Python support for the CUDA Quantum programming model for heterogeneous quantum-classical workflows (a Python sketch follows this list)

    C++ · 934 stars · 341 forks · 429 issues (16 need help) · 107 PRs · Updated Feb 19, 2026
  • Model-Optimizer

    A unified library of SOTA model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks such as TensorRT-LLM, TensorRT, and vLLM to optimize inference speed. (A quantization sketch follows this list.)

    Python · 2,008 stars · Apache-2.0 license · 280 forks · 68 issues · 90 PRs · Updated Feb 19, 2026
  • stdexec

    `std::execution`, the proposed C++ framework for asynchronous and parallel programming.

    C++ · 2,246 stars · Apache-2.0 license · 228 forks · 123 issues · 14 PRs · Updated Feb 19, 2026
  • cccl

    CUDA Core Compute Libraries

    C++ · 2,178 stars · 345 forks · 1,234 issues (6 need help) · 213 PRs · Updated Feb 19, 2026
  • TensorRT-LLM

    TensorRT-LLM provides an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also includes components for creating Python and C++ runtimes that orchestrate inference execution in a performant way. (An LLM API sketch follows this list.)

    Python · 12,913 stars · 2,115 forks · 541 issues · 526 PRs · Updated Feb 20, 2026
  • Megatron-LM

    Ongoing research on training transformer models at scale

    Python · 15,229 stars · 3,607 forks · 295 issues (1 needs help) · 312 PRs · Updated Feb 20, 2026
  • physicsnemo

    Open-source deep-learning framework for building, training, and fine-tuning deep learning models using state-of-the-art Physics-ML methods

    Python · 2,445 stars · Apache-2.0 license · 581 forks · 20 issues · 29 PRs · Updated Feb 20, 2026
  • nv-redfish

    NVIDIA's next-generation Redfish crate

    Rust · 13 stars · Apache-2.0 license · 2 forks · 1 issue · 2 PRs · Updated Feb 20, 2026
  • cuda-python

    CUDA Python: Performance meets Productivity (a driver-binding sketch follows this list)

    Cython · 3,169 stars · 247 forks · 214 issues · 26 PRs · Updated Feb 20, 2026
  • warp

    A Python framework for accelerated simulation, data generation, and spatial computing. (A kernel sketch follows this list.)

    Python · 6,232 stars · Apache-2.0 license · 443 forks · 183 issues · 5 PRs · Updated Feb 20, 2026
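
Example sketches for a few of the repositories above follow. Each is a minimal illustration under stated assumptions, not an official quick start.

cuda-quantum: a Bell-state sample, assuming the decorator-based kernel syntax of recent cudaq releases (the shot count is illustrative).

```python
import cudaq

@cudaq.kernel
def bell():
    # Allocate two qubits, entangle them, and measure.
    q = cudaq.qvector(2)
    h(q[0])             # Hadamard on qubit 0
    x.ctrl(q[0], q[1])  # CNOT with qubit 0 as control
    mz(q)               # measure both qubits

# Sample the kernel; counts should concentrate on "00" and "11".
counts = cudaq.sample(bell, shots_count=1000)
print(counts)
```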
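
Model-Optimizer: a post-training INT8 quantization sketch, assuming the modelopt.torch.quantization API of the nvidia-modelopt package (the toy model and random calibration data are placeholders).

```python
import torch
import modelopt.torch.quantization as mtq

# A toy model standing in for a real network.
model = torch.nn.Sequential(
    torch.nn.Linear(64, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 8),
)

def forward_loop(m):
    # Calibration pass over representative inputs (random data here for illustration).
    for _ in range(8):
        m(torch.randn(16, 64))

# Apply an INT8 post-training quantization recipe; the result can then be exported
# to a deployment framework such as TensorRT or TensorRT-LLM.
model = mtq.quantize(model, mtq.INT8_DEFAULT_CFG, forward_loop)
```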
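
TensorRT-LLM: generating text through the high-level LLM API, assuming a recent release that exposes LLM and SamplingParams at the package root (the model name and sampling values are illustrative).

```python
from tensorrt_llm import LLM, SamplingParams

# Loads the Hugging Face checkpoint and builds or loads a TensorRT engine for it.
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
params = SamplingParams(temperature=0.8, max_tokens=32)

for output in llm.generate(["The capital of France is"], params):
    print(output.outputs[0].text)
```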
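
cuda-python: querying the CUDA driver through the low-level bindings, assuming the cuda.cuda namespace (newer releases also expose the same functions under cuda.bindings.driver). Every call returns a tuple whose first element is the error code.

```python
from cuda import cuda

(err,) = cuda.cuInit(0)
assert err == cuda.CUresult.CUDA_SUCCESS

err, count = cuda.cuDeviceGetCount()
err, device = cuda.cuDeviceGet(0)
err, name = cuda.cuDeviceGetName(128, device)  # returns a NUL-padded byte string
print(f"{count} device(s); device 0: {name.decode().rstrip(chr(0))}")
```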
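
warp: a minimal GPU kernel that scales an array in place, assuming a CUDA-capable device is available (use device="cpu" otherwise).

```python
import numpy as np
import warp as wp

wp.init()

@wp.kernel
def scale(x: wp.array(dtype=float), s: float):
    tid = wp.tid()        # one thread per element
    x[tid] = x[tid] * s

n = 1024
x = wp.array(np.ones(n, dtype=np.float32), device="cuda")
wp.launch(scale, dim=n, inputs=[x, 2.0])
print(x.numpy()[:4])      # -> [2. 2. 2. 2.]
```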