
NVIDIA Corporation

Pinned repositories

  1. cuopt · Public

     GPU accelerated decision optimization

     CUDA · 826 stars · 163 forks

  2. cuopt-examples · Public

     NVIDIA cuOpt examples for decision optimization

     Jupyter Notebook · 435 stars · 74 forks

  3. open-gpu-kernel-modules · Public

     NVIDIA Linux open GPU kernel module source

     C · 16.9k stars · 1.7k forks

  4. aistore · Public

     AIStore: scalable storage for AI applications

     Go · 1.8k stars · 246 forks

  5. nvidia-container-toolkit · Public

     Build and run containers leveraging NVIDIA GPUs

     Go · 4.3k stars · 510 forks

  6. GenerativeAIExamples · Public

     Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.

     Jupyter Notebook · 3.9k stars · 1k forks

Repositories

Showing 10 of 710 repositories
  • Model-Optimizer · Public

    A unified library of state-of-the-art (SOTA) model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment frameworks like TensorRT-LLM, TensorRT, and vLLM to optimize inference speed.

    Python · 2,558 stars · Apache-2.0 · 366 forks · 57 issues · 135 pull requests · Updated Apr 23, 2026
  • cccl · Public

    CUDA Core Compute Libraries

    C++ · 2,290 stars · 379 forks · 1,327 issues (6 need help) · 250 pull requests · Updated Apr 23, 2026
  • gpu-operator · Public

    NVIDIA GPU Operator creates, configures, and manages GPUs in Kubernetes

    Go · 2,658 stars · Apache-2.0 · 491 forks · 73 issues (7 need help) · 45 pull requests · Updated Apr 23, 2026
  • NemoClaw · Public

    Run OpenClaw more securely inside NVIDIA OpenShell with managed inference

    TypeScript · 19,697 stars · Apache-2.0 · 2,465 forks · 235 issues (1 needs help) · 169 pull requests · Updated Apr 23, 2026
  • TensorRT-LLM · Public

    TensorRT LLM provides an easy-to-use Python API for defining large language models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. It also contains components for building Python and C++ runtimes that orchestrate inference execution with high performance.

    Python · 13,460 stars · 2,316 forks · 589 issues · 777 pull requests · Updated Apr 23, 2026
  • fleet-intelligence-agent · Public

    NVIDIA Fleet Intelligence Agent - Host agent for GPU telemetry collection and attestation

    Go · 15 stars · Apache-2.0 · 2 forks · 0 issues · 3 pull requests · Updated Apr 23, 2026
  • OpenShell · Public

    OpenShell is the safe, private runtime for autonomous AI agents.

    Rust · 5,278 stars · Apache-2.0 · 580 forks · 57 issues · 19 pull requests · Updated Apr 23, 2026
  • ncx-infra-controller-rest · Public

    NCX Infra Controller - Hardware Lifecycle Management (REST API)

    Go · 34 stars · Apache-2.0 · 32 forks · 31 issues · 13 pull requests · Updated Apr 23, 2026
  • ncx-infra-controller-core · Public

    NCX Infra Controller - Hardware Lifecycle Management and multitenant networking

    Rust · 128 stars · Apache-2.0 · 85 forks · 173 issues (5 need help) · 60 pull requests · Updated Apr 23, 2026
  • OSMO · Public

    The developer-first platform for scaling complex Physical AI workloads across heterogeneous compute, unifying training GPUs, simulation clusters, and edge devices in a simple YAML definition.

    TypeScript · 149 stars · Apache-2.0 · 35 forks · 62 issues · 18 pull requests · Updated Apr 23, 2026
