Activity of NVIDIA/gpu-operator repository

Barely warm 〽️

Contribution activity is decreasing

Activity badge for NVIDIA/gpu-operator repository

Why is NVIDIA/gpu-operator barely warm?

The result is based on the ratio of weekly commits and code additions between an initial and a final time range.

Initial time range – from 5 Jul, 2023 to 5 Oct, 2023

Final time range – from 5 Apr, 2024 to 5 Jul, 2024

  • Commits per week: from 13 to 6 (-54%)
  • Additions per week: from 21,325 to 7,517 (-65%)
  • Deletions per week: from 1,326 to 8,723 (+558%)
Data calculated on 5 Jul, 2024
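The percentage changes above can be reproduced with a simple helper. This is a sketch; the assumption that the site rounds to the nearest whole percent is mine:

```python
def pct_change(initial, final):
    """Percentage change from `initial` to `final`, rounded to a whole percent."""
    return round((final - initial) / initial * 100)

print(pct_change(13, 6))        # commits per week  → -54
print(pct_change(21325, 7517))  # additions per week → -65
print(pct_change(1326, 8723))   # deletions per week → 558
```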

Bus factor

What is Bus factor?

It is the number of most active contributors who are responsible for 80% of the contributions.

Bus factor tries to assess "What happens if a key member of the team is hit by a bus?". The more key members there are, the lower the risk.
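This definition can be sketched in a few lines of Python; the commit counts below are hypothetical and only illustrate the calculation:

```python
def bus_factor(commit_counts, threshold=0.8):
    """Smallest number of top contributors whose combined commits
    cover `threshold` (default 80%) of all contributions."""
    counts = sorted(commit_counts, reverse=True)
    target = threshold * sum(counts)
    running = 0
    for i, c in enumerate(counts, start=1):
        running += c
        if running >= target:
            return i
    return len(counts)

# Hypothetical per-contributor commit counts for illustration
print(bus_factor([120, 90, 60, 20, 10]))  # → 3
```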

The NVIDIA/gpu-operator repository has a bus factor of 4.

Medium risk, some knowledge concentrated in a few people

Bus factor was measured on 14 Aug 2024


Summary of NVIDIA/gpu-operator

The NVIDIA/gpu-operator is a GitHub repository maintained by NVIDIA. It is designed to enable and manage NVIDIA GPUs in Kubernetes clusters, including OpenShift.

The GPU Operator makes it easy to deploy NVIDIA GPUs on any Kubernetes cluster, on any cloud.

The repository provides Kubernetes custom resources that automate the deployment, configuration, and monitoring of all GPU software components to turn a mixed cluster of CPU nodes and GPU nodes into a GPU-enabled Kubernetes cluster.

The GPU Operator is based on the Operator Framework in Kubernetes. An operator in Kubernetes is a method of packaging, deploying, and managing a Kubernetes application.
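At its core, an operator runs a reconcile loop: it compares the desired state declared in a custom resource with the state observed in the cluster and acts to converge them. A toy sketch of that idea (the component names are illustrative, not the operator's actual internals):

```python
def reconcile(desired, observed):
    """Compare desired vs. observed component sets and return
    what an operator would need to create and delete to converge."""
    to_create = sorted(set(desired) - set(observed))
    to_delete = sorted(set(observed) - set(desired))
    return to_create, to_delete

create, delete = reconcile(
    desired={"driver", "device-plugin", "dcgm-exporter"},  # from the custom resource
    observed={"driver"},                                   # currently running
)
print(create)  # → ['dcgm-exporter', 'device-plugin']
print(delete)  # → []
```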

Key Features:

  • Automatic NVIDIA driver deployment on nodes with an NVIDIA GPU.
  • Automatic Kubernetes device plugin deployment on nodes with an NVIDIA GPU.
  • Automatic NVIDIA DCGM exporter deployment on nodes with an NVIDIA GPU.
  • Automatic NVIDIA Node Feature Discovery (NFD) deployment.
  • Validation of the GPU Operator deployment by running GPU workloads in a validation pod.
  • Automatic GPU feature discovery deployment.

Each of these components can be individually enabled or disabled, providing users with control over their deployment pipeline.

Usage: To use it, you first install the operator, for example from OperatorHub. Once the operator is running, you create a custom resource of kind 'ClusterPolicy' (its Custom Resource Definition is installed along with the operator). This resource holds all the configuration options, such as enabling or disabling individual components. Once the 'ClusterPolicy' resource is created, the operator deploys and configures the NVIDIA software according to the provided configuration.
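As an illustration, a minimal ClusterPolicy resource might look like the sketch below. The component toggles shown (`driver`, `devicePlugin`, `dcgmExporter`) follow the GPU Operator's documented schema, but treat the exact fields as an assumption and consult the project's documentation:

```yaml
apiVersion: nvidia.com/v1
kind: ClusterPolicy
metadata:
  name: cluster-policy
spec:
  driver:
    enabled: true        # deploy the NVIDIA driver on GPU nodes
  devicePlugin:
    enabled: true        # expose GPUs to the Kubernetes scheduler
  dcgmExporter:
    enabled: true        # export GPU metrics for monitoring
```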


Top 5 contributors

Commit counts (contributor names were rendered as badges and are not recoverable here):

  • 317 commits
  • 241 commits
  • 179 commits
  • 56 commits
  • 40 commits