A complete cheat sheet of nvidia-smi commands for GPU monitoring, management, and diagnostics on Linux. An essential reference for ML engineers, data scientists, and GPU server administrators.
GPU monitoring and status:

| Command | Action | Description |
|---|---|---|
| nvidia-smi | GPU status summary | Show GPU utilization, temperature, memory, and processes. |
| nvidia-smi -l 1 | Monitor every 1s | Refresh GPU status every 1 second. |
| watch -n 1 nvidia-smi | Real-time monitor | Real-time GPU monitoring with watch command. |
| nvidia-smi -q | Detailed info | Show all detailed GPU information. |
| nvidia-smi -L | List GPUs | List all GPUs with UUIDs. |
| nvidia-smi pmon | Process monitor | Monitor per-process GPU usage. |
| nvidia-smi dmon | Device monitor | Monitor device metrics every second. |
| nvidia-smi topo -m | Topology | Show GPU NVLink/PCIe topology. |
| nvidia-smi nvlink -s | NVLink status | Check NVLink connection status and bandwidth. |
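For example, to watch per-process activity on a single GPU, 'pmon' accepts a device index and a sampling interval (a minimal sketch; device 0 and the 2-second interval are arbitrary choices):

```bash
# Watch per-process GPU usage on device 0, sampled every 2 seconds.
# -i selects the device index; -d sets the sampling interval in seconds.
nvidia-smi pmon -i 0 -d 2
```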
Power management and configuration:

| Command | Action | Description |
|---|---|---|
| nvidia-smi -pm 1 | Persistence Mode ON | Keep driver loaded to reduce GPU init latency. |
| nvidia-smi -i 0 -pl 250 | Set power limit | Limit GPU 0 max power to 250W. |
| nvidia-smi -i 0 -ac 1215,1410 | Set application clocks | Set memory,graphics clocks (MHz) for GPU 0. |
| nvidia-smi -rgc | Reset clocks | Reset GPU clocks to default. |
| nvidia-smi -r -i 0 | Reset GPU | Reset GPU 0 (e.g., to clear pending ECC errors); the GPU must be idle. |
| nvidia-smi -e 1 | Enable ECC | Enable ECC memory error correction (takes effect after the next reboot). |
| nvidia-smi -q -d POWER | Power details | Show detailed GPU power information. |
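A typical one-time setup on a training server combines persistence mode with a power cap (a sketch, not a prescription; the 250 W value is an example and must fall within the limits reported by 'nvidia-smi -q -d POWER'):

```bash
# One-time power setup for GPU 0; both commands require root.
sudo nvidia-smi -pm 1          # keep the driver loaded between jobs
sudo nvidia-smi -i 0 -pl 250   # cap GPU 0 power draw at 250 W
```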
Scripted queries and logging:

| Command | Action | Description |
|---|---|---|
| nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv | CSV query | Output GPU info in CSV format. |
| nvidia-smi --query-gpu=utilization.gpu,temperature.gpu --format=csv -l 5 | 5s CSV logging | Log utilization and temp every 5 seconds. |
| nvidia-smi --query-compute-apps=pid,used_memory --format=csv | Per-process memory | Show memory usage per GPU process. |
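These queries are the building blocks for lightweight logging. The sketch below appends a timestamped sample every 5 seconds; the field list and the path gpu_log.csv are arbitrary choices, and 'noheader,nounits' keeps the output easy to parse:

```bash
# Append one CSV row per GPU every 5 seconds to gpu_log.csv.
nvidia-smi \
  --query-gpu=timestamp,index,utilization.gpu,temperature.gpu,memory.used \
  --format=csv,noheader,nounits -l 5 >> gpu_log.csv
```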
Run 'nvidia-smi' to see a summary of all GPUs, including utilization, temperature, memory usage, and running processes.
Use 'watch -n 1 nvidia-smi' or 'nvidia-smi -l 1' to refresh GPU status every second.
Run 'nvidia-smi --query-compute-apps=pid,used_memory --format=csv' to see memory usage per process.
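To also see which process owns the memory and sum the total, the CSV output can be piped through standard tools (a sketch; it assumes 'used_memory' is reported in MiB, which is the usual unit):

```bash
# List compute processes with their GPU memory and print the MiB total.
nvidia-smi --query-compute-apps=pid,process_name,used_memory \
  --format=csv,noheader,nounits \
  | awk -F', ' '{ total += $3; print } END { print "total MiB:", total }'
```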
Run 'nvidia-smi -pm 1' as root to keep the NVIDIA driver loaded, reducing GPU initialization latency.
Use 'nvidia-smi topo -m' to see GPU interconnect topology including NVLink and PCIe connections.
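The topology matrix is most useful when pinning multi-GPU jobs: prefer GPU pairs that 'topo -m' reports as NV# (NVLink) rather than PHB or SYS (PCIe/host-bridge) paths. A sketch, where the indices 0,1 and train.py are hypothetical:

```bash
# Inspect interconnects, then pin a job to a well-connected GPU pair.
nvidia-smi topo -m
CUDA_VISIBLE_DEVICES=0,1 python train.py   # hypothetical training script
```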