# NVwulf Queues Table
| Partition Name | GPU Type | GPUs/Node | CPUs/Node | Host Memory | GPU Memory | Max Runtime | Max Nodes | Multitenant |
|---|---|---|---|---|---|---|---|---|
| debug-h200x4 | H200 SXM5 | 4 | 64 (max 2) | ~750 GB | 141 GB / GPU | 1h | 1 | Yes |
| debug-b40x4 | RTX PRO 6000 Blackwell (B40) | 4 | 64 (max 2) | ~512 GB | 96 GB / GPU | 1h | 1 | Yes |
| h200x4 | H200 SXM5 | 4 | 64 (max 62) | ~750 GB | 141 GB / GPU | 8h | 2 | Yes |
| h200x4-long | H200 SXM5 | 4 | 64 (max 62) | ~750 GB | 141 GB / GPU | 48h | 1 | Yes |
| h200x8 | H200 SXM5 | 8 | 64 (max 62) | ~700 GB | 141 GB / GPU | 8h | 1 | Yes |
| h200x8-long | H200 SXM5 | 8 | 64 (max 62) | ~700 GB | 141 GB / GPU | 48h | 1 | Yes |
| b40x4 | RTX PRO 6000 Blackwell (B40) | 4 | 64 (max 62) | ~512 GB | 96 GB / GPU | 8h | 2 | Yes |
| b40x4-long | RTX PRO 6000 Blackwell (B40) | 4 | 64 (max 62) | ~512 GB | 96 GB / GPU | 48h | 1 | Yes |
Note: CPU and memory values are listed per node. All NVwulf GPU compute nodes support multitenancy: multiple users can run jobs on the same node simultaneously, so be explicit about the resources (GPUs, CPUs, host memory) you need when submitting jobs, as in the example script below.
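For example, a minimal batch script for the h200x4 partition might request resources explicitly as follows. This is a sketch: the job name, resource amounts, and workload are placeholders to adapt to your own job, and your site may additionally require an `--account` flag.

```bash
#!/bin/bash
#SBATCH --job-name=train-demo        # placeholder job name
#SBATCH --partition=h200x4           # partition from the table above
#SBATCH --nodes=1
#SBATCH --gres=gpu:2                 # request 2 of the node's 4 H200s
#SBATCH --cpus-per-task=16           # be explicit: CPUs for this task
#SBATCH --mem=180G                   # be explicit: host memory for the job
#SBATCH --time=04:00:00              # within the partition's 8h limit

# Placeholder workload; replace with your own application.
srun python train.py
```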
Default resource allocation: If you do not specify CPUs or memory, Slurm assigns defaults based on the number of GPUs requested. You can inspect each partition's defaults with `scontrol show partition <name>`, as shown below.
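For instance, the following one-liner filters the partition record down to the default and limit fields. The field names (`DefCpuPerGPU`, `DefMemPer*`) are standard Slurm partition attributes; which of them are set, and to what values, depends on NVwulf's configuration.

```bash
# Show per-GPU CPU/memory defaults and job limits for the h200x4 partition.
scontrol show partition h200x4 | grep -Eo '(DefCpuPerGPU|DefMemPer[A-Za-z]+|MaxTime|MaxNodes)=[^ ]+'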
GPU Memory: H200 SXM5 GPUs provide 141 GB HBM3e per GPU. B40 (RTX PRO 6000 Blackwell) GPUs provide 96 GB GDDR7 per GPU.
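To confirm the GPUs and memory visible to a job, you can run `nvidia-smi` inside a short interactive allocation. A minimal sketch, assuming your account has access to the debug partition (all flags are standard Slurm and `nvidia-smi` options):

```bash
# Allocate 1 GPU on the debug partition for 5 minutes and print
# each visible GPU's model name and total memory.
srun --partition=debug-h200x4 --gres=gpu:1 --cpus-per-task=2 --mem=16G \
     --time=00:05:00 nvidia-smi --query-gpu=name,memory.total --format=csv
```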
