NVwulf is Stony Brook University's dedicated GPU computing cluster, designed to accelerate artificial intelligence, machine learning, and data-intensive scientific research. It is a sister system to the general-purpose SeaWulf cluster and part of a multi-year initiative, supported by a $5 million New York State grant, to establish Stony Brook as a national hub for interdisciplinary AI research.
Current Capacity: H200 and B40 GPU nodes • Up to 8 GPUs per node • Multi-tenant scheduling • Accessible via SSH and Open OnDemand
Supported Research Areas
NVwulf is available to researchers across Stony Brook University and partner institutions. It is particularly well-suited for workloads that benefit from large GPU memory and high-bandwidth interconnects. Examples of supported research areas include:
- Artificial Intelligence & Machine Learning: Large model training, fine-tuning, inference, and neural architecture research
- Biomedical Imaging: Deep learning-based image analysis, segmentation, and AI-driven diagnostics
- Molecular Modeling: GPU-accelerated molecular dynamics and computational chemistry
- Natural Language Processing: Training and deploying large language models and transformers
- Scientific Computing: High-throughput FP64 workloads requiring GPU acceleration
- Data Science: Large-scale data analysis and GPU-accelerated statistical computing
These examples illustrate the range of research we support; other GPU-accelerated projects are also welcome.
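To give a flavor of these workloads, here is a minimal sketch of a single mixed-precision training step in PyTorch. The model, batch, and hyperparameters are placeholders, and PyTorch itself is an assumption about your own software environment, not a statement about what NVwulf provides by default.

```python
import torch
import torch.nn as nn

# Run on a GPU if one is visible, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model standing in for a real research model.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 10)
).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device.type == "cuda"))

inputs = torch.randn(64, 1024, device=device)          # synthetic batch
targets = torch.randint(0, 10, (64,), device=device)   # synthetic labels

# One mixed-precision training step: forward, backward, optimizer update.
with torch.autocast(device_type=device.type, enabled=(device.type == "cuda")):
    loss = nn.functional.cross_entropy(model(inputs), targets)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
print(f"device={device}, loss={loss.item():.4f}")
```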
Getting Started with NVwulf
NVwulf accounts are available to Stony Brook researchers with an active SeaWulf project. To request access, see Getting Access to NVwulf. Once your account is approved, consult the Getting Started on NVwulf guide for step-by-step instructions on connecting, loading modules, and submitting your first GPU job.
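As a hedged illustration of what a first GPU job's payload might look like, the short Python script below simply reports the GPUs your job was allocated. It assumes PyTorch is available in your environment; the actual connection and submission steps are covered in the Getting Started on NVwulf guide.

```python
import os
import torch

# The scheduler typically restricts which GPUs a job can see via
# CUDA_VISIBLE_DEVICES, so this reflects your allocation.
print("CUDA_VISIBLE_DEVICES:", os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>"))
print("CUDA available:", torch.cuda.is_available())

# List each GPU allocated to this job with its name and memory.
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
```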
NVwulf is also accessible through Open OnDemand, a web-based interface that provides interactive apps including Jupyter, VS Code, and a full Linux desktop.
NVwulf Computational Resources
- H200 Nodes (4-way): 4× NVIDIA H200 SXM5 GPUs (141 GB HBM3e each), 64 CPU cores, ~750 GB host memory
- H200 Nodes (8-way): 8× NVIDIA H200 SXM5 GPUs (141 GB HBM3e each), 64 CPU cores, ~1500 GB host memory
- B40 Nodes (4-way): 4× NVIDIA RTX PRO 6000 Blackwell GPUs (96 GB GDDR7 each), 64 CPU cores, ~512 GB host memory
- Multi-tenant Scheduling: Nodes are shared, so jobs from multiple users may run on the same node concurrently.
For full partition specifications and runtime limits, see the NVwulf Queues Table.
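Because scheduling is multi-tenant, a job typically sees only the GPUs it was allocated rather than every GPU in the node. The sketch below (again assuming PyTorch in your environment) spreads independent work across whatever devices are visible, so the same script runs unchanged on a 4-way or 8-way allocation.

```python
import torch

n = torch.cuda.device_count()
assert n > 0, "no GPUs were allocated to this job"

# Queue one independent matrix multiply on each visible GPU; CUDA kernels
# launch asynchronously, so the devices work concurrently.
results = []
for i in range(n):
    a = torch.randn(8192, 8192, device=f"cuda:{i}")
    results.append(a @ a)

# Wait for every device to finish, then summarize.
for i in range(n):
    torch.cuda.synchronize(i)
for i, r in enumerate(results):
    print(f"GPU {i}: result norm {r.norm().item():.3e}")
```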
Getting Help
If you have questions about NVwulf, submit a ticket to our HPC support team. When submitting, please indicate that your question is about NVwulf. We are happy to assist.
