SeaWulf is a high-performance computing (HPC) cluster built with components from AMD, Dell, HPE, IBM, Intel, NVIDIA, and other leading technology providers. Its name combines "Seawolf," the university mascot, and "Beowulf," one of the first commodity-based HPC clusters.
System Capacity: 400+ nodes • 23,000 cores • ~1.86 PFLOP/s peak performance • Multi-petabyte storage • GPU acceleration
Supported Research Areas
SeaWulf supports research across Stony Brook University and partner institutions. Examples of research areas include:
- Physics: Large-scale simulations and computational modeling
- Climate Science: Weather and climate modeling, atmospheric simulations
- Bioinformatics: Genomics, molecular dynamics, protein structure prediction
- Materials Science: Computational materials research, property calculations
- Machine Learning & AI: Deep learning, neural network research, AI inference
- Engineering: CFD, structural analysis, optimization
- Data Science: Large-scale data analysis, statistical computing, data mining
These examples illustrate the range of research we support; other computational projects are also welcome.
SeaWulf provides a wide range of software for computational research, managed through the module system. See Using Modules for details, and consult our Software Catalog for a current list of installed tools.
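As a minimal sketch of a typical session: the commands below use the standard module interface, but the specific module names and versions are placeholders; run `module avail` on a login node to see what is actually installed.

```bash
# Browse the installed software from a login node
module avail

# Load a compiler and an MPI stack (names/versions are placeholders;
# use the ones reported by `module avail` on SeaWulf)
module load gcc/12.1.0
module load mpi/openmpi/4.1.0

# Confirm what is loaded, and clear everything before switching toolchains
module list
module purge
```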
Getting Started with SeaWulf
Access to SeaWulf is organized by project numbers. Once a project exists, accounts can be created for all collaborators, including students and external partners. For full details, see Requesting Access and explore the SeaWulf Documentation portal.
Start with the Quick Start Guide for a step-by-step introduction to connecting, submitting jobs, and accessing results.
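As a hedged sketch of that workflow, the commands below assume a Slurm-style workload manager; the login hostname, NetID, and partition name are placeholders, so substitute the values given in the Quick Start Guide.

```bash
# Connect to a login node (hostname is a placeholder)
ssh your_netid@login.seawulf.stonybrook.edu

# Create a minimal batch script
cat > hello.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=short-28core   # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --time=00:05:00
#SBATCH --output=hello.%j.out      # %j expands to the job ID
echo "Hello from $(hostname)"
EOF

# Submit the job and check its place in the queue
sbatch hello.slurm
squeue -u $USER
```

Output lands in hello.&lt;jobid&gt;.out in the directory where the job was submitted.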
SeaWulf Computational Resources
- CPU Nodes: 28–96 cores per node for serial, threaded, and MPI applications
- GPU Nodes: NVIDIA K80, P100, V100, and A100 GPUs for AI, deep learning, and accelerated computing
- Memory-Intensive Nodes: High-bandwidth memory and large-memory systems up to 3TB
- Parallel Computing: Distributed applications across multiple nodes with low-latency networking (see the job script sketch below)
For full hardware specifications and node details, see SeaWulf Architecture Overview.
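To make these resource categories concrete, here is a hedged sketch of a multi-node MPI batch script under a Slurm-style scheduler; the partition, module, and executable names are placeholders, and actual limits depend on the partition you target.

```bash
#!/bin/bash
#SBATCH --job-name=mpi_demo
#SBATCH --partition=long-28core    # placeholder partition name
#SBATCH --nodes=2                  # span two CPU nodes
#SBATCH --ntasks-per-node=28       # one MPI rank per core on a 28-core node
#SBATCH --time=01:00:00
# A GPU job would instead target a GPU partition and add, for example:
##SBATCH --gres=gpu:1

module load mpi/openmpi/4.1.0      # placeholder module name
srun ./my_mpi_app                  # srun launches the ranks across both nodes
```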
Getting Help
If you have questions regarding the SeaWulf cluster, submit a ticket to our HPC support team. We are happy to assist. See When to Ask for Help for more information.
