SeaWulf Cluster Overview

Overview

SeaWulf is Stony Brook University's flagship high-performance computing (HPC) cluster, designed to accelerate research across diverse scientific disciplines. The system's name blends "Seawolf" (Stony Brook's mascot) with "Beowulf," paying homage to one of the earliest HPC cluster architectures.

Built with state-of-the-art components from leading technology partners including AMD, Dell, HPE, IBM, Intel, and Nvidia, SeaWulf serves the campus research community as well as industrial partners. The cluster is housed in Stony Brook's Computing Center and represents a significant investment in research infrastructure.

Recent Milestone: In 2023, Stony Brook became the first academic institution worldwide to deploy Intel Sapphire Rapids processors with high-bandwidth memory (HBM) on SeaWulf.

Technical Specifications

Core Architecture

Component          Specification      Details
-----------------  -----------------  --------------------------------------------------------
Total Nodes        400+               Heterogeneous mix of compute, GPU, and high-memory nodes
Total Cores        ~23,000            Distributed across multiple node types
Peak Performance   1.86 petaFLOPS     Combined CPU and GPU computational power
Interconnect       InfiniBand         5-50 GB/s data transfer speeds
Storage            GPFS array         Hybrid of spinning disks and SSDs

Node Types

  • 40-core nodes: Standard compute nodes for general-purpose workloads
  • 96-core nodes: High-core-count nodes for parallel processing
  • GPU nodes: Equipped with four Nvidia K80 GPUs (24 GB each) for accelerated computing
  • High-bandwidth memory (HBM) nodes: Intel Xeon CPU Max Series with HBM
  • High-memory nodes: Up to 1TB DDR5 memory for memory-intensive applications
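In practice, a node type is selected at job-submission time through the scheduler's partition (queue) names. Below is a minimal sketch, assuming a Slurm-style scheduler; the partition name short-40core and the application command are hypothetical placeholders, not documented SeaWulf values:

```python
# Sketch: composing a Slurm batch script that targets a 40-core node.
# Assumes a Slurm-style scheduler; "short-40core" and "./my_mpi_app"
# are hypothetical placeholders for illustration only.

def make_batch_script(job_name, partition, ntasks, time_limit, command):
    """Return the text of a simple Slurm batch script."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",  # selects the node type/queue
        f"#SBATCH --ntasks={ntasks}",        # e.g. 40 tasks for a 40-core node
        f"#SBATCH --time={time_limit}",      # wall-clock limit (HH:MM:SS)
        "",
        command,
        "",
    ])

script = make_batch_script("demo", "short-40core", 40, "01:00:00",
                           "srun ./my_mpi_app")
print(script)
```

The generated text would be saved to a file and handed to the scheduler (e.g. with sbatch); the same pattern extends to GPU or high-memory partitions by changing the partition name and adding resource directives.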

Recent Upgrades (2023)

The $1.6 million upgrade introduced:

  • HPE ProLiant DL360 Gen11 servers
  • Intel Xeon CPU Max Series processors with high-bandwidth memory
  • Enhanced memory bandwidth resulting in 2-4x faster application performance
  • Four nodes with 1TB DDR5 memory configured in cache mode

Key Capabilities

High-Speed Networking

SeaWulf's nodes are interconnected via Nvidia's high-speed InfiniBand fabric, delivering data transfer rates of 5 to 50 gigabytes per second. This networking infrastructure ensures efficient communication between nodes for parallel computing workloads.
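To put the quoted range in perspective, a back-of-envelope calculation shows how interconnect bandwidth bounds the time to move data between nodes (the dataset size here is an arbitrary example, not a SeaWulf figure):

```python
# Back-of-envelope: time to move a dataset across the fabric at the
# quoted 5-50 GB/s InfiniBand bandwidth range.

def transfer_time_seconds(size_gb, bandwidth_gb_per_s):
    """Ideal transfer time, ignoring latency and protocol overhead."""
    return size_gb / bandwidth_gb_per_s

for bw in (5, 50):
    t = transfer_time_seconds(100, bw)  # a hypothetical 100 GB dataset
    print(f"{bw} GB/s -> {t:.0f} s")
# 5 GB/s -> 20 s
# 50 GB/s -> 2 s
```

Real transfers also pay per-message latency and protocol overhead, so these are lower bounds; the point is that the high end of the range cuts bulk data movement by an order of magnitude.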

Heterogeneous Computing

The cluster supports both CPU and GPU computing paradigms, making it suitable for a wide range of computational approaches including traditional HPC workloads, machine learning, and AI applications.
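Application code that targets such a heterogeneous cluster usually probes for an accelerator at runtime and falls back to the CPU. A minimal, framework-free sketch (a real workflow would query its ML framework, e.g. a CUDA-availability check, rather than looking for a driver utility):

```python
# Sketch: choosing a compute device on a heterogeneous cluster.
# Checks whether the NVIDIA driver utility is on PATH as a crude
# stand-in for a proper CUDA-availability check.
import shutil

def pick_device():
    """Return 'gpu' if nvidia-smi is available on this node, else 'cpu'."""
    return "gpu" if shutil.which("nvidia-smi") else "cpu"

print(pick_device())
```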

Memory Architecture

With the integration of Intel Xeon CPU Max Series processors featuring high-bandwidth memory, SeaWulf offers significant advantages for memory-intensive applications. The HBM technology dramatically improves data movement between memory and processors.
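Memory-bandwidth advantages like HBM's are conventionally probed with a STREAM-style "triad" kernel (a[i] = b[i] + s*c[i]). A rough sketch of that measurement, assuming NumPy is available; the result reflects effective, not peak, bandwidth:

```python
# Sketch: STREAM-style "triad" kernel, the classic probe for memory
# bandwidth. Bandwidth-bound kernels like this are where HBM nodes
# pull ahead of DDR-only nodes.
import time
import numpy as np

def triad_bandwidth_gb_s(n=10_000_000, scalar=3.0):
    """Estimate effective memory bandwidth from one triad pass."""
    b = np.ones(n)
    c = np.ones(n)
    t0 = time.perf_counter()
    a = b + scalar * c          # a[i] = b[i] + s*c[i]
    dt = time.perf_counter() - t0
    # three arrays of 8-byte floats (read b, read c, write a) cross memory
    return (3 * n * 8) / dt / 1e9

print(f"~{triad_bandwidth_gb_s():.1f} GB/s effective bandwidth")
```

A single-pass Python estimate is noisy; real benchmarks repeat the kernel and take the best run, but the same arithmetic underlies the 2-4x application speedups cited for the HBM nodes.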

Research Applications

SeaWulf supports a diverse array of research disciplines and computational methodologies:

  • Computational Physics: Large-scale simulations and modeling
  • Climate Science: Weather prediction and climate modeling
  • Bioinformatics: Genomics, proteomics, and molecular dynamics
  • Materials Science: Quantum mechanical calculations and materials discovery
  • Machine Learning & AI: Deep learning model training and inference
  • Engineering: Computational fluid dynamics and structural analysis
  • Data Science: Big data analytics and statistical computing

Access and Partnerships

SeaWulf serves both the Stony Brook University research community and external industrial partners, fostering collaboration between academia and industry. The system is available to researchers across all disciplines who require high-performance computing resources for their work.

Note: Access to SeaWulf requires proper account setup and approval through Stony Brook University's IACS Ticketing System.