SeaWulf is a computational cluster built with top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. Its name is a portmanteau of "Seawolf" and "Beowulf," the name of one of the first high-performance computing clusters.
- Over 400 nodes and ~23,000 cores, with peak performance of ~1.86 PFLOP/s* available for research computation.
- 164 compute nodes from Penguin Computing, each with two Intel Xeon E5-2683v3 CPUs, on an FDR InfiniBand network.
- The CPUs are codenamed "Haswell," offer 14 cores each, and operate at a base speed of 2.0 Gigahertz.
- 8 of these compute nodes contain GPUs.
- A total of 28 Nvidia Tesla K80 24GB accelerators, offering 64 GK210 (K40) cores (159,744 CUDA cores).
- One node with 2 Tesla P100 GPUs and one node with 2 Tesla V100 GPUs are also accessible.
- Each node has 128 Gigabytes of DDR4 memory (~20 GB reserved for the system), configured as 8 memory modules each operating at 2,133 Mega-Transfers per second, for a combined memory bandwidth of 133.25 Gigabytes** per second per node.
- 64 compute nodes from Dell, each with two Intel Xeon Gold 6148 CPUs (codenamed "Skylake") with 20 cores each operating at a base speed of 2.4 Gigahertz, and 192 Gigabytes of RAM, on an FDR InfiniBand network.
- 48 compute nodes from HPE, each with two AMD EPYC 7643 CPUs (codenamed "Milan") with 48 cores each operating at a base speed of 3.2 Gigahertz, and 256 Gigabytes of RAM, on an HDR100 InfiniBand network.
- 11 GPU compute nodes from Dell, each with 4x Nvidia A100 80GB GPUs and two Intel Xeon Gold 6338 CPUs (codenamed "Ice Lake") with 32 cores each operating at a base speed of 2.0 Gigahertz, and 256 Gigabytes of RAM, on an HDR100 InfiniBand network.
- 94 compute nodes from HPE, each with two Intel Xeon Max 9468 CPUs (codenamed "Sapphire Rapids w/ HBM") with 48 cores each operating at a base speed of 2.6 Gigahertz, and 256 Gigabytes of DDR5 RAM plus 128 Gigabytes of HBM2 RAM, on an NDR InfiniBand network.
- The system also features two large-memory nodes with 3 TB of RAM:
- A DDR4 system with 4 Intel E7-8870v3 processors with 18 cores each operating at 2.1 Gigahertz, for a total of 72 cores and 144 threads (via Hyper-Threading).
- A DDR5 system with 4 Intel 8360H processors with 24 cores each operating at 3.0 Gigahertz, for a total of 96 cores.
- Additionally, the system includes five login nodes.
- The nodes are interconnected via high-speed InfiniBand® (IB) networks from Nvidia®, allowing transfer speeds of 5 to 50 Gigabytes per second.
- The storage array is a GPFS solution with ~4 Petabytes of SAS spinning disk and ~50 Terabytes of SSD. The SSD portion can sustain over 4,000,000 4K random-read Input/Output Operations per Second (IOPS) and sequential reads of over 36 Gigabytes per second.
* Peak performance = cores per node × number of nodes × base clock per core × double-precision FLOPs per cycle, plus the Nvidia GPUs' Teraflops.
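As an illustration of this formula, the minimal sketch below estimates the CPU-only contribution of the 164-node Haswell partition, assuming 16 double-precision FLOPs per cycle per core (Haswell's AVX2 with two FMA units); the other CPU partitions and the GPU Teraflops would be added in the same way.

```python
# Sketch of the peak-performance formula, applied to the Haswell partition only.
# The 16 DP FLOPs/cycle value is an assumption based on Haswell's two 256-bit FMA units.
nodes = 164                  # Penguin Computing compute nodes
cores_per_node = 2 * 14      # two 14-core Xeon E5-2683v3 CPUs per node
base_clock_hz = 2.0e9        # 2.0 GHz base clock
dp_flops_per_cycle = 16      # 2 FMA units x 4 doubles x 2 ops per fused multiply-add

peak = nodes * cores_per_node * base_clock_hz * dp_flops_per_cycle
print(f"Haswell partition peak: {peak / 1e12:.1f} TFLOP/s")  # ~146.9 TFLOP/s
```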
** Memory clock of 1,066 MegaHertz × 2 (Double Data Rate) × 64-bit memory bus width × 4 memory interfaces per CPU (quad-channel) × 2 CPUs per node ÷ 8 bits per byte.
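For reference, a minimal sketch reproducing that per-node bandwidth arithmetic (the final division by 1,024 is how the 133.25 figure quoted above is obtained):

```python
# Sketch of the per-node memory bandwidth calculation from footnote **.
memory_clock_mhz = 1066      # DDR4-2133 memory clock
data_rate = 2                # Double Data Rate
bus_width_bits = 64          # per memory channel
channels_per_cpu = 4         # quad-channel memory controller
cpus_per_node = 2

bandwidth_mb_per_s = (memory_clock_mhz * data_rate * bus_width_bits
                      * channels_per_cpu * cpus_per_node) / 8
print(f"{bandwidth_mb_per_s / 1024:.2f}")  # 133.25, the per-node figure quoted above
```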