Announcements

February 6, 2026

HPC Support virtual office hours are resuming for the Spring 2026 semester!

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Monday 1 - 2 pm
  • Friday 1 - 2 pm

If you have questions or need assistance troubleshooting a SeaWulf or NVwulf problem, you may use this link to attend the office hours during either of the above time slots.

January 27, 2026

Two new queues have been added to NVwulf:

  • debug-h200x4
  • debug-b40x4

These "debug" queues have higher priority but lower resource limits (max of 1 GPU,  2 CPU cores, and 1 hour walltime) than other NVwulf queues.  They are designed to quickly facilitate short test runs and interactive code troubleshooting prior to submission of larger jobs. 

Please report any issues or questions to our ticketing system.

January 21, 2026

NVwulf has been expanded with the addition of three new compute nodes, each with 4 RTX PRO 6000 Blackwell edition ("B40") GPUs, two 32-core Intel Xeon 6530P processors (64 cores per node), and 512 GB of DDR5 system memory.

These nodes may be accessed via the b40x4 and b40x4-long queues.
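
As a sketch (again assuming standard Slurm directives; the application name is a placeholder), a job requesting all four B40 GPUs on one of these nodes might look like:

    #!/bin/bash
    #SBATCH -p b40x4                 # or b40x4-long for longer walltimes
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --gres=gpu:4             # all four B40 GPUs on the node
    #SBATCH --cpus-per-task=32       # stays within the 62 usable cores per node

    nvidia-smi                       # list the allocated GPUs
    ./my_gpu_application             # placeholder for your own application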

December 18, 2025

SeaWulf virtual office hours have ended for the semester. Please stay tuned for a future announcement regarding virtual office hours for next semester.

November 4, 2025

In order to provide a more robust and higher-performing storage platform, we will be performing upgrades to the SeaWulf cluster starting at 9 AM on Monday, November 17th and concluding by the end of business on Tuesday, November 18th.

During this maintenance window, all login nodes, compute nodes, and queues on SeaWulf will be offline. We thank you for your patience while these necessary upgrades are completed.

October 17, 2025

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues, along with login nodes login1 and login2, will be going offline for scheduled maintenance at 4 PM on Monday, November 3rd. The maintenance is expected to conclude by lunchtime on Tuesday, November 4th.

The 40-core, 96-core, and A100 GPU queues will NOT be impacted by this maintenance. Similarly, the Milan1 and Milan2 login nodes will remain available.

We thank you for your patience while these necessary tests are conducted.

October 6, 2025

To ensure stable system performance, we have changed the node configuration on NVwulf to reserve two cores for system processes. Please ensure that job allocations request no more than 62 cores per node.  Requests for more than 62 cores per node will result in an error stating, "Unable to allocate resources: Requested node configuration is not available."
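
For reference, a full-node CPU request that respects this limit would look like the following (a sketch using standard Slurm directives):

    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=62     # 62 usable cores; 2 are reserved for system processes

    # A request such as --ntasks-per-node=64 will now be rejected with:
    # "Unable to allocate resources: Requested node configuration is not available."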

September 4, 2025

SeaWulf virtual office hours are resuming for the Fall semester!

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Tuesday 2 - 3 pm
  • Friday 2 - 3 pm

If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.

August 27, 2025

We have just been notified of scheduled electrical maintenance that will be performed on the circuits feeding the Ookami and NVwulf clusters on Wednesday, September 10th. In anticipation of this maintenance, the clusters will be going offline starting at 5:00 PM on Tuesday, September 9th. We anticipate the systems will be back up on the afternoon of Wednesday, September 10th, pending timely completion of the electrical maintenance.

During this maintenance window, all login nodes, compute nodes, and storage will NOT be accessible.

We thank you for your patience while these necessary maintenance steps are performed.

June 25, 2025

Due to recent high temperatures, a portion of the SeaWulf compute nodes has been taken offline to prevent data center overheating. Most affected jobs have been requeued, but we recommend checking your job status and resubmitting if needed. The compute nodes will be brought back online over the next few days, as weather permits.  All login nodes remain available.
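
To check on your jobs, the standard Slurm commands can be used (sacct assumes job accounting is enabled; the script name is a placeholder):

    squeue -u $USER                  # list your jobs and states (R = running, PD = pending)
    sacct -u $USER --format=JobID,JobName,State,ExitCode   # review recent job outcomes
    sbatch my_job_script.slurm       # resubmit if a job was lost rather than requeued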

Thank you for your understanding as we take these necessary precautions to protect the system.