To apply necessary security updates, SeaWulf's Open OnDemand portal will be taken offline for maintenance on Tuesday, February 17th, starting at 1 PM. During this maintenance, users will not be able to log into the Open OnDemand portal or use any of its apps. Maintenance is expected to conclude by the end of business on the same day.
This maintenance will not affect other SeaWulf nodes or prevent job submission outside of the OnDemand interface. During the maintenance window, NVwulf's Open OnDemand portal will be unaffected and available for use.
We thank you for your patience while we make these necessary updates.
Announcements
Two new queues have been added to NVwulf:
debug-h200x4
debug-b40x4
These "debug" queues have higher priority but lower resource limits (a maximum of 1 GPU, 2 CPU cores, and 1 hour of walltime) than the other NVwulf queues. They are designed to facilitate short test runs and interactive code troubleshooting before larger jobs are submitted.
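As an illustration, a minimal batch script targeting one of the debug queues might look like the sketch below. This assumes NVwulf uses Slurm (the scheduler error message quoted elsewhere on this page is Slurm's); the job name and test command are placeholders, and the resource requests simply stay within the limits listed above.

```shell
#!/bin/bash
#SBATCH --job-name=debug-test    # placeholder job name
#SBATCH -p debug-h200x4          # or debug-b40x4
#SBATCH --gres=gpu:1             # debug queues allow at most 1 GPU...
#SBATCH --cpus-per-task=2        # ...at most 2 CPU cores...
#SBATCH --time=01:00:00          # ...and at most 1 hour of walltime

# Quick sanity checks before scaling up in a regular queue
nvidia-smi
srun python my_test_script.py    # placeholder test command
```

Once a short test run completes cleanly here, the job can be resubmitted to a regular queue with full resource requests.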
Please report any issues or questions to our ticketing system.
NVwulf has been expanded with the addition of three new compute nodes, each with 4 RTX PRO 6000 Blackwell edition ("B40") GPUs, two 32-core Intel Xeon 6530P processors (64 cores per node), and 512 GB of DDR5 system memory.
These nodes may be accessed via the b40x4 and b40x4-long queues.
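As a sketch, an interactive session on one of these nodes could be requested as follows, assuming Slurm; the partition name comes from this announcement, while the GPU, core, and time requests are purely illustrative.

```shell
# Start an interactive shell on a B40 node with 1 GPU and 8 cores for 2 hours
srun -p b40x4 --gres=gpu:1 --cpus-per-task=8 --time=02:00:00 --pty bash
```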
SeaWulf virtual office hours have ended for the semester. Please stay tuned for a future announcement regarding virtual office hours for next semester.
To provide a more robust and higher-performing storage platform, we will be performing upgrades to the SeaWulf cluster starting at 9 AM on Monday, November 17th and concluding by the end of business on Tuesday, November 18th.
During this maintenance window, all login nodes, compute nodes, and queues on SeaWulf will be offline. We thank you for your patience while these necessary upgrades are completed.
To allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues and the login1 and login2 login nodes will go offline for scheduled maintenance at 4 PM on Monday, November 3rd. The maintenance is expected to conclude by lunchtime on Tuesday, November 4th.
The 40-core, 96-core, and A100 GPU queues will NOT be impacted by this maintenance. Similarly, the Milan1 and Milan2 login nodes will remain available.
We thank you for your patience while these necessary tests are conducted.
To ensure stable system performance, we have changed the node configuration on NVwulf to reserve two cores for system processes. Please ensure that job allocations request no more than 62 cores per node. Requests for more than 62 cores per node will result in an error stating, "Unable to allocate resources: Requested node configuration is not available."
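For instance, a full-node request under the new limit might look like the following sketch (assuming Slurm, which matches the error message quoted above; the application name and walltime are placeholders):

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=62    # 64 cores minus the 2 reserved for system processes
#SBATCH --time=04:00:00         # illustrative walltime

srun ./my_application           # placeholder command
```

Requesting `--ntasks-per-node=64` (or otherwise exceeding 62 cores on a node) will trigger the allocation error described above.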
SeaWulf virtual office hours are resuming for the Fall semester!
Members of our HPC Support team will be holding virtual office hours at the following times:
- Tuesday 2 - 3 pm
- Friday 2 - 3 pm
If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend office hours during either of the above time slots.
We have just been notified of scheduled electrical maintenance that will be performed on the circuits feeding the Ookami and NVwulf clusters on Wednesday, September 10th. In anticipation of this maintenance, the clusters will go offline starting at 5:00 PM on Tuesday, September 9th. We anticipate that the systems will be back up on the afternoon of Wednesday, September 10th, pending timely completion of the electrical maintenance.
During this maintenance window all login nodes, compute nodes, and the storage will NOT be accessible.
We thank you for your patience while these necessary maintenance steps are performed.
