Announcements

September 4, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues, as well as login nodes login1 and login2, will be going offline for scheduled maintenance at 4 PM on Monday, October 14th. The maintenance is expected to conclude by lunchtime on Tuesday, October 15th.

We thank you for your patience while these necessary tests are conducted.

August 2, 2024

Due to a momentary power outage in the Laufer Center building, the HBM Xeon Max nodes briefly lost power. They are currently being rebooted. Jobs that were interrupted will need to be resubmitted. We apologize for this disruption and thank you for your patience.

Update: all HBM nodes have been rebooted and are available for jobs.  The 28-core nodes also briefly lost power and are being rebooted.  Once again, we thank you for your patience while we work to resolve these issues.

June 27, 2024

The gcc-stack and intel-stack modules have been updated to provide the latest available compiler and MPI releases. For gcc-stack, this is GCC 13.2.0 and MVAPICH2 2.3.7. For intel-stack, this is the oneAPI 24.2 release.

The compilers and MPI previously provided by gcc-stack and intel-stack are still available as individual modules for those who may prefer to use them.
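
For example, the updated toolchains can be loaded as follows (a minimal sketch; the stack module names come from this announcement, while the fallback module and version at the end are hypothetical, and module avail will list what is actually installed):

    # Load the updated GCC toolchain (GCC 13.2.0 + MVAPICH2 2.3.7)
    module load gcc-stack
    mpicc -O2 hello.c -o hello_mpi   # hello.c is a placeholder source file

    # Or load the updated Intel toolchain (oneAPI 24.2)
    module load intel-stack

    # Older compilers remain available as individual modules, e.g.:
    module load gcc/12.1.0           # hypothetical version; check module avail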

May 31, 2024

We are excited to announce the publication of an article showcasing our team's work on the performance of the new Sapphire Rapids CPUs available in SeaWulf. The article also delves into the significant influence of high bandwidth memory on computational efficiency. This analysis is essential reading for those interested in cutting-edge advancements in high-performance computing, and especially for those planning to use the new Sapphire Rapids nodes.

Read the article here.

May 30, 2024

In order to provide updated libraries and the latest functionality, the anaconda/3 module has been updated to the latest version. We recommend using this new version in most cases, but if you require the old version of the anaconda/3 module, it is available under the name anaconda/3-old.
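
For example, switching between the two versions is just a matter of which module you load (module names per this announcement):

    # Use the updated Anaconda
    module load anaconda/3

    # Or, if your workflow depends on the previous version
    module load anaconda/3-old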

February 15, 2024

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Wednesday 12 - 1 pm
  • Friday 11 am - 12 pm

If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.

February 15, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues, as well as login nodes login1 and login2, will be going offline for scheduled maintenance at 4 PM on Monday, March 11th. The maintenance is expected to conclude by lunchtime on Tuesday, March 12th.

We thank you for your patience while these necessary tests are conducted.

February 13, 2024

Four of our Intel Sapphire Rapids nodes have been updated to include 1 TB of DDR5 memory each. In order to simplify the user experience, the high bandwidth memory on these nodes has also been reconfigured from main memory to level 4 cache.  These nodes are now accessible via the hbm-1tb-long-96core queue. For more information, please see our full list of SeaWulf queues.
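
As a minimal sketch, a batch script targeting these nodes might look like the following (this assumes SeaWulf's Slurm scheduler; the queue name comes from this announcement, while the job name, walltime, and executable are placeholders):

    #!/bin/bash
    #SBATCH --job-name=hbm_job         # placeholder job name
    #SBATCH -p hbm-1tb-long-96core     # queue from this announcement
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=96       # 96 cores per node, per the queue name
    #SBATCH --time=08:00:00            # placeholder walltime

    ./my_application                   # placeholder executable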

February 07, 2024

UPDATE: the authentication issue described below has been resolved, and new connections to SeaWulf are no longer failing.

The campus Identity server, which we rely on for authenticating access to SeaWulf, is currently experiencing issues, resulting in new connections to our HPC environment failing.

September 14, 2023

We will be performing upgrades and maintenance on the SeaWulf storage on Monday, October 9th, starting at 9:00 AM. During this maintenance window, all SeaWulf login nodes and queues, as well as the storage, will NOT be available. The SeaWulf cluster is scheduled to return to normal operation by the end of business on Tuesday, October 10th.

We thank you for your patience while these necessary upgrades are completed.