November 8, 2024
As of December 23rd, 2024, we will no longer be supporting Intel Parallel Studio on SeaWulf. Existing applications built with Intel Parallel Studio will not be affected by this change. However, moving forward, we recommend using Intel oneAPI for access to the Intel compilers, Intel MPI, and MKL. Please run the command "module avail intel/oneAPI" to see the different versions available.
Please note: for use of the "Classic" Intel compilers (icc, icpc, & ifort), please load version 2023.2 or earlier, as these compilers have been removed in later versions.
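For example, a session for building with the classic compilers might look like the following minimal sketch (the exact version string below is illustrative; use one listed by "module avail"):

    # List the available oneAPI module versions
    module avail intel/oneAPI

    # Load a 2023.2-or-earlier release to get the classic compilers
    # (illustrative version string; pick one shown by "module avail")
    module load intel/oneAPI/2023.2

    # Confirm the classic compilers are now on your PATH
    icc --version && icpc --version && ifort --version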
October 23, 2024
We have just been notified of scheduled electrical maintenance that will be performed on the circuits feeding the 96-core HBM nodes and the Xeonmax login node on Tuesday, November 19th. In anticipation of this maintenance, the 96-core HBM queues will be disabled starting at 4:00 PM on Monday, November 18th. No other queues will be affected. We anticipate the 96-core HBM queues and Xeonmax to be back up on the afternoon of Tuesday, November 19th, pending timely completion of the electrical maintenance.
In addition, the DoIT networking team will be performing maintenance on November 6th between 6:00 and 7:00 AM that will disrupt connections to the SeaWulf login servers, Milan1 & 2, and the Open OnDemand portal. The network maintenance is anticipated to last only a few minutes and will not impact running or queued jobs.
We thank you for your patience while these necessary maintenance steps are performed.
September 5, 2024
Virtual office hours are resuming for the Fall semester!
Members of our HPC Support team will be holding virtual office hours at the following times:
- Monday 2 - 3 pm
- Friday 1 - 2 pm
If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.
September 4, 2024
In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues and login nodes login1 and login2 will be going offline for scheduled maintenance at 4:00 PM on Monday, October 14th. The maintenance is expected to conclude by lunchtime on Tuesday, October 15th.
We thank you for your patience while these necessary tests are conducted.
August 2, 2024
Due to a momentary power outage in the Laufer Center building, the HBM Xeon Max nodes briefly lost power. They are currently in the process of being rebooted. Jobs that were interrupted will need to be resubmitted. We apologize for this disruption and thank you for your patience.
Update: all HBM nodes have been rebooted and are available for jobs. The 28-core nodes also briefly lost power and are being rebooted. Once again, we thank you for your patience while we work to resolve these issues.
June 27, 2024
The gcc-stack and intel-stack modules have been updated to provide the latest available compiler and MPI releases. For gcc-stack, this is GCC 13.2.0 and MVAPICH2 2.3.7. For Intel, this is the oneAPI 24.2 release.
The compilers and MPI previously provided by gcc-stack and intel-stack are still available as individual modules for those who may prefer to use them.
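For example, switching between the updated stacks and an older standalone compiler might look like the following sketch (the individual module name at the end is illustrative; run "module avail" to see what is installed):

    # Load the updated GCC-based toolchain (GCC 13.2.0 + MVAPICH2 2.3.7)
    module load gcc-stack

    # Or load the updated Intel toolchain (oneAPI 24.2) instead
    module load intel-stack

    # To keep using a previous compiler, load it as an individual module
    # (illustrative name; check "module avail" for the exact versions)
    module load gcc/12.1.0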
May 31, 2024
We are excited to announce the publication of an article showcasing our team's work on the performance of the new Sapphire Rapids CPUs available in SeaWulf. The article also examines the significant influence of high bandwidth memory on computational efficiency. This analysis is essential reading for anyone interested in cutting-edge advancements in high-performance computing, and especially for those planning to use the new Sapphire Rapids nodes.
Read the article here.
May 30, 2024
In order to provide updated libraries and the latest functionality, the anaconda/3 module has been updated to the latest version. We recommend using this new version in most cases, but if you require the old version of the anaconda/3 module it is available under the name anaconda/3-old.
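For example, selecting between the two releases is a matter of which module name you load:

    # Load the updated Anaconda module (the new default)
    module load anaconda/3

    # Or fall back to the previous release if your workflows depend on it
    module load anaconda/3-old

    # Verify which Python version is active
    python --version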
February 15, 2024
Members of our HPC Support team will be holding virtual office hours at the following times:
- Wednesday 12 - 1 pm
- Friday 11 am - 12 pm
If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.
February 15, 2024
In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues and login nodes login1 and login2 will be going offline for scheduled maintenance at 4:00 PM on Monday, March 11th. The maintenance is expected to conclude by lunchtime on Tuesday, March 12th.
We thank you for your patience while these necessary tests are conducted.
February 13, 2024
Four of our Intel Sapphire Rapids nodes have been updated to include 1 TB of DDR5 memory each. In order to simplify the user experience, the high bandwidth memory on these nodes has also been reconfigured from main memory to level 4 cache. These nodes are now accessible via the hbm-1tb-long-96core queue. For more information, please see our full list of SeaWulf queues.
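As a minimal sketch, assuming SeaWulf's Slurm scheduler, a batch script targeting these nodes might look like the following (the job name, time limit, and application are placeholders; consult the queue list for the actual resource limits):

    #!/bin/bash
    #SBATCH --job-name=hbm_job          # placeholder job name
    #SBATCH -p hbm-1tb-long-96core      # the 1 TB DDR5 Sapphire Rapids queue
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=96        # one task per core on a 96-core node
    #SBATCH --time=08:00:00             # placeholder; stay within queue limits

    # Replace with your actual application
    srun ./my_application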
February 07, 2024
UPDATE: the authentication issue described below has been resolved, and new connections to SeaWulf are no longer failing.
The campus Identity server, which we rely on for authenticating access to SeaWulf, is currently experiencing issues, resulting in new connections to our HPC environment failing.
September 14, 2023
We will be performing upgrades & maintenance on the SeaWulf storage on Monday, October 9th, starting at 9:00 AM. During this maintenance window, all SeaWulf login nodes and queues, as well as the storage, will NOT be available. The SeaWulf cluster is scheduled to return to normal operation by the end of business on Tuesday, October 10th.
We thank you for your patience while these necessary upgrades are completed.
August 12, 2023
We have just been notified of scheduled electrical maintenance at the Campus Data Center on Tuesday August 22nd. During these necessary electrical upgrades, the 28-core and GPU (K80) queues will be offline and jobs running on those queues will be terminated, starting at 9:00 AM on Tuesday the 22nd. The maintenance is expected to be completed by the end of business on the 22nd.
No other queues will be affected, and the login nodes will remain accessible during this maintenance period.
We thank you for your patience while these necessary upgrades are completed.