Announcements

November 8, 2024

As of December 23rd, 2024, we will no longer support Intel Parallel Studio on SeaWulf. Existing applications built with Intel Parallel Studio will not be affected by this change. Going forward, however, we recommend Intel oneAPI for access to the Intel compilers, Intel MPI, and MKL. Please run the command "module avail intel/oneAPI" to see the available versions.

Please note: to use the "Classic" Intel compilers (icc, icpc, and ifort), please load version 2023.2 or earlier, as these compilers have been removed from later releases.

October 23, 2024

We have just been notified of scheduled electrical maintenance that will be performed on the circuits feeding the 96-core HBM nodes and the Xeonmax login node on Tuesday, November 19th. In anticipation of this maintenance, the 96-core HBM queues will be disabled starting at 4:00 PM on Monday, November 18th. No other queues will be affected. We anticipate the 96-core HBM queues and Xeonmax to be back up on the afternoon of Tuesday, November 19th, pending timely completion of the electrical maintenance.

In addition, the DoIT networking team will be performing maintenance that will disrupt connections to the SeaWulf login servers, Milan1 & 2, and the Open OnDemand Portal on November 6th, between 6 and 7 AM. The network maintenance is anticipated to last only a few minutes and will not impact running or queued jobs.

We thank you for your patience while these necessary maintenance steps are performed.

September 4, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues, as well as login nodes login1 and login2, will be going offline for scheduled maintenance at 4 PM on Monday, October 14th. The maintenance is expected to conclude by lunchtime on Tuesday, October 15th.

We thank you for your patience while these necessary tests are conducted.

August 2, 2024

Due to a momentary power outage in the Laufer Center building, the HBM Xeon Max nodes briefly lost power. They are currently in the process of being rebooted. Jobs that were interrupted will need to be resubmitted. We apologize for this disruption and thank you for your patience.

Update: all HBM nodes have been rebooted and are available for jobs. The 28-core nodes also briefly lost power and are being rebooted. Once again, we thank you for your patience while we work to resolve these issues.

June 27, 2024

The gcc-stack and intel-stack modules have been updated to provide the latest available compiler and MPI releases. For gcc-stack, this is GCC 13.2.0 and MVAPICH2 2.3.7. For intel-stack, this is the oneAPI 24.2 release.

The compilers and MPI previously provided by gcc-stack and intel-stack are still available as individual modules for those who may prefer to use them.

May 31, 2024

We are excited to announce the publication of an article showcasing our team's work on the performance of the new Sapphire Rapids CPUs available in SeaWulf. The article also examines the significant influence of high bandwidth memory on computational efficiency. This analysis is essential reading for anyone interested in cutting-edge advances in high-performance computing, and especially for users of the new Sapphire Rapids nodes.

Read the article here.

May 30, 2024

In order to provide updated libraries and the latest functionality, the anaconda/3 module has been updated to the latest version. We recommend using this new version in most cases, but if you require the old version of the anaconda/3 module, it is available under the name anaconda/3-old.

February 15, 2024

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Wednesday 12 - 1 pm
  • Friday 11 am - 12 pm

If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend office hours during either of the above time slots.

February 15, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues, as well as login nodes login1 and login2, will be going offline for scheduled maintenance at 4 PM on Monday, March 11th. The maintenance is expected to conclude by lunchtime on Tuesday, March 12th.

We thank you for your patience while these necessary tests are conducted.

February 13, 2024

Four of our Intel Sapphire Rapids nodes have been updated to include 1 TB of DDR5 memory each. In order to simplify the user experience, the high bandwidth memory on these nodes has also been reconfigured from main memory to level 4 cache.  These nodes are now accessible via the hbm-1tb-long-96core queue. For more information, please see our full list of SeaWulf queues.