SeaWulf FAQs

Frequently Asked Questions


Getting Started

How do I get started with SeaWulf?

See our Quick Start Guide for essential information to get you started quickly, including account setup, connecting, and running your first job.

What is SeaWulf and what can I do with it?

SeaWulf is Stony Brook's high-performance computing cluster for research computing. Learn about the cluster's capabilities, hardware specifications, and available resources in our Cluster Overview.

How do I request a SeaWulf account?

Follow the instructions in Requesting Accounts & Project Numbers. You'll need a faculty sponsor and will typically receive access within 1-2 business days after approval.

How do I get a project allocation on SeaWulf?

Project allocations provide dedicated storage space for research groups. See Requesting Accounts & Project Numbers for the project request process.

Where can I find definitions of HPC terminology?

Check our Glossary of Terms for definitions of common HPC and SeaWulf-specific terminology like nodes, cores, partitions, and more.

Is there a quick reference for common commands?

Yes! See our HPC Cheat Sheet for a quick reference guide of commonly used SLURM commands and operations.

Access and Accounts

How do I connect to SeaWulf?

See our Connecting to the Cluster guide for instructions on using SSH, MobaXterm, and other remote access tools to connect to SeaWulf.

How do I set up two-factor authentication with DUO?

Follow the DUO Authentication setup guide to configure two-factor authentication for secure access to SeaWulf.

How do I set up passwordless SSH login?

See Passwordless SSH for step-by-step instructions on setting up SSH keys for secure, password-free login.

How do I reset my SeaWulf password?

To reset your SeaWulf password, visit the Stony Brook IT password reset page.

What does "Connection reset by peer" or "Your account has been locked out" mean?

These errors usually occur after too many failed login attempts, which can result in either an IP block or a DUO lockout.

  • IP block: Requires assistance from HPC Support to resolve.
  • DUO lockout: Can happen if authentication requests are missed. DUO lockouts typically clear on their own within a few hours but can be expedited by submitting a request to DoIT Support.

If problems persist, ensure you are using the correct password and that DUO authentication is functioning properly. For more details, see our Connecting to the Cluster guide.

How do I use graphical applications on SeaWulf?

See our X11 Forwarding guide to learn how to run GUI applications on SeaWulf and display them on your local computer.

How do I get remote troubleshooting support on SeaWulf?

Submit a ticket to HPC Support with a detailed description of your issue, including:

  • Error messages you received
  • Commands you ran and their output
  • Your username and the node where the issue occurred

For real-time assistance, check if walk-in office hours are available. See When to Ask for Help for more guidance.

Granting Remote Access: Only do this after submitting a ticket and being asked by HPC Support. On SeaWulf, run:

module load remote-assistance
request-support -t TICKET_NUMBER

This creates a temporary session that allows HPC Support to troubleshoot using your environment. You can cancel it at any time with:

request-support --cancel

Architecture and Resources

What queues (partitions) are available on SeaWulf?

See our Queues Table for complete specifications including cores, memory, time limits, and GPU availability for each queue.

Which queue should I use for my job?

Use our Queue Selection Guide which includes a decision tree and practical tips to help you choose the most efficient queue for your workload.

What if my job doesn't need all the resources on a node?

Use the shared node queues! SeaWulf provides shared queues where multiple users' jobs can run simultaneously on the same node.

Specify only the CPUs and memory you actually need using:

#SBATCH --cpus-per-task=N
#SBATCH --mem=XGB

This is more efficient than requesting an entire node and helps your job start faster. The main shared queues are short-shared and long-shared. See Shared Nodes and Queue Selection Guide for details.
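A minimal shared-queue script might look like the following sketch (the short-shared queue is named above; adjust the CPU, memory, and time requests for your workload, and the program name is a placeholder):

#!/bin/bash
#SBATCH --job-name=small_analysis
# shared queue named above
#SBATCH -p short-shared
# request only the cores and memory you actually need
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --time=01:00:00

./my_analysis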

How do I use GPU nodes?

See our GPU Nodes guide for requesting GPUs, loading CUDA modules, and example GPU job scripts.
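As a rough sketch, a GPU batch script follows the same pattern as any other job, with a GPU request added (the queue placeholder and CUDA module name below are assumptions; check the GPU Nodes guide and module avail for the exact names on SeaWulf):

#!/bin/bash
# replace <gpu_queue> with a GPU queue from the Queues Table
#SBATCH -p <gpu_queue>
# request one GPU
#SBATCH --gres=gpu:1
#SBATCH -N 1
#SBATCH --time=01:00:00

# module name is illustrative; check `module avail cuda` for available versions
module load cuda
./my_gpu_program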

What are the high-bandwidth memory (HBM) nodes?

These are our Sapphire Rapids nodes with high-bandwidth memory, ideal for memory-intensive applications. The recommended login node for HBM nodes is xeonmax.seawulf.stonybrook.edu, and the HBM partitions start with hbm-.

See High-Bandwidth Memory (HBM) Nodes for specifications and usage guidelines, and Getting Started with Intel Sapphire Rapids Xeon Max Nodes for a more extensive guide.

What about the AMD Milan nodes?

The AMD Milan nodes are integrated throughout our standard queues and provide excellent performance for most workloads. See the Queue Selection Guide for choosing the right queue for your needs.

Job Scheduling and Management

How do I submit a job on SeaWulf?

Use the sbatch command to submit batch jobs:

sbatch myjob.sh

For complete details on SLURM commands, see our SLURM Overview & Commands. For help writing job scripts, see Writing Job Scripts.

Can you show me an example job script?

Yes! See our collection of Example Jobs including serial, parallel, OpenMP, MPI, GPU, and multi-node job examples with detailed explanations.
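As a quick taste, a minimal serial job script looks roughly like this (the queue name appears elsewhere on this page; the program is a placeholder):

#!/bin/bash
#SBATCH --job-name=serial_test
#SBATCH -p short-40core
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --time=00:10:00

# run a single serial program
./my_program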

How do I check the status of my jobs?

Use squeue -u $USER to see your running and pending jobs. For detailed job information and monitoring strategies, see Job Management.

How do I cancel a job?

Use scancel <jobid> to cancel a specific job, or scancel -u $USER to cancel all your jobs. See SLURM Overview & Commands for more job control commands.

How do I run interactive jobs?

See our Interactive Jobs guide for running jobs interactively on compute nodes for development, testing, and debugging.

How do I monitor my job's resource usage?

Use sstat for running jobs and seff <jobid> for completed jobs to see CPU and memory efficiency. For comprehensive monitoring strategies, see Job Management.
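For example (the job ID is a placeholder; the fields are standard SLURM accounting fields):

# per-step usage of a running job (the .batch step covers a simple batch script)
sstat -j 1234567.batch --format=JobID,AveCPU,AveRSS,MaxRSS

# CPU and memory efficiency summary after a job completes
seff 1234567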

Why is my job stuck in the queue?

Jobs may wait in the queue for several reasons: all nodes in your requested queue are busy, you've reached your fairshare limit, or your resource requests exceed what's available. Check your job's reason code with:

squeue -u $USER

Common reasons include "Priority" (other jobs have higher priority), "Resources" (waiting for nodes to become available), or "QOSMaxCpuPerUserLimit" (you've hit your CPU limit). Consider using a different queue, requesting fewer resources, or waiting for your fairshare to improve. See Fairshare and Job Priority and Job Management for optimization strategies.

How do I format squeue output to see more useful information?

Use the --format or -o flag with squeue to customize output:

squeue -u $USER -o "%.18i %.9P %.30j %.8u %.2t %.10M %.6D %R"

This shows job ID, partition, job name, user, status, time running, nodes, and reason. You can create an alias in your ~/.bashrc:

alias sq='squeue -o "%.18i %.9P %.30j %.8u %.2t %.10M %.6D %R"'

See Job Management for more monitoring commands.

How do I handle output from my job?

By default, SLURM writes output to slurm-<jobid>.out in the directory where you submitted the job. Control this with:

#SBATCH --output=filename.out
#SBATCH --error=filename.err

Use %j in the filename to include the job ID (e.g., output_%j.out). For better organization, create an output directory (SLURM will not create it for you, so run mkdir -p logs before submitting):

#SBATCH --output=logs/job_%j.out

Note: Output is buffered, so you may not see results immediately.

See Writing Job Scripts for more options.

How do I convert PBS scripts to SLURM?

Key conversions: #PBS → #SBATCH, qsub → sbatch, qstat → squeue, qdel → scancel. For detailed migration guidance, contact HPC Support or see SLURM Overview & Commands.
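As a rough sketch, here is a PBS header next to its SLURM equivalent (resource values and queue names are illustrative):

# PBS
#PBS -N myjob
#PBS -l nodes=2:ppn=40
#PBS -l walltime=01:00:00
#PBS -q short-40core

# SLURM equivalent
#SBATCH --job-name=myjob
#SBATCH -N 2
#SBATCH --ntasks-per-node=40
#SBATCH --time=01:00:00
#SBATCH -p short-40core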

Storage and File Transfer

What storage spaces are available on SeaWulf?

See File Storage Layout for information about home directories, scratch space, and project directories, including their purposes and usage guidelines.

How do I check my storage quota?

Use the myquota command to check your home directory usage. For more details, see Checking Storage Quotas for usage instructions and Storage Policies for limits and guidelines.

How do I request more storage or a project space?

For storage increase requests or new project space allocation, see File Storage Layout or contact HPC Support with your requirements.

How do I transfer files to and from SeaWulf?

See File Transfer with rsync, scp, sftp for command-line methods. For large transfers, consider using Globus File Transfer.
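For example (the hostname appears elsewhere on this page; the username and remote paths are placeholders):

# copy a single file to your home directory
scp results.csv username@login.seawulf.stonybrook.edu:~/

# copy a directory, resuming partial transfers and showing progress
rsync -avP my_data/ username@login.seawulf.stonybrook.edu:/gpfs/projects/YourProject/my_data/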

How do I use Globus for large file transfers?

Globus provides reliable, high-performance transfers for large datasets. See Globus File Transfer for setup instructions and usage guidelines.

How do I backup my SeaWulf data?

See Backing Up Data with rclone for instructions on backing up to Box or Google Drive.

How do I access my SeaWulf files from my desktop?

See CIFS Access for mounting SeaWulf storage (both RPX and general project spaces) on your local computer.

Can I use SSHFS to mount SeaWulf storage?

Yes! SSHFS allows you to mount SeaWulf directories on your local machine and access remote files as if they were stored locally.
On Linux/macOS:

sshfs -o reconnect,auto_cache <username>@login.seawulf.stonybrook.edu:<path> <mount_point>

Unmount with fusermount -u <mount_point> or sudo umount <mount_point>.

On Windows, use SFTPNetDrive (free for non-commercial use).

For installation instructions and full details, see the SSHFS documentation.

Software and Environment

What are environment modules?

Modules let you dynamically load and manage different software packages and versions. See Using Modules for an introduction to the module system.

How do I see what software is available?

Use module avail to see all available software modules. Use module spider <name> to search for specific software. See Using Modules for more commands.

What modules should I load for GCC compilers?

Load the GCC module with module load gcc. For specific version information and compiler usage, see GCC.
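For example (file names and flags are illustrative):

module load gcc
# compile a C program with optimization
gcc -O2 -o hello hello.c
# compile a Fortran program
gfortran -O2 -o sim sim.f90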

What modules should I load for Intel compilers?

Use module load intel for Intel Parallel Studio or module load intel-oneapi for the Intel oneAPI compilers. For detailed information, see Intel OneAPI.

How do I install my own software?

See Managing Your Own Software for best practices and Installing from Source for compiling software from source code.

How do I install Anaconda packages in my home or project directory?

Load the anaconda3 module and create a conda environment in your desired location:

module load anaconda3
conda create -p /path/to/your/env python=3.x

For your home directory use ~/.conda/envs/myenv, or for project space use /gpfs/projects/YourProject/envs/myenv. Activate it with:

conda activate /path/to/your/env

Then install packages normally with conda install package_name.

Warning: Be mindful of home directory quotas; project directories are better for large environments.

See Conda Environments for complete instructions.

How do I create Python environments?

See Conda Environments for Conda environments or Python Package Management for using venv and pip options.
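A minimal venv/pip sketch, assuming Python is provided by the anaconda3 module mentioned above (paths and package names are placeholders):

module load anaconda3
# create and activate a virtual environment in your home directory
python -m venv ~/envs/myproject
source ~/envs/myproject/bin/activate
# install packages into the environment, not system-wide
pip install numpy scipy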

How do I run Docker or Singularity container images on SeaWulf?

SeaWulf supports Singularity containers, but not Docker directly. You can pull Docker images and convert them to Singularity format:

module load singularity
singularity pull docker://repository/image:tag

Run containers with:

singularity exec image.sif command

Or start a shell with:

singularity shell image.sif

Singularity containers can access your files by default and work within SLURM jobs.
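As a sketch, the same commands work inside a batch job (the queue name appears elsewhere on this page; the image and command are placeholders):

#!/bin/bash
#SBATCH -p short-40core
#SBATCH -N 1
#SBATCH --time=01:00:00

module load singularity
# run a command from a container image in the submit directory
singularity exec image.sif command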

Note: Building containers from definition files requires elevated privileges not available on SeaWulf. Pull pre-built containers from repositories like Docker Hub, or build custom containers locally and transfer them to SeaWulf.

For more details on obtaining containers, running commands, and best practices, see the Singularity documentation or contact HPC Support.

What does "WARNING: LD_LIBRARY_PATH_modshare exists..." mean?

This warning appears when modules are loaded multiple times or in conflicting ways, typically in your ~/.bashrc or job scripts. It's usually harmless but indicates redundant module loads.

To fix it, remove any module load commands from your ~/.bashrc file and only load modules in your job scripts or interactively when needed. Check for duplicate module loads with:

module list

The warning won't affect your jobs but cleaning up module loads will make your environment more predictable. See Using Modules for best practices.

Why do I get a Segmentation fault when using Intel MPI?

Segmentation faults with Intel MPI often occur due to stack size limits or memory issues. Try increasing stack size in your job script:

ulimit -s unlimited

Also ensure you're loading compatible module versions: Intel MPI must match your Intel compiler version. Check that you've compiled your code with the same MPI library you're using at runtime. For large-scale jobs, verify you have enough memory allocated with:

#SBATCH --mem=<amount>GB

If problems persist, try the OpenMPI module instead or contact HPC Support with your specific error details.

Why do Intel compilers take so long to start running?

Intel compilers perform license checks at startup, which can be slow on login nodes, especially during busy times. The delay is normal and doesn't indicate a problem.

Tip: To minimize wait times, compile on compute nodes using an interactive job rather than login nodes:

salloc -N 1 -n 8 -p short-40core

Make sure you're loading the Intel module correctly (module load intel for Parallel Studio or module load intel-oneapi for oneAPI). Once compilation starts, performance is normal. See Using Modules for proper module usage.

How do I parallelize my work?

For embarrassingly parallel tasks and OpenMP/MPI parallelization, see the old FAQ articles until new documentation is available. Contact HPC Support for guidance on parallelization strategies.
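In the meantime, a common pattern for embarrassingly parallel work is a SLURM job array; a minimal sketch (the queue name appears elsewhere on this page; the per-task program and input naming are placeholders):

#!/bin/bash
#SBATCH -p short-40core
#SBATCH -N 1
#SBATCH -n 1
#SBATCH --time=01:00:00
# run 50 independent tasks, numbered 1-50
#SBATCH --array=1-50

# each array task processes its own input file
./process_one input_${SLURM_ARRAY_TASK_ID}.dat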

How can I parallelize my MATLAB jobs?

See the old MATLAB parallelization FAQ until new documentation is available, or contact HPC Support for assistance with MATLAB parallel computing.

Policies and Administrative

What are the rules for using login nodes?

Login nodes are for light tasks like editing files, compiling code, and submitting jobs—not for running computations. See Login Node Etiquette for appropriate use and resource limitations.

What is fairshare and how does it affect my jobs?

Fairshare ensures equitable access to resources by adjusting job priorities based on recent usage. See Fairshare and Job Priority Tips to understand job scheduling priorities and maximize throughput.
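Assuming the standard SLURM sshare command is available on SeaWulf, you can check your recent usage and fairshare factor with:

sshare -u $USER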

What are the best practices for efficient resource usage?

See Node Utilization for guidelines on efficient use of compute resources and best practices for job submission.

When should I ask for help?

See When to Ask for Help for troubleshooting guidance and when to contact HPC Support.

How should I acknowledge IACS in my publication?

Please include this acknowledgment in your publications:

"The authors would like to thank Stony Brook Research Computing and Cyberinfrastructure and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system, which was made possible by $1.85M in grants from the National Science Foundation (awards 1531492 and 2215987) and matching funds from the Empire State Development's Division of Science, Technology and Innovation (NYSTAR) program (contract C210148)."

For more specific acknowledgments regarding particular systems (SeaWulf, NVwulf) or if you received special support, contact HPC Support. Proper acknowledgment helps us demonstrate the value of HPC resources and secure continued funding and support.

Are there HIPAA concerns associated with SeaWulf?

SeaWulf is NOT HIPAA-compliant and should not be used for protected health information (PHI) or any data subject to HIPAA regulations.

If your research involves human subject data or PHI, you must use appropriate secure computing resources. Contact Stony Brook's Institutional Review Board (IRB) and DoIT for guidance on compliant computing resources. Never upload, process, or store PHI on SeaWulf under any circumstances.

Instead, request access to the ClinWulf cluster which is HIPAA-compliant.

How do I make sure permissions are set properly in my project space?

Project directories should have group ownership set to your project group. Check with:

ls -la /gpfs/projects/YourProject

Files should typically be rw-rw-r-- (664) and directories rwxrwxr-x (775) so group members can collaborate. The Linux group owner of all files should match the name of the project space. If a file or directory is instead group-owned by the user's personal group, ask the owner to chgrp it to the project group; HPC Support can also help with this.

Set defaults so new files inherit group ownership:

chmod g+s /gpfs/projects/YourProject

Fix existing permissions:

find /gpfs/projects/YourProject -type f -exec chmod 664 {} \;
find /gpfs/projects/YourProject -type d -exec chmod 775 {} \;

Note: For sensitive data, use more restrictive permissions.

If you need to change the project group, contact HPC Support. See File Storage Layout for more information.

Open OnDemand

What is Open OnDemand?

Open OnDemand provides web-based access to HPC resources without requiring SSH. See Overview of Open OnDemand for an introduction to the platform and available applications.

How do I access Open OnDemand?

See Overview of Open OnDemand for access instructions and getting started with the web portal.

How do I run Jupyter notebooks on SeaWulf?

Use Open OnDemand's Jupyter app for browser-based notebooks. See Jupyter for detailed instructions.

Can I use RStudio on SeaWulf?

Yes! See RStudio for using RStudio Server through Open OnDemand for R analysis and visualization.

Can I run MATLAB through my browser?

Yes! See MATLAB for browser-based MATLAB access through Open OnDemand.

Can I get a Linux desktop on SeaWulf?

Yes! See SeaWulf Desktop for accessing a full Linux desktop environment through your browser.

Can I use VS Code on SeaWulf?

Yes! See Code Server for running VS Code in your browser on SeaWulf compute resources.

How do I submit jobs through Open OnDemand?

See Job Composer for submitting and managing batch jobs through the Open OnDemand web interface.

Note: If the article you’re looking for isn’t listed above, it might be in our Legacy FAQ Page.

 

Still Need Help? The best way to report your issue or make a request is by submitting a ticket.