Login nodes are your gateway to SeaWulf. They are shared resources for file management, job submission, and light development work, not for running computational jobs.
## Available Login Nodes

| Login Node | Address | Hardware Access | Best For |
|---|---|---|---|
| Legacy Login | login.seawulf.stonybrook.edu | 28-core Haswell + older GPUs | Legacy applications, established workflows |
| Modern Login | milan.seawulf.stonybrook.edu | 40-core Skylake + 96-core AMD + HBM + A100 | New projects, modern HPC applications |
| Specialized Login | xeonmax.seawulf.stonybrook.edu | High-bandwidth memory nodes | Memory-intensive development and testing |

**Important:** Different login nodes provide access to different compute hardware. Choose based on your target compute resources.
## Login Node Best Practices

### DO Use Login Nodes For

- **File Management:** Copying, moving, and organizing data files
- **Job Submission:** Writing and submitting SLURM batch scripts
- **Light Development:** Compiling code, running small tests, editing scripts
- **Queue Monitoring:** Checking job status with `squeue` and managing jobs with `scancel`
- **Environment Setup:** Loading modules and setting up software
- **Quick File Processing:** Brief text processing and data inspection
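The light tasks above might look like the following shell session; the module version, file names, and script names are hypothetical placeholders, not actual SeaWulf module names:

```shell
# Organize input data (light file management)
mkdir -p ~/projects/sim01/input
mv params.txt ~/projects/sim01/input/

# Set up the environment (module name is a placeholder)
module load gcc/12.2.0

# Light development: compile a small test program
gcc -O2 -o hello hello.c

# Submit the real work to the scheduler instead of running it here
sbatch run_sim.slurm
```

Everything here finishes in seconds; the heavy lifting happens on a compute node via the submitted job.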
### DON'T Use Login Nodes For

- **Computational Work:** Running simulations, data analysis, or CPU-intensive tasks
- **Long-Running Processes:** Jobs taking more than a few minutes
- **Memory-Intensive Tasks:** Loading large datasets into memory
- **Parallel Processing:** Multi-threaded or MPI applications
- **GPU Computing:** CUDA or OpenCL applications

**Resource Limits:** Login nodes enforce strict CPU-time and memory limits. Processes exceeding these limits will be automatically terminated.
## Connection Guidelines

### SSH Connection

Connect with:

`ssh username@[login-node-address]`

- **Load Balancing:** Login addresses may route to multiple physical nodes
- **Session Persistence:** Use `screen` or `tmux` for persistent sessions
- **File Transfer:** Use `scp`, `rsync`, or `sftp` for data transfer
- **X11 Forwarding:** Add the `-X` flag for GUI applications (use sparingly)
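A typical connect-and-work session combining the points above might look like this; the username and paths are placeholders:

```shell
# Connect to a login node
ssh username@milan.seawulf.stonybrook.edu

# On the login node: start a persistent session that survives disconnects
tmux new -s work
# ...later, after reconnecting, resume where you left off:
tmux attach -t work

# From your local machine: copy results back with rsync
rsync -avP username@milan.seawulf.stonybrook.edu:projects/sim01/results/ ./results/
```

If your connection drops, any editors or monitoring commands running inside the `tmux` session keep running, and `tmux attach` restores them.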
## Etiquette and Fairness

| Resource | Guideline | Why |
|---|---|---|
| CPU Usage | Don't run processes above ~10% CPU for more than a few minutes | Shared resource among all users |
| Memory | Avoid loading large datasets (>1 GB) into memory | Limited memory shared by all users |
| Disk I/O | Minimize intensive file operations | Affects system responsiveness for everyone |
| Network | Schedule large data transfers for off-peak hours | Bandwidth is shared |
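One way to be a considerate neighbor during large transfers is to cap `rsync`'s bandwidth with its `--bwlimit` option; the 20 MB/s limit and all names below are arbitrary examples:

```shell
# --bwlimit takes KiB/s by default; 20000 is roughly 20 MB/s
rsync -avP --bwlimit=20000 ./big_dataset/ \
    username@milan.seawulf.stonybrook.edu:scratch/big_dataset/
```

A capped transfer takes longer but leaves headroom for other users sharing the login node's network link.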
## Common Tasks and Commands

### Essential Commands

| Task | Command | Purpose |
|---|---|---|
| Check queue status | `squeue -u $USER` | Monitor your jobs |
| Submit job | `sbatch script.sh` | Submit a batch job |
| Cancel job | `scancel [jobid]` | Cancel a running or queued job |
| Check disk usage | `quota -u $USER` | Monitor storage limits |
| Load software | `module load [software]` | Access installed applications |
| Interactive session | `salloc` | Get interactive compute access |
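For reference, here is a minimal sketch of a batch script you might submit with `sbatch`; the partition name, time limit, module version, and program name are placeholders, so check SeaWulf's queue documentation for real values:

```shell
#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=short        # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:30:00          # walltime limit: 30 minutes

module load gcc/12.2.0           # placeholder module name
./my_program input.txt           # the actual computational work
```

Save this as, say, `demo.slurm` and submit it with `sbatch demo.slurm`. For interactive development, a command along the lines of `salloc -N 1 -t 01:00:00` requests a compute node for an hour so you can run commands on it directly.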
**Pro Tip:** Use the `interactive` command or `salloc` to get a compute node for development work instead of using login nodes for intensive tasks.

## Troubleshooting
### Common Issues

- **Process Killed:** Likely exceeded CPU or memory limits; move the work to compute nodes
- **Slow Response:** High load on the login node; try a different login node or wait
- **Connection Refused:** The login node may be down; try an alternative login address
- **Disk Full:** Check your quota with `quota -u $USER` and clean up files
- **Module Not Found:** Use `module avail` to see available software
**Remember:** Login nodes are shared gateways, not compute resources. Always move computational work to the appropriate queues via job submission.