How do I submit interactive jobs?

This KB Article References: High Performance Computing
This Information is Intended for: Instructors, Researchers, Staff, Students
Created: 01/12/2017
Last Updated: 03/05/2025

Requesting an Interactive Session in Slurm

Before you begin, ensure that the Slurm module is loaded on your system:

module load slurm
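
Before requesting a session, it can help to confirm that the Slurm commands are available and to see which queues (partitions) exist on the cluster. For example (partition names vary by system):

module list    # slurm should appear among the loaded modules
sinfo -s       # summarized list of partitions and their node states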

Slurm provides a way to launch interactive jobs, which are useful for testing, debugging, or exploring data interactively. To start an interactive session, use the srun command with the --pty option to open a pseudo-terminal for your session:

srun -p <queue_name> --pty bash

You can further customize the command by adding options like:

  • -N <# of nodes>: Specifies the number of nodes to allocate.
  • -t <time limit>: Sets a time limit for the job (for example, 8:00:00 for 8 hours).
  • -n <# of tasks>: Specifies the total number of tasks to run across the allocated nodes.

For example, if you want to use 1 node with 28 tasks on the GPU queue for 8 hours, the command would look like:

srun -N 1 -n 28 -t 8:00:00 -p gpu --pty bash
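
Once srun grants the request, your shell prompt moves to the allocated compute node. A few quick sanity checks you can run there (assuming a standard Slurm setup):

hostname             # prints the name of the compute node, not the login node
squeue -u $USER      # the interactive job appears in the queue while it runs
echo $SLURM_JOB_ID   # Slurm sets this variable inside the allocation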

Running an Interactive Job

Once the interactive session starts, you'll be transferred from the login node to the allocated node, and your environment (including any modules you have loaded) will be carried over. This means you can immediately run your programs or test code interactively. For example, to compile and run an MPI program:

mpicc source_code.c -o my_program
mpirun ./my_program

Any output generated by your program will be printed directly to your terminal unless redirected. When you are done with your interactive session, simply type:

exit

This will return you to the login node.
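
As noted above, output goes to your terminal unless you redirect it. If you want to keep a record of a run from within the interactive session, ordinary shell redirection works; output.log below is just an illustrative file name:

mpirun ./my_program > output.log 2>&1   # capture stdout and stderr to a file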

Using salloc for MPI Across Multiple Nodes

If you need to use multiple nodes for your interactive job (such as when running MPI jobs), you should use salloc instead of srun. For example:

salloc -N 2 --ntasks-per-node=1 -p hbm-short-96core

Once you run this command, Slurm will allocate the requested nodes, and you'll see output like this:

salloc: Granted job allocation 943526
salloc: Waiting for resource configuration
salloc: Nodes xm[040,055] are ready for job

Here, the nodes xm040 and xm055 have been allocated for the job.
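
On many clusters, salloc leaves you in a shell on the login node with the allocation active. If you want to inspect exactly which nodes you were given, Slurm exports the node list into the environment; for example (output shown is illustrative):

echo $SLURM_JOB_NODELIST                      # compact form, e.g. xm[040,055]
scontrol show hostnames $SLURM_JOB_NODELIST   # expands to one hostname per line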

After allocation, you'll need to load any required modules for your environment. For instance, if you're using mpi4py for MPI in Python, you would load the module like this:

module load mpi4py/latest

With your environment set up, you can now execute your MPI job using mpirun. The example below runs a small Python script that demonstrates MPI functionality across the two allocated nodes:

mpirun python -c "from mpi4py import MPI; comm = MPI.COMM_WORLD; name = MPI.Get_processor_name(); print(f'Hello from rank {comm.rank} on node {name}')"

The output from this command will look something like this:

Hello from rank 0 on node xm040
Hello from rank 1 on node xm055

This shows that the MPI job is being run across both nodes, with each rank executing on a different node.
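
A quick way to double-check the task layout, independent of MPI, is to launch a trivial command with srun from inside the allocation. With the request above (2 nodes, 1 task per node) it should print one hostname per node:

srun hostname   # runs once per allocated task, here once on each node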

Once you're done with the interactive job, you can release the allocated resources and return to your normal login session by typing:

exit