Also see Getting Started (Using Software), the slides from the BlueHive workshop, or Using X2go for interactive applications.

### Interactive terminal sessions

One way to start an interactive session is by typing

interactive

By default this will start a session in the interactive partition for one minute.
To ask for 2 hours in the interactive partition,

interactive -t 2:00:00

or for 1 hour in the debug partition,

interactive -p debug -t 1:00:00

The interactive command defaults to the interactive partition, but other partitions may be specified with the -p option. The default allocation is 1 core and 2 GB of memory.

You can give any of the sbatch options described in Running Jobs.
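
For example, assuming the script forwards these options unchanged to the scheduler, a two-hour session with 4 cores and 8 GB of memory in the standard partition could be requested as:

interactive -p standard -t 2:00:00 -c 4 --mem=8G     (4 cores, 8 GB of memory, 2 hours)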

Alternatively, you can type

srun -t 1:00:00 --pty $SHELL -l       (This will request a 1 hour allocation and run a remote login shell)
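
The same srun options used elsewhere on this page can be added to request more resources; for example, a sketch assuming the standard partition:

srun -p standard -c 4 --mem=4G -t 1:00:00 --pty $SHELL -l       (one task with 4 cores and 4 GB of memory for one hour)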

### Using salloc

An alternative method is to use salloc, which spawns a new shell on the login node that is aware of the reservation.

salloc -t 1:00:00                     (starts a new shell on the login node)
echo $SLURM_NODELIST                  (Slurm sets several environment variables; more examples below)
srun --pty $SHELL -l                  (connects you to a shell on the compute node)
exit                                  (exits the shell on the compute node)
exit                                  (exits the allocation)
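
A few more of the standard Slurm variables can be inspected the same way from inside the allocation:

echo $SLURM_JOB_ID                    (the job ID of the allocation)
echo $SLURM_NNODES                    (the number of allocated nodes)
echo $SLURM_NTASKS                    (the number of tasks, when --ntasks was given)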

One advantage of using salloc over srun is that you can run multiple srun commands within the same job reservation:

salloc -p standard -n 1 -t 2:00:00  (allocates the resources)
srun hostname                        (runs the command hostname on the compute node(s))
module load r                        (loads the r module)
srun --pty R                         (runs R and connects standard in and out to the local terminal)
exit                                 (exits the allocation)
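
While the allocation is active, you can also check on it from the same shell:

squeue -u $USER                       (lists your jobs, including the current allocation)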

Using srun to connect to compute nodes does not forward your X11 display. Instead, you will need to use the interactive script:

interactive -p standard -t 02:00:00
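
Once the session starts (over X2Go or another X-enabled connection), you can confirm that the display reached the compute node; xclock is only an example client and may not be installed everywhere:

echo $DISPLAY                         (should print a forwarded display such as localhost:10.0)
xclock                                (opens a small clock window if X11 forwarding works)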

SEE ALSO

man salloc, man srun

### Interactive non-MPI parallel job

This example runs a command (here hostname) on 4 separate cores:

salloc -n 4 --mem-per-cpu=1000 -t 01:30:00 -p standard
srun hostname
exit

Alternatively, as a one-liner:

salloc -n 4 --mem-per-cpu=1000 -t 01:30:00 -p standard srun hostname
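
srun launches the given command once per task, so both forms above print four hostnames. To run your own non-MPI program once per task with separate output files, a step like the following can be used inside the allocation (./myprog is a placeholder and %t is srun's per-task filename pattern):

srun --output=out_%t.txt ./myprog     (writes each task's output to out_0.txt through out_3.txt)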

SEE ALSO

man salloc, man srun
### Interactive MPI-parallel job

This example will launch 24 MPI tasks on 2 nodes:

salloc -N 2 --ntasks-per-node=12 --mem-per-cpu=1000 -t 00:30:00 -p standard
srun hostname
exit

Alternatively, as a one-liner:

salloc -N 2 --ntasks-per-node=12 --mem-per-cpu=1000 -t 00:30:00 -p standard srun hostname
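
The same request can also be written as a batch script for non-interactive use; this is a minimal sketch in which hostname stands in for your MPI executable:

#!/bin/bash
#SBATCH -p standard
#SBATCH -N 2
#SBATCH --ntasks-per-node=12
#SBATCH --mem-per-cpu=1000
#SBATCH -t 00:30:00

srun hostname
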
SEE ALSO

man salloc, man srun
### OpenMP and hybrid OpenMP/MPI jobs

Use the option **--cpus-per-task** (or **-c** for short) with the commands **salloc** and **sbatch** to allocate cores for threads. The following commands both reserve four cores for the program:

salloc --cpus-per-task=4 srun hostname
salloc -c 4 srun hostname

The environment variable **OMP_NUM_THREADS** specifies the number of OpenMP threads. By default there will be one thread per core. To match the previous allocation, one would set:

setenv OMP_NUM_THREADS 4
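
The setenv form above is csh/tcsh syntax; in a bash shell the equivalent is:

export OMP_NUM_THREADS=4
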
The following commands will both run a hybrid OpenMP/MPI job. They allocate 6 tasks and four cores per task for the program:

salloc --ntasks=6 --cpus-per-task=4 srun hostname
salloc -n 6 -c 4 srun hostname
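
For reference, a hybrid job is often submitted as a batch script; this is a sketch under the same resource assumptions, where ./my_hybrid_program is a placeholder and SLURM_CPUS_PER_TASK is the variable Slurm sets from --cpus-per-task:

#!/bin/bash
#SBATCH -p standard
#SBATCH --ntasks=6
#SBATCH --cpus-per-task=4
#SBATCH -t 00:30:00

# One OpenMP thread per allocated core
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./my_hybrid_program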