7 Workflows and Best Practices

This chapter describes how we run analyses locally and on UCSB’s HPC clusters.

7.1 Commands on Pod

| Command | Use |
|---------|-----|
| `ssh {username}@pod-login1.cnsi.ucsb.edu` | Log in to Pod. |
| `srun --pty bash -i` | From within a tmux session, launch an interactive job on a normal-memory node (adds it to the queue). Note that interactive jobs do not respect `#SBATCH` directives in `.sh` files (see the interactive-session sketch below). |
| `sbatch script.sh` | Launch a non-interactive job on a normal-memory node (adds it to the queue); see the example batch script below. |
| `sbatch -p largemem script.sh` | Launch a non-interactive job on a high-memory node (adds it to the queue). |
| `squeue -u {username}` | Print information about your queued jobs, such as JOBID, NAME, and NODES. |
| `ssh node48` | Open a terminal on a given node, e.g. node 48. |
| `scontrol show job {jobID}` | Show detailed information about your job, such as its end time. |
| `top -u {username}` | Show node memory usage restricted to your own processes; the %MEM column reports each process's memory use (`htop` is not available on Pod). |
| `module load R/4.1.3 gdal/2.2.3 proj/5.2` | Load modules in a terminal on the job node. |
| `R` | Start an R session in the terminal. |
| `q()` | Quit R; the working directory switches back to `~`. |
| `sinfo -o "%n %m"` | Show the total memory per node for all nodes on the cluster. |
| `sinfo -o "%n %m %C" \| awk '$2 >= 512000'` | Show core availability (`%C`: allocated/idle/other/total CPUs) for each node with at least 500 GB of memory (512000 MB). |
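Putting the interactive commands together, a typical session might look like the sketch below. The tmux session name (`analysis`) is an arbitrary example, and the module versions are the ones listed in the table; adjust both to your own work.

```bash
# On the login node: start a named tmux session so the interactive job
# survives a dropped SSH connection (detach with Ctrl-b d, reattach with
# `tmux attach -t analysis`)
tmux new -s analysis

# Request an interactive shell on a normal-memory compute node
# (this enters the queue like any other job)
srun --pty bash -i

# Once on the compute node: load the modules, then start R
module load R/4.1.3 gdal/2.2.3 proj/5.2
R
```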
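The table refers to a generic `script.sh` for `sbatch` but does not show its contents. Below is a minimal sketch of what such a SLURM batch script could look like; the job name, resource requests, walltime, and the `analysis.R` filename are illustrative assumptions, not lab defaults.

```bash
#!/bin/bash
#SBATCH --job-name=example      # name shown in squeue (assumption: choose your own)
#SBATCH --nodes=1               # request a single node
#SBATCH --ntasks=1              # a single task
#SBATCH --cpus-per-task=4       # cores for that task (illustrative)
#SBATCH --time=12:00:00         # walltime limit (illustrative)
#SBATCH --output=slurm-%j.out   # log file; %j expands to the job ID

# Load the modules the analysis needs (versions from the table above)
module load R/4.1.3 gdal/2.2.3 proj/5.2

# Run the analysis non-interactively
Rscript analysis.R              # assumption: replace with your actual script
```

Submitting the same script with `sbatch -p largemem script.sh` sends it to the high-memory partition instead. The `#SBATCH` lines are directives read by `sbatch` at submission time (to bash they are just comments), which is why interactive `srun` sessions do not respect them.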