LRC (Lawrencium)

This page describes how to get set up to run calculations at LRC on the Lawrencium cluster.

Lawrence Berkeley National Laboratory's Laboratory Research Computing (LRC) hosts the Lawrencium supercomputing cluster. LRC operates on a condo computing model, in which many PIs and research groups purchase nodes to add to the system. Nodes are accessible to everyone with access to the system, though priority access is given to the contributors of the specific nodes. The allocation types relevant here are: Condo - priority access to the nodes contributed by the condo group; and PI Computing Allowance (PCA) - a limited amount of computing time provided to each PI using Lawrencium.


Setting up an LRC account

Please make sure you will actually be performing work on Lawrencium before requesting an account. To get an account on Lawrencium, navigate to the LRC portal, register an account, accept the user agreement, and wait for approval from the LRC team. You will also need to set up an MFA (one-time password) token for your account.

Before logging on (setup)

You must use the SSH protocol to connect to Lawrencium. Make sure you have SSH installed on your local computer (you can check this by typing which ssh). Make sure you have a directory named $HOME/.ssh on your local computer (if not, make it).
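
For example, on a typical Unix-like local machine the check and setup look like this (a sketch; the permissions line is standard SSH practice rather than an LRC requirement):

which ssh
mkdir -p $HOME/.ssh
chmod 700 $HOME/.ssh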

After your account is made, you'll need to set up 2-factor authentication. We recommend using Google Authenticator, although any OTP manager will work.

You should now be ready to log on!

Logging on to LRC

To access your shiny new Lawrencium account, you'll want to SSH onto the system from a terminal.

ssh your_username@lrc-login.lbl.gov

You will be prompted to enter your PIN+OTP (e.g. <your_pin><OTP> without any spaces). This will take you to your home directory. You may also find it useful to set up an alias for signing on to HPC resources. To do this, add the following line to your ~/.bash_profile:

alias lawrencium="ssh <your_username>@lrc-login.lbl.gov"

Now you will be able to initialize an SSH connection to Lawrencium just by typing lawrencium in the command line and pressing enter.
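
After adding the alias, reload your profile so it takes effect in the current shell (assuming bash as your login shell):

source ~/.bash_profile
lawrencium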

Running on LRC

Through the lr_mp account and the condo QoS values condo_mp_cf1 (56 CF1 CPU nodes) and condo_mp_es1 (1 ES1 GPU node), we have priority access to certain Lawrencium nodes. If you do not know which of these node groups you are supposed to be running on, you probably shouldn't be running on Lawrencium. Additionally, we have the ability to run on ES1 GPU nodes in low-priority mode (the es_lowprio QoS).
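
If you are unsure which accounts and QoS values your user can submit under, Slurm can list your associations (a sketch; the exact fields available may differ on LRC):

sacctmgr show associations user=$USER format=account,partition,qos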

Accessing Software binaries

Software within LRC is managed through modules. You can access precompiled, preinstalled software by loading the desired module.

module load <module_name>

To view a list of currently installed programs, use the following command:

module avail

To view the currently loaded modules use the command:

module list

Software modules can be removed with either of the following commands (module unload removes a single module, while module purge removes all currently loaded modules):

module unload <module_name>
module purge
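
For example, to search for, load, and confirm a specific package (using the VASP module name from the job scripts below):

module avail vasp
module load vasp/6.prerelease-vdw
module list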

Using Persson Group Owned LRC nodes

To run on the group's condo nodes, use one of the following job scripts, replacing <executable> with the desired executable. The first script is for the CF1 CPU nodes:

#!/bin/bash
# Job name:
#SBATCH --job-name=<job_name>
#
# Partition:
#SBATCH --partition=cf1
#
# QoS:
#SBATCH --qos=condo_mp_cf1
#
# Account:
#SBATCH --account=lr_mp
#
# Nodes (IF YOU CHANGE THIS YOU MUST CHANGE ntasks too!!!):
#SBATCH --nodes=1
#
# Processors (MUST BE 64xNUM_NODES ALWAYS!!!):
#SBATCH --ntasks=64
#
# Wall clock limit:
#SBATCH --time=24:00:00

## Run command

module load vasp/6.prerelease-vdw
export OMP_PROC_BIND=true
export OMP_PLACES=threads
export OMP_NUM_THREADS=1 # NEVER CHANGE THIS!!

mpirun --bind-to core <executable>

To run on the group's ES1 GPU condo node, use the following script instead:

#!/bin/bash
# Job name:
#SBATCH --job-name=<job_name>
#
# Partition:
#SBATCH --partition=es1
#
# QoS:
#SBATCH --qos=condo_mp_es1
#
# Account:
#SBATCH --account=lr_mp
#
# GPUs:
#SBATCH --gres=gpu:2
#
# CPU cores:
#SBATCH --cpus-per-task=8
#
# Constraints:
#SBATCH --constraint=es1_v100
#
# Wall clock limit:
#SBATCH --time=24:00:00

export CUDA_VISIBLE_DEVICES=0,1
module load cuda/10.0

## Run command (replace <executable> with your GPU-enabled executable)
<executable>

To run on the ES1 GPU nodes at low priority (the es_lowprio QoS), use:

#!/bin/bash
# Job name:
#SBATCH --job-name=<job_name>
#
# Partition:
#SBATCH --partition=es1
#
# QoS:
#SBATCH --qos=es_lowprio
#
# Account:
#SBATCH --account=lr_mp
#
# GPUs:
#SBATCH --gres=gpu:2
#
# CPU cores:
#SBATCH --cpus-per-task=8
#
# Constraints:
#SBATCH --constraint=es1_v100
#
# Wall clock limit:
#SBATCH --time=24:00:00

export CUDA_VISIBLE_DEVICES=0,1
module load cuda/10.0

## Run command (replace <executable> with your GPU-enabled executable)
<executable>
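
Whichever script you use, save it to a file and submit it with the standard Slurm commands (sbatch and squeue are standard Slurm tools; the script file name here is just an example):

sbatch submit_job.sh
squeue -u $USER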

Using Non-Persson Owned LRC Nodes

In addition to using Persson-owned nodes (lower wait times, limited capacity), you can also submit directly to the main LRC queue. For jobs that don't need a fast turnaround, this can be a great option because it will not saturate our condo nodes. All of the instructions are identical to the above, except the account should be set to pc_mp instead of lr_mp.
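
For example, following the note above, the only header line that changes in the scripts is the account (a sketch; consult the LRC documentation if the scheduler rejects the condo QoS under a PCA):

#SBATCH --account=pc_mp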

Interactive Jobs on the Group GPU Condo Node

To run an interactive session on the GPU node, use the following two commands to provision and log in to the node:

salloc --time=24:00:00 --nodes=1 -p es1 --gres=gpu:2 --cpus-per-task=8 --qos=condo_mp_es1 --account=lr_mp
srun --pty -u bash -i
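
Once the interactive shell starts on the GPU node, you can confirm the GPUs are visible (assuming the node provides the standard NVIDIA tooling):

nvidia-smi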
