BRC (Savio)

This page describes how to get set up on Savio, the supercomputing cluster hosted by Berkeley Research Computing (BRC).

Berkeley Research Computing (BRC) hosts the Savio supercomputing cluster. Savio operates on a condo computing model, in which many PIs and researchers purchase nodes to add to the system. Nodes are accessible to everyone with access to the system, though priority is given to the contributors of specific nodes. BRC provides several types of allocations; the two most relevant here are:

  Condo - Priority access to the nodes contributed by the condo group.
  Faculty Computing Allowance (FCA) - A limited amount of computing time provided to each faculty member using Savio.

Setting up a BRC account

Please make sure you will actually be performing work on Savio before requesting an account. To get an account on Savio, navigate to the BRC portal, register an account, make sure to select the appropriate allocation, and wait for approval from the BRC team. Most students and postdocs will typically be running on co_lsdi. For more information, visit [Berkeley Research Computing](http://research-it.berkeley.edu/services/high-performance-computing).

After your account is made, you'll need to set up 2-factor authentication. This will allow you to generate "one-time passwords" (OTPs). You will need to append an OTP to the end of your password each time you log on to the cluster. We recommend using Google Authenticator, although any OTP manager will work.

Logging on (Setup)

You must use the SSH protocol to connect to BRC. Make sure you have SSH installed on your local computer (you can check this by typing which ssh). Make sure you have a directory named $HOME/.ssh on your local computer (if not, make it).
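
For example, you can verify both of these from a terminal with something like the following (a minimal sketch):

which ssh           # prints the path to ssh if it is installed
mkdir -p $HOME/.ssh # creates the directory only if it doesn't already exist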

We also advise you to configure an SSH control socket (connection multiplexing) so that you only have to log into BRC with an OTP once per session (helpful if you are scp-ing files). To do this:

  1. Create the directory ~/.ssh/sockets if it doesn't already exist.

  2. Open your SSH config file ~/.ssh/config (or create one if it doesn't exist) and add the following:

    Host *.brc.berkeley.edu
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 600
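
For reference, steps 1 and 2 might look like the following in practice (the choice of editor here is just an example):

mkdir -p ~/.ssh/sockets   # step 1: create the sockets directory
chmod 700 ~/.ssh          # keep your SSH directory private (good practice)
nano ~/.ssh/config        # step 2: open the config file and paste in the block above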

You should now be ready to log on!

Logging on to BRC

To access your shiny new Savio account, you'll want to SSH into the system from a terminal.

ssh your_username@hpc.brc.berkeley.edu

You will be prompted to enter your passphrase+OTP (e.g. <your_password><OTP> without any spaces). This will take you to your home directory. You may also find it useful to set up an alias for signing on to HPC resources. To do this, add the following line to your ~/.bash_profile:

alias savio="ssh your_username@hpc.brc.berkeley.edu"

Now you will be able to initiate an SSH connection to Savio just by typing savio at the command line and pressing Enter.
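
With the control socket configured, later connections in the same session (including scp transfers) reuse the already-authenticated connection, so you won't be prompted for another OTP. For example (the file names below are purely illustrative):

scp my_input_file.txt your_username@hpc.brc.berkeley.edu:~/
scp your_username@hpc.brc.berkeley.edu:~/results.tar.gz .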

Running on BRC

Under the condo account co_lsdi, we have exclusive access to 28 KNL nodes. Additionally, we can run on other nodes in low-priority mode.

Accessing Software binaries

Software within BRC is managed through modules. You can access precompiled, preinstalled software by loading the desired module.

module load <module_name>

To view a list of currently installed programs, use the following command:

module avail

To view the currently loaded modules, use the following command:

module list

Software modules can be removed using either of the following commands (module purge unloads all currently loaded modules):

module unload <module_name>
module purge
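
As a concrete example, a typical session might clear any previously loaded modules and then load what it needs (the module names below are illustrative; run module avail to see the exact names available on Savio):

module purge        # start from a clean module environment
module load intel   # example: a compiler/MPI stack
module load vasp    # example: an application module
module list         # confirm what is now loaded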

Accessing In-House software packages

The Persson Group maintains its own builds of popular codes such as VASP, GAUSSIAN, QCHEM, and LAMMPS. To access these binaries, ensure that you have the proper licenses and permissions, then append the following line to the .bashrc file in your home directory:

export MODULEPATH=${MODULEPATH}:/global/home/groups/co_lsdi/sl7/modfiles
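
For example, you could append the line and reload your shell configuration as follows (a simple sketch; you can also edit .bashrc directly in a text editor):

echo 'export MODULEPATH=${MODULEPATH}:/global/home/groups/co_lsdi/sl7/modfiles' >> ~/.bashrc
source ~/.bashrc
module avail   # the group's modules should now show up in the listing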

Using Persson Group Owned KNL nodes

To run on the KNL nodes, use the following job script, replacing <executable> with the desired executable. To run VASP after loading the proper module, use vasp_std, vasp_gam, or vasp_ncl as the executable.

#!/bin/bash -l
#SBATCH --nodes=1                 #Use 1 node
#SBATCH --ntasks=64               #Use 64 tasks for the job
#SBATCH --qos=lsdi_knl2_normal    #Set job to normal qos
#SBATCH --time=01:00:00           #Set 1 hour time limit for job
#SBATCH --partition=savio2_knl    #Submit to the KNL nodes owned by the Persson Group
#SBATCH --account=co_lsdi         #Charge to co_lsdi account
#SBATCH --job-name=savio2_job     #Name for the job

mpirun --bind-to core <executable>
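
Once the script is saved (as knl_job.sh here, a name chosen purely for illustration), submit it and check its status with the standard SLURM commands:

sbatch knl_job.sh   # submit the job; prints the assigned job ID
squeue -u $USER     # list your queued and running jobs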

Running on Haswell Nodes (Low Priority)

To run on the Haswell nodes, use the following SLURM submission script:

#!/bin/bash -l
#SBATCH --nodes=1                 #Use 1 node
#SBATCH --ntasks-per-core=1       #Use 1 task per core on the node
#SBATCH --qos=savio_lowprio       #Set job to low priority qos
#SBATCH --time=01:00:00           #Set 1 hour time limit for job
#SBATCH --partition=savio2        #Submit to the Haswell nodes
#SBATCH --account=co_lsdi         #Charge to co_lsdi account
#SBATCH --job-name=savio2_job     #Name for the job

mpirun --bind-to core <executable>
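
The submission workflow is the same as for the KNL script above. Once a job is running, a few other standard SLURM commands are useful for managing it (replace <job_id> with the ID printed by sbatch):

scontrol show job <job_id>   # show detailed information about a job
scancel <job_id>             # cancel a queued or running job
cat slurm-<job_id>.out       # view the job's output (SLURM's default output file name)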
