Using Conda environments on NERSC systems
Conda environments ("envs") are a great way to manage package and library versions. We frequently need a specific configuration, and one project's package versions can conflict with another project's needs. Conda envs let you create separate, isolated environments in which you can install whatever package versions you like without affecting anything outside the environment.
By default, `conda` is not on your PATH, and you may get an error when trying to call `conda ...`. Get it on your PATH by:
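One way to do this (a sketch; the exact module name can change, so check `module avail python` or the NERSC docs) is to load the NERSC Python module, which provides conda:

```bash
module load python
```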
Note: you can add that line to your `~/.bash_profile` so you don't have to do this each time you log in.
To create a new named conda environment, use the following command. In this example we create an environment named `my_env` with Python 3.8:
conda create -n my_env python=3.8
To enter your environment, activate it using its name:
conda activate my_env
To list your available environments:
conda env list
By default, a new environment is stored in your `$HOME` directory (e.g. `/global/homes/m/<username>/.conda/envs`). Each of us has a 40 GB quota for `$HOME`, and conda environments can get quite big, which can cause out-of-quota problems. So, let's change the default environment directory to avoid this.
You should have access to `/global/common/software/matgen/` (or `/global/common/software/jcesr`, depending on the account you have access to). Create a directory under your username (to store all your software), e.g.:
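For example (a sketch; `matgen` and `<username>` are placeholders for your own project and username):

```bash
mkdir /global/common/software/matgen/<username>
```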
Within your directory, create a directory to store conda environments (assuming we want to store it at `.../<username>/conda/envs`):
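Continuing the example above (paths are placeholders):

```bash
mkdir -p /global/common/software/matgen/<username>/conda/envs
```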
Then configure conda to prepend the directory we've created to `envs_dirs`:
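A sketch of the command, assuming the path created above:

```bash
conda config --prepend envs_dirs /global/common/software/matgen/<username>/conda/envs
```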
This is all you need to do.
To check that it was successful, you can view your conda settings by:
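For instance (a sketch; `conda config --show` without arguments lists everything):

```bash
conda config --show envs_dirs
```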
You will find something like the following:
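Illustrative output, assuming the `matgen` path above (your paths will differ):

```
envs_dirs:
  - /global/common/software/matgen/<username>/conda/envs
  - /global/homes/m/<username>/.conda/envs
```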
Alternatively, you can open `~/.condarc` to see all the changes you've made. You can also edit it directly to remove changes or add new ones.
When you install a package, it is first downloaded to `$HOME` (e.g. `/global/homes/m/<username>/.conda/pkgs`). You can change the default package storage directory as well:
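A sketch, mirroring the `envs_dirs` change above (`pkgs_dirs` is the corresponding conda setting; the path is a placeholder):

```bash
conda config --prepend pkgs_dirs /global/common/software/matgen/<username>/conda/pkgs
```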
Again, you may need to change `matgen` to the account you have access to and, of course, change `<username>` to your username.
Find solutions to frequently encountered issues below, e.g. "home directory over quota".
We are limited to 40 GB of files in our home directories. This error indicates you have exceeded that limit.
Run `showquota` or `myquota` to see your file system space usage. If 'home' is at 40 GB or greater, that's your issue.
Run `du -sh *` in your home directory and look for any large directories. Run `du -sh .[^.]*` in your home directory and look for any large dot directories. Common culprits are a large `.cache` directory and a large `.conda` directory.
Alternatively, you can use the Ncurses Disk Utility tool via shifter to view your home directory usage with an ncurses GUI:
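A sketch of the idea; the image name below is a placeholder for any container image that includes `ncdu`, not a specific supported one:

```bash
shifter --image=docker:<image-with-ncdu>:latest ncdu $HOME
```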
Quick solution: `conda clean --all`
Permanent solution: change your conda env directory to a project directory (as described above).
Quick solution: delete it! `rm -rf .cache` (run from your home directory)
Is this safe to do? Yes (probably), but you may want to move a backup of the directory to a project directory if you're worried about it.
This page describes setup for running calculations at NERSC's Perlmutter HPC.
Ask Kristin about whether you will be running at NERSC and, if so, under what account / repository to charge.
Request a NERSC account through the NERSC homepage (Google “NERSC account request”).
A NERSC Liaison or PI Proxy will validate your account and assign you an initial allocation of computing hours.
At this point, you should be able to log in, check CPU-hour balances, etc. through the “NERSC NIM” and “My NERSC” portals.
In order to log in and run jobs on the various machines at NERSC, review the NERSC documentation.
In order to load and submit scripts for various codes (VASP, ABINIT, Quantum Espresso), NERSC has lots of information to help. Try Google, e.g. [“NERSC VASP”](https://docs.nersc.gov/applications/vasp/).
Note that for commercial codes such as VASP, there is an online form that allows you to enter your VASP license, which NERSC will confirm before granting you access. Log in to https://help.nersc.gov/, select "Open Request", and fill out the "VASP License Confirmation Request" form.
Please make a folder inside your project directory and submit all your jobs there, as your home folder has only about 40 GB of space. For example, for the matgen project, your work folder path should be something like the following:
/global/cfs/projectdirs/matgen/YOUR_NERSC_USERNAME
You can also request a mongo database for your project to be hosted on NERSC. Google [“MongoDB on NERSC”](https://docs.nersc.gov/services/databases/) for instructions. Patrick Huck can also help you get set up and provide you with a preconfigured database suited for running Materials Project style workflows.
(Optional) Set up a conda environment.
This tutorial provides a brief overview of setting yourself up to run jobs on NERSC. If any information is unclear or missing, feel free to edit this document or contact Kara Fong.
Contact the group’s NERSC Liaison (currently Rohith Srinivaas Mohanakrishnan and Howard Li, see [Group Jobs list](https://materialsproject.gitbook.io/persson-group-handbook/group-resources/group-jobs)). They will help you create an account and allocate you computational resources. You will then receive an email with instructions to fill out the Appropriate Use Policy form, set up your password, etc.
Once your account is set up, you can manage it at NERSC's [iris](https://iris.nersc.gov/).
You must use the SSH protocol to connect to NERSC. Make sure you have SSH installed on your local computer (you can check this by typing `which ssh`). You will also need to set up multi-factor authentication with NERSC. This will allow you to generate "one-time passwords" (OTPs). You will need to append an OTP to the end of your NIM password each time you log on to a NERSC cluster.
We also advise you to configure the NERSC sshproxy script so that you only have to log into NERSC with an OTP once per 24 hours (helpful if you are scp-ing things). To do this:
Download the script from NERSC to your home folder
At the terminal type ./sshproxy.sh -u <nersc_username>
Enter your password and OTP
You should now be able to log in without authenticating for 24 hours!
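Putting the steps together, a sketch (the script location follows the NERSC MFA documentation; confirm there if the path has moved):

```bash
# download sshproxy.sh from NERSC to your home folder
scp <nersc_username>@perlmutter-p1.nersc.gov:/global/cfs/cdirs/mfa/NERSC-MFA/sshproxy.sh ~/
# run it; enter your password immediately followed by your OTP when prompted
./sshproxy.sh -u <nersc_username>
# the resulting key (by default ~/.ssh/nersc) is valid for 24 hours
```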
You can set up an alias for Perlmutter (see the sketch below), or you can ssh into Perlmutter by running the following command in the terminal:
ssh perlmutter-p1.nersc.gov
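For the alias route, a minimal sketch of a `~/.ssh/config` entry (the alias, username, and key path are placeholders; the key path assumes you use sshproxy as above):

```
Host perlmutter
    HostName perlmutter-p1.nersc.gov
    User <nersc_username>
    IdentityFile ~/.ssh/nersc
```

With this in place, `ssh perlmutter` is enough to connect.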
For small files, you can use SCP (secure copy). To get a file from NERSC, use:
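A sketch (hostname, paths, and filenames are examples):

```bash
scp <nersc_username>@perlmutter-p1.nersc.gov:/path/to/remote_file /local/destination/
```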
To send a file to NERSC, use:
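And in the other direction, with the same placeholder conventions:

```bash
scp /local/path/to/local_file <nersc_username>@perlmutter-p1.nersc.gov:/path/to/destination/
```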
To move a larger quantity of data using a friendlier interface, use Globus Online.
You can also "mount" NERSC's filesystem in VSCode following the [guide here](https://code.visualstudio.com/docs/remote/ssh).
Running and monitoring jobs:
The following instructions are for running on Perlmutter.
Most jobs are run in batch mode, in which you prepare a shell script telling the batch system how to run the job (number of nodes, time the job will run, etc.). NERSC’s batch system software is called SLURM. Below is a simple batch script example, copied from the NERSC website:
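A representative sketch of such a script for Perlmutter CPU nodes (not the exact NERSC example; QOS, node count, walltime, and executable are placeholders — see the NERSC docs for currently recommended settings):

```bash
#!/bin/bash
#SBATCH --qos=debug          # which QOS/queue to use
#SBATCH --constraint=cpu     # request CPU nodes
#SBATCH --nodes=2            # number of nodes
#SBATCH --time=00:30:00      # walltime limit (hh:mm:ss)
#SBATCH --job-name=my_job

# launch the executable on the allocated nodes
srun ./my_executable
```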
Here, the first line specifies which shell to use (in this case bash). The keyword #SBATCH starts a directive line (see the SLURM sbatch documentation for a full description of the options you can specify). The word “srun” starts execution of the code.
To submit your batch script, run `sbatch myscript.sl` in the directory containing the script file.
Below are some useful commands to control and monitor your jobs:
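For example (job IDs are placeholders):

```bash
sqs                        # NERSC's summary of your queued and running jobs
squeue -u $USER            # SLURM's view of your jobs
scontrol show job <jobid>  # detailed information about a single job
scancel <jobid>            # cancel a job
sacct -j <jobid>           # accounting info for a finished job
```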
For Perlmutter GPU, the job scripts will look similar:
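A sketch of the GPU analogue (account, QOS, and resource counts are placeholders; GPU allocations are charged to a separate account, typically your project name with a `_g` suffix):

```bash
#!/bin/bash
#SBATCH --qos=debug
#SBATCH --constraint=gpu       # request GPU nodes
#SBATCH --nodes=1
#SBATCH --gpus-per-node=4
#SBATCH --time=00:30:00
#SBATCH --account=<account>_g  # GPU account name

srun ./my_gpu_executable
```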
For more options in the executable, please refer to NERSC documentation. To work with the high-throughput infrastructure, please refer to "Fireworks & Atomate" in this handbook.
You specify which queue to use in your batch file. Use the debug queue for small, short test runs, and the regular queue for production runs.
In order to automatically manage job submission at NERSC, you can use [scrontab](https://docs.nersc.gov/jobs/workflow/scrontab/). You can submit jobs periodically even when you are not signed in to any NERSC systems and perhaps reduce the queue time from 5-10 days to a few hours. This is possible because of the way jobs are managed in atomate/fireworks. Please make sure you feel comfortable submitting individual jobs via atomate before reading this section.
In atomate, if you use --maxloop 3 (for example) when setting rocket_launch in your my_qadapter.yaml, then after 3 trials in each minute, if there are still no READY jobs available in your LaunchPad, FireWorks will stop the running job on NERSC to avoid wasting computing resources. On the other hand, if you have FireWorks in the READY state and have been using scrontab for a few days, then even jobs you submitted a few days ago will, once they start running on NERSC, pull any READY FireWorks and start RUNNING them, reducing the turnaround from a few days to a few hours. So how do you set it up? Follow these instructions: 1. ssh to the node where you want to set up the crontab; pick one that is easy to remember, such as cori01 or edison01 (to log in to a specific node, run e.g. “ssh cori01” after you log in to the system, Cori in this example).
2. Type and enter: `scrontab -e`
3. Now you can set up the following entry in the editor that opens. What it does is run the SCRIPT.sh file every 120 minutes of every day of every week of every month of every year (`*/120 * * * *` in cron syntax):
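A sketch of such an entry (QOS, time limit, and paths are placeholders; `#SCRON` lines take sbatch-style options, so check NERSC's scrontab docs for what your use case needs):

```
#SCRON --qos=<qos_name>
#SCRON --time=00:10:00
*/120 * * * * $HOME/SCRIPT.sh >> $HOME/scrontab.log 2>&1
```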
Set up your SCRIPT.sh like the following (as a suggestion, you can simply put this file, and the log file which keeps a record of submission states, in your home folder):
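A sketch of such a 3-line file (the production folder path, qlaunch options, and log location are placeholders; use whatever launch command matches your own FireWorks configuration):

```bash
#!/bin/bash
cd /path/to/PRODUCTION_FOLDER                          # folder holding your FW_config.yaml, etc.
qlaunch rapidfire -m 50 >> $HOME/submission.log 2>&1   # submit jobs and keep a log
```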
The last line of this 3-line file is what actually submits your jobs from your production folder, using the settings in your FW_config.yaml file. See the atomate documentation for more info.
Please make sure to set your PRODUCTION_FOLDER under /global/project/projectdirs/, which has much more space than your home folder and is also backed up. Keep an eye on how close you are to the disk space and file count limits by checking https://my.nersc.gov/ periodically.
Jupyter notebooks are quickly becoming an indispensable tool for doing computational science. In some cases, you might want to (or need to) harness NERSC computing power inside of a Jupyter notebook. To do this, you can use NERSC's Jupyterhub system at https://jupyter.nersc.gov/. These notebooks are run on special Jupyter nodes on Perlmutter, and can also submit jobs to the batch queues (see [here](https://docs.nersc.gov/services/jupyter/) for details). All of your files and the project directory will be accessible from the Jupyterhub, but your conda envs won't be available until you do some configuration.
To set up a conda environment so it is accessible from the Jupyterhub, activate the environment, install ipykernel (“pip install ipykernel”), and register the environment as a kernel. More info can be found at http://bit.ly/2yoKAzB.
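A minimal sketch, assuming an environment named `my_env` (the name and display name are placeholders):

```bash
conda activate my_env
pip install ipykernel
python -m ipykernel install --user --name my_env --display-name "Python (my_env)"
```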
DISCLAIMER: Only use job packing if you have trouble with typical job submission. The following tip is not 100% guaranteed to work, and is based on limited, subjective experience on Cori. Talk to Alex Dunn (ardunn@lbl.gov) for help if you have trouble.
The Cori queue system can be unreasonably slow when submitting many (e.g., hundreds, thousands) of small (e.g., single node or 2 nodes) jobs with qos-normal priority on Haswell. In practice, we have found that the Cori job scheduler will give your jobs low throughput if you have many jobs in queue, and you will often only be able to run 5-30 jobs at a time, while the rest wait in queue for far longer than originally expected (e.g., weeks). While there is no easy way to increase your queue submission rate (AFAIK), you can use FireWorks job-packing to “trick” Cori’s SLURM scheduler into running many jobs in serial on many parallel compute nodes with a single queue submission, vastly increasing throughput.
You can use job packing with the “multi” option to rlaunch. This command launches N parallel python processes on the Cori scheduling node, each of which runs a job using M compute nodes.
The steps to job packing are: 1. Edit your my_qadapter.yaml file to reserve N * M nodes for each submission. For example, if each of your jobs takes M = 2 nodes, and you want a N = 10 x speedup, reserve 20 nodes per queue submission. 2. Change your rlaunch command to:
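A sketch using the N = 10 example above (the config path and timeout value are placeholders; --nlaunches and --timeout are discussed below):

```bash
rlaunch -c /path/to/config multi 10 --nlaunches 0 --timeout 169200
```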
To have each FireWorks process run as many jobs as possible in serial before the walltime, use the --nlaunches 0 option. To prevent FireWorks from submitting jobs with little walltime left (causing jobs to frequently get stuck as “RUNNING”), set the --timeout option. Make sure --timeout is set so that even a long running job submitted at the end of your allocation will not run over your walltime limit. Your my_qadapter.yaml should then have something similar to the following lines:
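For illustration, the relevant lines might look like this under the N = 10, M = 2 assumptions above (other required fields omitted; values are placeholders):

```yaml
nodes: 20
walltime: '48:00:00'
rocket_launch: rlaunch -c /path/to/config multi 10 --nlaunches 0 --timeout 169200
```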
Typically, setting N <= 10 will give you a good N-times speedup with no problems. There are no guarantees, however, when N > 10-20. Use N > 50 at your own risk!
By default, premium QOS access is turned off for everyone in the group. When there is a scientific emergency (for example, you need to complete a calculation ASAP for a meeting with collaborators the next day), premium QOS can be utilized. In such cases, please contact Howard (hli98@lbl.gov or on Slack) to request premium QOS access. The access will be then turned off automatically after three weeks or after the emergency has been dealt with.