This documentation provides directions for logging in to and using the Rocky 8 testbed provided for the Lawrencium cluster. Currently, users can run jobs on the lr3 CPU and es1_r8 GPU partitions that are part of this testbed.
Login: rocky8-login.lbl.gov
Login nodes: n0000, n0001
Software Module Farm:
For the Rocky 8 release, scientific applications, libraries, and utilities are provided through Lmod; users can load the application of their choice with the module command.
You can list the packages on the software module farm with the command “module avail”; the output looks like the following.
[spsoni@n0001 ~]$ module avail
--------- /global/software/rocky-8.x86_64/modules/compilers ---------
gcc/10.5.0 intel-oneapi-compilers/2023.1.0 nvhpc/23.9
gcc/11.4.0 (D) llvm/17.0.4
----------- /global/software/rocky-8.x86_64/modules/tools -----------
automake/1.16.5 imagemagick/7.1.1-11 qt/5.15.11
awscli/1.29.41 leveldb/1.23 rclone/1.63.1
bazel/6.1.1 lmdb/0.9.31 snappy/1.1.10
cmake/3.27.7 m4/1.4.19 spack/v0.21.1
code-server/4.12.0 matlab/r2022a swig/4.1.1
eigen/3.4.0 mercurial/6.4.5 tcl/8.6.12
emacs/29.1 nano/7.2 tmux/3.3a
ffmpeg/6.0 ninja/1.11.1 unixodbc/2.3.4
gdal/3.7.3 parallel/20220522 valgrind/3.20.0
glog/0.6.0 proj/9.2.1 vim/9.0.0045
gmake/4.4.1 protobuf/3.24.3
----------- /global/software/rocky-8.x86_64/modules/langs -----------
anaconda3/2024.02-1
julia/1.10.2-11.4 (L)
lua/5.3.6-gcc-11.4.0
openjdk/11.0.20.1_1-gcc-11.4.0
python/3.10.12-gcc-11.4.0
python/3.11.6-gcc-11.4.0 (D)
r/4.3.0-gcc-11.4.0
rust/1.70.0-gcc-11.4.0
----------- /global/software/rocky-8.x86_64/modules/apps ------------
ml/pytorch/2.0.1 ml/tensorflow/2.14.0
ml/pytorch/2.2.2 (D) ml/tensorflow/2.15.0 (D)
The packages under the compiler tree are arranged hierarchically, meaning they are shown only after the corresponding compiler is loaded. For example, OpenMPI is built with the gcc compiler and therefore becomes visible only after loading a gcc module.
[spsoni@n0001 ~]$ module load gcc/11.4.0
[spsoni@n0001 ~]$ module av
-------- /global/software/rocky-8.x86_64/modules/gcc/11.4.0 ---------
antlr/2.7.7 netlib-lapack/3.11.0
cuda/11.8.0 openblas/0.3.24
cuda/12.2.1 (D) openmpi/4.1.3
cudnn/8.7.0.84-11.8 openmpi/4.1.6 (D)
cudnn/8.9.0-12.2.1 (D) rust/1.70.0 (D)
gsl/2.7.1 ucx/1.14.1
intel-oneapi-tbb/2021.10.0 udunits/2.2.28
An easier way to find the module you are looking for is the “module spider” command. The following output shows which versions of OpenMPI are available.
[spsoni@n0001 ~]$ module spider openmpi
-----------------------------------------------------------------
openmpi:
-----------------------------------------------------------------
Versions:
openmpi/4.1.3
openmpi/4.1.6
-----------------------------------------------------------------
To get instructions on loading a specific version, use the command “module spider openmpi/4.1.3”.
[spsoni@n0001 ~]$ module spider openmpi/4.1.3
-----------------------------------------------------------------
openmpi: openmpi/4.1.3
-----------------------------------------------------------------
You will need to load all module(s) on any one of the lines below
before the "openmpi/4.1.3" module is available to load.
gcc/10.5.0
gcc/11.4.0
According to these instructions, version 4.1.3 is built with gcc compiler versions 10.5.0 and 11.4.0. Select the compiler of your choice and then load openmpi/4.1.3.
[spsoni@n0001 ~]$ module load gcc/11.4.0 openmpi/4.1.3
[spsoni@n0001 ~]$ module list
Currently Loaded Modules:
1) gcc/11.4.0 2) ucx/1.14.1 3) openmpi/4.1.3
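With the compiler and MPI modules loaded, the OpenMPI compiler wrappers are available for building MPI codes. The sketch below assumes a source file named hello_mpi.c (a placeholder); large runs should go through Slurm rather than the login nodes.
# Compile an MPI program with the OpenMPI compiler wrapper
mpicc -o hello_mpi hello_mpi.c
# Quick smoke test on a few ranks; production runs belong in a batch job
mpirun -np 4 ./hello_mpi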
Similarly, some packages, such as GROMACS, require OpenMPI at build time; those packages are arranged under the OpenMPI hierarchy.
[spsoni@n0001 ~]$ module spider gromacs
-----------------------------------------------------------------
gromacs: gromacs/2023.3
-----------------------------------------------------------------
You will need to load all module(s) on any one of the lines below
before the "gromacs/2023.3" module is available to load.
gcc/10.5.0 openmpi/4.1.3
gcc/10.5.0 openmpi/4.1.6
gcc/11.4.0 openmpi/4.1.3
gcc/11.4.0 openmpi/4.1.6
Help:
GROMACS is a molecular dynamics package primarily designed for
simulations of proteins, lipids and nucleic acids.
[spsoni@n0001 ~]$ module load gcc/11.4.0 openmpi/4.1.3 gromacs/2023.3
[spsoni@n0001 ~]$ module list
Currently Loaded Modules:
1) gcc/11.4.0 5) intel-oneapi-tbb/2021.10.0
2) ucx/1.14.1 6) intel-oneapi-mkl/2023.2.0
3) openmpi/4.1.3 7) netlib-lapack/3.11.0
4) fftw/3.3.10 8) gromacs/2023.3
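With GROMACS and its prerequisites loaded, the MPI-enabled executable is typically named gmx_mpi (the exact binary name depends on how the module was built, so treat this as an assumption). A quick check:
# Confirm the GROMACS binary provided by the module is on PATH
gmx_mpi --version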
Python packages: Anaconda3 with Python 3.11.7 is available on the module farm. In addition, two other Python versions are provided with a minimal set of commonly used packages, including mpi4py, matplotlib, scipy, numpy, h5py, netcdf4, pandas, geopandas, ipython, and virtualenv.
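For example, the minimal Python installations can be used directly, and virtualenv (included in the list above) lets you add project-specific packages on top; the environment name myenv below is just a placeholder.
# Load the default Python module and confirm a few of the bundled packages import
module load python/3.11.6-gcc-11.4.0
python -c "import numpy, scipy, pandas; print(numpy.__version__)"
# Create and activate a virtual environment for additional packages
virtualenv ~/myenv
source ~/myenv/bin/activate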
R packages: R 4.3.0 is available to users, and some commonly used R packages are already installed alongside base R, including ….
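A quick way to check the R installation and see which packages are preinstalled (using the module name from the listing above):
# Load R from the module farm and list the packages it can see
module load r/4.3.0-gcc-11.4.0
Rscript -e 'print(R.version.string); print(rownames(installed.packages()))'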
Job Submission
Rocky8 testbed partitions:
CPU partition: lr3
GPU partition: es1_r8
Job submission QoSs:
CPU: lr_debug (max walltime=3 hours)
GPU: es_debug (max walltime=3 hours)
Make sure the right partition and QoS are used in your script.
Slurm Submission Script Example:
CPU:
#!/bin/bash
# Job name:
#SBATCH --job-name=test
#
# Account:
#SBATCH --account=account_name
#
# Partition:
#SBATCH --partition=lr3
#
# QoS:
#SBATCH --qos=lr_debug
#
# Number of MPI tasks needed for use case (example):
#SBATCH --ntasks=40
#
# Processors per task:
#SBATCH --cpus-per-task=1
#
# Wall clock limit (maximum 3 hours):
#SBATCH --time=00:00:30
#
## Command(s) to run (example):
module load gcc openmpi
mpirun ./a.out
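Save the script to a file (for example job.slurm, a placeholder name) and submit it with sbatch; squeue shows the job status.
sbatch job.slurm      # submit the batch script to the scheduler
squeue -u $USER       # list your pending and running jobs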
GPU:
#!/bin/bash
# Job name:
#SBATCH --job-name=test
#
# Account:
#SBATCH --account=account_name
#
# Partition:
#SBATCH --partition=es1_r8
#
# QoS:
#SBATCH --qos=es_debug
#
# Number of nodes:
#SBATCH --nodes=1
#
# Number of tasks (one for each GPU desired for use case) (example):
#SBATCH --ntasks=2
#
# Processors per task (please always specify the total number of processors as twice the number of GPUs):
#SBATCH --cpus-per-task=2
#
# Number of GPUs; this can be in the format “gpu:[1-4]”:
#SBATCH --gres=gpu:1
#
# Wall clock limit (maximum 3 hours):
#SBATCH --time=1:00:00
#
## Command(s) to run (example):
./a.out
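For short interactive tests on the GPU partition, a node can also be requested directly with srun, using the same partition and QoS described above; this is a sketch, and account_name is a placeholder.
# Request one GPU, two CPUs, and an interactive shell for up to one hour
srun --partition=es1_r8 --qos=es_debug --account=account_name \
     --ntasks=1 --cpus-per-task=2 --gres=gpu:1 --time=1:00:00 --pty bash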
Building Packages
You must rebuild your packages to work on the Rocky 8 platform. If your package requires a compiler, OpenMPI, or other libraries, first find the corresponding modules with the “module spider” command and then load them with “module load”.
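As a sketch of a typical rebuild (mypackage and the install path are placeholders), load the toolchain first and then configure and build against the Rocky 8 modules:
# Load the compiler, MPI, and build tools the package needs
module load gcc/11.4.0 openmpi/4.1.6 cmake/3.27.7
# Configure, build, and install into your home directory
cmake -S mypackage -B build -DCMAKE_INSTALL_PREFIX=$HOME/sw/mypackage
cmake --build build -j 8
cmake --install build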
Help
You may send us an email at hpcshelp@lbl.gov, and that will automatically open a ticket in our ticketing system.