An Overview of HPC User Documentation
There are three primary ways to obtain access to Lawrencium:
- PI Computing Allowance (PCA): a free allocation of 300K service units (SUs), renewable annually
- Condo: purchase compute nodes and contribute them to the Lawrencium cluster
- Recharge: usage charged at a minimal recharge rate, roughly $0.01/SU
You must have a user account to gain access to the Lawrencium cluster. To get started, see Request a User Account and Submit User Agreement.
You’ll need to generate and enter a one-time password (OTP) each time you log in. These passwords are generated by an application called Google Authenticator, which you can install and run on your smartphone and/or tablet. For instructions on setting up and using Google Authenticator, see Multi-Factor Authentication.

Once you have your PIN+OTP set up, you can log in to the cluster using an SSH client of your choice, or from a Linux/Mac terminal, as ssh $email@example.com. When prompted for your password, enter your PIN followed immediately by your OTP, with no spaces. For example, if your PIN is 0123 and your OTP is 456789, type 0123456789. Note that the characters won’t appear on the screen as you type.
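To illustrate, a login session looks roughly like the following (the address is the placeholder from above; substitute your own username and the cluster hostname):

```shell
# Connect to the cluster over SSH
ssh $email@example.com
# At the "Password:" prompt, type your PIN immediately followed by the
# current OTP from Google Authenticator, e.g. 0123456789 (nothing echoes).
```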
Data Movement and Storage
To transfer data from other computers into – or out of – your various storage directories, you can use protocols and tools like SCP, SFTP, FTPS, and Rsync. If you’re transferring lots of data, the web-based Globus Connect tool is typically your best choice: it can perform fast, reliable, unattended transfers. The LRC supercluster’s dedicated Data Transfer Node is lrc-xfer.lbl.gov. For more information on getting your data onto and off of Lawrencium, please see Data Transfer.
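As a sketch, here are common transfers through the Data Transfer Node (the username, file names, and remote paths are placeholders; adjust them to your own account and directories):

```shell
# Copy a single file from your local machine to your home directory on LRC
scp results.tar.gz myusername@lrc-xfer.lbl.gov:~/

# Copy a directory from LRC back to your local machine
scp -r myusername@lrc-xfer.lbl.gov:~/project/output ./output

# Synchronize a local directory to the cluster; rsync resends only
# files that have changed, and -P shows progress and allows resuming
rsync -avP ./dataset/ myusername@lrc-xfer.lbl.gov:~/dataset/
```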
Software Module Farm and Environment Modules
A large collection of software packages and tools, known as the Software Module Farm, is already built and provided for your use on the cluster. You can list these packages and load/unload them via Environment Modules commands. See Accessing Software and Using Environment Modules.
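A typical module workflow looks like this (the module name and version below are examples; run module avail to see what is actually installed on the cluster):

```shell
module avail              # list all available software modules
module load gcc/11.3.0    # load a specific package/version (example name)
module list               # show the modules currently loaded in your shell
module unload gcc/11.3.0  # unload a module when you're done with it
module purge              # unload all loaded modules at once
```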
When you log into a cluster, you’ll land on one of several login nodes. Here you can edit scripts, compile programs, and perform other lightweight tasks. However, you should not run applications or computational work on the login nodes, which are shared with other cluster users. Instead, use the SLURM job scheduler to submit jobs that will run on one or more of the cluster’s many compute nodes. See Submitting Jobs and Monitoring Jobs.
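A minimal batch script might look like the following sketch. The partition, account, and QOS names here are assumptions for illustration; check the LRC documentation or ask your PI for the values that apply to your project:

```shell
#!/bin/bash
#SBATCH --job-name=my_job
#SBATCH --partition=lr6         # example partition name; verify before use
#SBATCH --account=pc_myproject  # example PCA account; use your own
#SBATCH --qos=lr_normal         # example QOS; verify before use
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --time=00:30:00         # walltime limit (HH:MM:SS)

# Commands below run on the allocated compute node, not the login node
module load python
python my_script.py
```

Submit the script with sbatch my_job.sh, check its status with squeue -u $USER, and cancel it with scancel followed by the job ID.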
Contact us via firstname.lastname@example.org