How to get a Project Account on Lawrencium
The Lawrencium cluster is open to all Berkeley Lab researchers needing access to high performance computing. Research collaborations are also welcome provided that there is an LBNL PI.
LBNL PIs wanting to obtain access to Lawrencium for their research project will need to complete the Requirements Survey, giving the details of the research activity along with a list of anticipated users. A unique group name will be created for the project and its associated users. This group name will be used to set up allocations and report usage.
There are three primary ways to obtain access to Lawrencium:
1. LBNL PIs: requesting a block of no-cost computing time via a PI Computing Allowance (PCA). This option is currently offered to all eligible Berkeley Lab PIs. For additional details, please see PI Computing Allowance.
2. Condo projects: purchasing and contributing Condo nodes to the cluster. This option is open to any Berkeley Lab staff, and provides ongoing, priority access to you and your research affiliates who are members of the Condo. For details, please see Condo Cluster Service.
3. Recharge use: Berkeley Lab researchers can use the Lawrencium cluster at a minimal recharge rate, currently about $0.01/SU.
To request a PCA, Condo, or Recharge project on Lawrencium, please fill out the Lawrencium access request form. Make sure to choose the desired project type on the form.
If the projectID associated with your project account on Lawrencium expires or becomes invalid, you can request a projectID change using this form.
Computer Time: We are currently not using an allocation process to assign compute time to individual projects. Instead, usage and priority are regulated by a scheduler policy intended to provide a level of fairness across users. If demand exceeds supply, a committee of scientific division representatives will review the need for allocations.
Cost: At this time, there is a charge of $0.01 per SU (Service Unit) for compute cycles. Newer hardware is charged at the rate of 1 SU per processor core-hour. Compute cycles on older hardware are charged at less than 1 SU per core-hour to account for differences in compute performance. Currently the charges are as follows:
| System | Rate | Compute Node Description |
|--------|------|--------------------------|
| LR6 | 1 SU per core-hr | Intel Xeon Gold 6130 – 32-core nodes |
| LR5 | 0.75 SU per core-hr | Intel Broadwell 28-core nodes |
| LR4 | 0.75 SU per core-hr | Intel Haswell 24-core nodes |
| LR3 | 0.50 SU per core-hr | Intel SandyBridge 16-core and IvyBridge 20-core nodes |
| CF1 | 0.40 SU per core-hr | Intel Xeon Phi – 64-core nodes |
| ES1 | 1 SU per core-hr | Intel Xeon ES – 8-core GPU nodes (1080 Ti and V100) |
| CM1 | 1 SU per core-hr | AMD EPYC – 48-core nodes |
There is a nominal charge of $25/mo/user for the use of Lawrencium and UX8, which covers the costs of home directory storage and backups. Users should note that their jobs are allocated resources by the node, so, for example, a job running on a 32-core LR6 node will be charged 32 SUs/hr for the use of that node. Similarly, a job running on a 28-core LR5 node will be charged 28 cores x 0.75 SU per core-hr = 21 SUs/hr. Account fees and CPU usage will appear as LRCACT and LRCCPU in the LBL Cost Browser.
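As a quick illustration of the per-node accounting above, the following Python sketch computes the SUs charged per hour for a job; the core counts and rates come from the table, while the function and dictionary names are illustrative placeholders, not part of any official LBNL tooling.

```python
# Illustrative sketch of per-node SU accounting (not an official LRC tool).
# Core counts and SU rates are taken from the rate table above.

NODE_TYPES = {
    # system: (cores per node, SU per core-hour)
    "LR6": (32, 1.00),
    "LR5": (28, 0.75),
    "LR4": (24, 0.75),
    "CM1": (48, 1.00),
}

def job_su_per_hour(system: str, nodes: int = 1) -> float:
    """SUs charged per wall-clock hour: whole nodes x cores/node x SU rate."""
    cores, rate = NODE_TYPES[system]
    return nodes * cores * rate

if __name__ == "__main__":
    print(job_su_per_hour("LR6"))  # 32.0 SUs/hr for one LR6 node
    print(job_su_per_hour("LR5"))  # 21.0 SUs/hr for one LR5 node
```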
Storage: Home directory space has a quota of 10GB per user. Users may also use the /clusterfs/lawrencium shared filesystem, which does not have a quota; this file system is intended for short-term use and should be considered volatile. Backups are not performed on this file system. Data is subject to a periodic purge policy wherein any files not accessed within the last 14 days will be deleted. Users should back these files up to external permanent storage as soon as they are generated on the cluster.
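Because purged files cannot be recovered, it can help to check which files have aged past the 14-day access-time window. The sketch below is a minimal example, assuming a per-user directory under /clusterfs/lawrencium and that the filesystem records access times; substitute your own path.

```python
# Minimal sketch: list files whose last access time (atime) is older than the
# 14-day purge window described above, i.e. likely purge candidates.
# The directory below is an assumed per-user path; substitute your own.
import os
import time
from pathlib import Path

PURGE_AGE_DAYS = 14
SCRATCH = Path("/clusterfs/lawrencium") / os.environ.get("USER", "")

def purge_candidates(root: Path, age_days: int = PURGE_AGE_DAYS):
    cutoff = time.time() - age_days * 86400
    for path in root.rglob("*"):
        if path.is_file() and path.stat().st_atime < cutoff:
            yield path

if __name__ == "__main__":
    for f in purge_candidates(SCRATCH):
        print(f)
```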
Lustre: A Lustre parallel file system is also available to Lawrencium cluster users. The file system is built with 4 OSS servers and 15 OSTs, with a capacity of 1.8PB. The default striping is set to 4 OSTs with a stripe size of 1 MB. All Lawrencium cluster users receive a directory created under /clusterfs/lawrencium with the above default stripe values set. This is a scratch file system, so it is mainly intended for storing large input or output files for running jobs and for all parallel I/O needs on the Lawrencium cluster. It is intended for short-term use and should be considered volatile. Backups are not performed on this file system. Data in scratch is subject to a periodic purge policy wherein any files not accessed within the last 14 days will be deleted. Users should back these files up to external permanent storage as soon as they are generated on the cluster.
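To make the default layout concrete, the sketch below shows how a file's byte offsets map to stripe slots under round-robin striping with the stated defaults (4 OSTs, 1 MB stripe size). This is only an illustration of the striping concept, not a Lustre API call; actual layouts on the cluster should be inspected with the standard lfs getstripe command.

```python
# Illustration of round-robin striping with the defaults stated above:
# stripe_count = 4 OSTs, stripe_size = 1 MiB. Not a Lustre API call.

STRIPE_COUNT = 4        # default number of OSTs per file
STRIPE_SIZE = 1 << 20   # 1 MiB stripe size

def stripe_slot(offset: int) -> int:
    """Return which stripe slot (0..STRIPE_COUNT-1) holds a given byte offset."""
    return (offset // STRIPE_SIZE) % STRIPE_COUNT

if __name__ == "__main__":
    for off in (0, 512 * 1024, 1 << 20, 3 * (1 << 20), 4 * (1 << 20)):
        print(f"byte offset {off:>9d} -> stripe slot {stripe_slot(off)}")
```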
Please acknowledge Lawrencium in your publications. A sample statement is:
This research used the Lawrencium computational cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory (Supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231)