The VULCAN cluster is part of the LBNL Supercluster and shares its common infrastructure, including the system management software, software module farm, scheduler, storage, and backend network management.
Login and Data Transfer:
VULCAN uses One-Time Password (OTP) authentication for all of the services listed below. Please also refer to the Data Transfer page for additional information.
- Login server: lrc-login.lbl.gov
- Data transfer server: lrc-xfer.lbl.gov
- Globus Online endpoint: lbnl#lrc
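As a sketch, a typical login and transfer session might look like the following (the file name and destination path are illustrative; you will be prompted for your OTP at each connection):

```bash
# Log in to the cluster interactively
ssh $USER@lrc-login.lbl.gov

# Copy a local file to your scratch space through the data transfer node
scp results.tar.gz $USER@lrc-xfer.lbl.gov:/global/scratch/$USER/
```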
Hardware Configuration:
The VULCAN cluster has a mixture of CPU architectures and memory configurations, so please be aware of them and choose nodes appropriately, together with the scheduler options listed in the Scheduler Configuration section. Except for the nodes designated below, compute nodes are interconnected through multiple high-performance QLogic 40 Gbps QDR InfiniBand switches. You can also query these attributes directly from the scheduler, as shown after the table below.
PARTITION | NODES | NODE LIST | CPU | CORES | MEMORY | GPU | INFINIBAND
---|---|---|---|---|---|---|---
vulcan | 246 | | | | | |
vulcan_gpu | 2 | n0[246-247].vulcan0 | INTEL XEON E5-2650 | 16 | 64 GB | 4x Tesla K20Xm |
vulcan_c20 | 28 | n0[248-275].vulcan0 | INTEL XEON E5-2670 v2 | 20 | 64 GB | | QDR
vulcan_gpu | 2 | n0[324-325].vulcan0 | INTEL XEON E5-2623 v3 | 8 | 64 GB | 4x K80 | QDR
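To check the current partition, node, CPU, memory, and feature configuration directly from the scheduler, a query along these lines can be used (output columns: partition, node count, CPUs per node, memory in MB, node features):

```bash
# List node counts, CPUs, memory, and features for the VULCAN partitions
sinfo -p vulcan,vulcan_gpu,vulcan_c20 -o "%P %D %c %m %f"
```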
Storage and Backup:
VULCAN cluster users are entitled to access the following storage systems; please familiarize yourself with them.
NAME | LOCATION | QUOTA | BACKUP | ALLOCATION | DESCRIPTION
---|---|---|---|---|---
HOME | /global/home/users/$USER | 12 GB | Yes | Per User | HOME directory for permanent data storage
SCRATCH | /global/scratch/$USER | none | No | Per User | SCRATCH directory on a Lustre high-performance parallel file system over Ethernet
MOTEL | /clusterfs/vulcan/motel/$USER | none | No | Per User | Long-term storage of bulk data
MOTEL2 | /clusterfs/vulcan/motel2/$USER | none | No | Per User | Long-term storage of bulk data
PSCRATCH | /clusterfs/vulcan/pscratch/$USER | none | No | Per User | SCRATCH directory on a Lustre high-performance parallel file system over InfiniBand
NOTE: The HOME, MOTEL, and MOTEL2 directories are located on a highly reliable, enterprise-level BlueArc storage device. This appliance also provides storage for many other mission-critical file systems and is not designed for high-performance applications. Running large I/O-dependent jobs on these file systems can severely degrade performance for every file system hosted on the device and affect hundreds of users, so this behavior is explicitly prohibited; HPCS reserves the right to kill such jobs without notification once discovered. Jobs with significant I/O requirements should use the SCRATCH or PSCRATCH file systems, which are designed specifically for that purpose.
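As a minimal sketch, an I/O-heavy job should stage its data onto and run from SCRATCH or PSCRATCH rather than HOME (the directory and file names below are illustrative):

```bash
# Stage input data onto the high-performance SCRATCH file system
mkdir -p /global/scratch/$USER/myjob
cp ~/inputs/config.dat /global/scratch/$USER/myjob/

# Run the job from SCRATCH so the heavy I/O stays off the BlueArc device
cd /global/scratch/$USER/myjob
```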
Scheduler Configuration:
The VULCAN cluster uses SLURM as the scheduler to manage jobs. To use VULCAN resources, one of the partitions "vulcan", "vulcan_gpu", or "vulcan_c20" must be specified (e.g., "--partition=vulcan") along with the account "vulcan" ("--account=vulcan"). Currently no special limitations are imposed on the VULCAN cluster, so no QoS configuration is required (the default "normal" QoS is applied automatically). If a debug job is desired, the "vulcan_debug" QoS should be specified ("--qos=vulcan_debug") so that the scheduler can adjust the job priority accordingly; note that only the "vulcan" partition is configured to run debug jobs. A standard fair-share policy with a decay half-life of 14 days (2 weeks) is enforced. An example job script is given after the table below.
PARTITION | ACCOUNT | NODES | NODE LIST | NODE FEATURES | SHARED | QOS | QOS LIMIT
---|---|---|---|---|---|---|---
vulcan | vulcan | 246 | | | Exclusive | normal, vulcan_debug |
vulcan_gpu | vulcan | 2 | n0[246-247].vulcan0 | vulcan, vulcan_c16, vulcan_m64, vulcan_k20 | Exclusive | vulcan_gpu | 2 nodes max, 1:00:00 wallclock limit
vulcan_c20 | vulcan | 28 | n0[248-275].vulcan0 | vulcan, vulcan_c20, vulcan_m64 | Exclusive | normal | no limit
vulcan_gpu | vulcan | 2 | n0[324-325].vulcan0 | vulcan, vulcan_c8, vulcan_m64, vulcan_k80 | Exclusive | vulcan_gpu | 2 nodes max, 1:00:00 wallclock limit
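As an example, a minimal debug job script under these settings might look like the following (the job name, node count, wall time, and workload are illustrative):

```bash
#!/bin/bash
#SBATCH --job-name=vulcan_test    # illustrative job name
#SBATCH --partition=vulcan        # only the vulcan partition runs debug jobs
#SBATCH --account=vulcan
#SBATCH --qos=vulcan_debug        # omit for production jobs to get the default "normal" QoS
#SBATCH --nodes=1
#SBATCH --time=00:30:00           # illustrative wall time

# Illustrative workload: report which node the job landed on
srun hostname
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.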
Software Configuration:
VULCAN uses the Software Module Farm and Environment Modules to manage cluster-wide software installations.
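The standard Environment Modules commands apply; for example (the module name below is illustrative, so use `module avail` to see what is actually installed):

```bash
module avail        # list all software provided by the module farm
module load gcc     # load a module into your environment (name is illustrative)
module list         # show currently loaded modules
```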
Cluster Status:
Please visit the cluster status page for the live status of the VULCAN cluster.
Additional Information:
Please submit a ticket to hpcshelp@lbl.gov or send email to ScienceIT@lbl.gov for any inquiries or service requests.