The BALDUR cluster is part of the LBNL Supercluster and shares the common Supercluster infrastructure, including the system management software, Software Module Farm, scheduler, storage, and backend network management.
Login and Data Transfer:
- Login server: lrc-login.lbl.gov
- Data transfer server: lrc-xfer.lbl.gov
- Globus Online endpoint: lbnl#lrc
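For example, connecting and moving data might look like the following (replace `myuser` with your own username; the file names are placeholders):

```shell
# Log in to the cluster interactively
ssh myuser@lrc-login.lbl.gov

# Copy a local file to your HOME directory via the data transfer node
scp ./input.dat myuser@lrc-xfer.lbl.gov:/global/home/users/myuser/

# Large or long-running transfers are better handled through the
# Globus Online endpoint "lbnl#lrc" rather than scp.
```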
The following table lists the hardware configuration of the current generation of BALDUR.
| CLUSTER | NODES | NODE LIST | CPU | CORES/NODE | MEMORY | INTERCONNECT |
|---|---|---|---|---|---|---|
| baldur1 | 40 | n00[00-39].baldur1 | Intel Xeon X5550 | 8 | 24GB | QDR |
Storage and Backup:
BALDUR cluster users have access to the following storage systems; please familiarize yourself with them.
| NAME | LOCATION | QUOTA | BACKUP | ALLOCATION | DESCRIPTION |
|---|---|---|---|---|---|
| HOME | /global/home/users/$USER | 12GB | Yes | Per User | HOME directory for permanent data |
| GROUP-SW | /global/home/groups-sw/$GROUP | 200GB | Yes | Per Group | GROUP directory for software and data sharing, with backup |
| GROUP | /global/home/groups/$GROUP | 400GB | No | Per Group | GROUP directory for data sharing |
| SCRATCH | /global/scratch/$USER | none | No | Per User | SCRATCH directory on the Lustre high-performance parallel file system |
NOTE: The HOME, GROUP, and GROUP-SW directories are hosted on a highly reliable, enterprise-level BlueArc storage appliance. Because this appliance also provides storage for many other mission-critical file systems and is not designed for high-performance workloads, running large I/O-intensive jobs against these file systems can severely degrade the performance of every file system hosted on the device and affect hundreds of users. This behavior is therefore explicitly prohibited, and HPCS reserves the right to kill such jobs without notification once they are discovered. Jobs with significant I/O requirements should use the SCRATCH file system, which is designed specifically for that purpose.
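As a sketch, a job's heavy I/O can be staged on SCRATCH rather than HOME (the `myjob` directory and `results.out` file are placeholders):

```shell
# Run I/O-intensive work from the Lustre scratch file system, not from HOME
SCRATCH_DIR=/global/scratch/$USER/myjob
mkdir -p "$SCRATCH_DIR"
cd "$SCRATCH_DIR"

# ... run the I/O-intensive application here ...

# Afterwards, copy back only the results worth keeping to HOME
cp results.out /global/home/users/$USER/
```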
The BALDUR cluster uses SLURM as the scheduler to manage jobs. To use BALDUR resources, jobs must specify the "baldur" partition (`--partition=baldur`) along with the "baldur" account (`--account=baldur`). Currently there are no special limits on the "baldur" partition, so no QoS configuration is required; the default "normal" QoS is applied automatically. A standard fair-share policy with a decay half-life of 14 days (2 weeks) is enforced.
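A minimal batch script using these settings might look like this (the job name, node/task counts, time limit, and application name are illustrative):

```shell
#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=baldur
#SBATCH --account=baldur
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=01:00:00

# Run from SCRATCH so that job I/O stays off the BlueArc file systems
cd /global/scratch/$USER
srun ./my_application
```

Submit the script with `sbatch myjob.sh` and monitor it with `squeue -u $USER`.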
| PARTITION | ACCOUNT | NODES | NODE LIST | NODE FEATURES | SHARED | QOS | QOS LIMIT |
|---|---|---|---|---|---|---|---|
| baldur | baldur | 40 | n00[00-39].baldur1 | | | normal | none |
BALDUR uses the Software Module Farm and Environment Modules to manage the cluster-wide software installation.
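For instance, software provided through the module farm is typically discovered and loaded as follows (`gcc` is only an example module name; run `module avail` to see what is actually installed):

```shell
# List the software modules available on the cluster
module avail

# Load a module into the current shell environment (example module name)
module load gcc

# Show which modules are currently loaded
module list
```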
Please visit here for the live status of the BALDUR cluster.