
Lawrencium is an x86 Intel-processor general-purpose cluster suitable for running a wide diversity of scientific applications. The system is named after chemical element 103, which was discovered at Lawrence Berkeley National Laboratory in 1961 and named in honor of Ernest Orlando Lawrence, the inventor of the cyclotron. The original Lawrencium system was built as a 200-node cluster and debuted at #500 on the Top500 supercomputing list in Nov 2008.
Today Lawrencium consists of multiple generations of compute nodes, with the Lr7 partition being the most recent addition and the Lr3 partition the oldest still in production. The partitions currently in production are described below.
Lr7 consists of 100 Dell PowerEdge C6520 servers, each equipped with two 28-core Intel Xeon Gold 6330 (Ice Lake) processors and 256 GB of memory. All Lr7 compute nodes are interconnected with a Mellanox HDR InfiniBand fabric at 100 Gb/s. The core-based scheduling configuration allows effective use of all 56 cores on each node.
Lr6 is a 260-node cluster partition consisting of 132 32-core Skylake compute nodes and 128 40-core Cascade Lake compute nodes connected with a Mellanox EDR InfiniBand fabric. Each server blade is equipped with two Skylake or Cascade Lake processors on a single board configured as an SMP unit. Nodes with either 96 GB or 192 GB of memory are available to users.
Lr5 consists of 192 28-core Broadwell compute nodes connected with a Mellanox FDR InfiniBand fabric. Each node is a Dell PowerEdge C6320 server blade equipped with two 14-core Intel Xeon Broadwell processors (28 cores in all) on a single board configured as an SMP unit. Each core runs at 2.4 GHz and supports 16 floating-point operations per clock period, giving a peak performance of 1,075 GFLOPS per node. Each node contains 64 GB of 2400 MHz memory.
Lr4 consists of 144 24-core Haswell compute nodes connected with a Mellanox FDR InfiniBand fabric. Each node is a Dell PowerEdge C6320 server blade equipped with two 12-core Intel Xeon Haswell processors (24 cores in all) on a single board configured as an SMP unit. Each core runs at 2.3 GHz and supports 16 floating-point operations per clock period, giving a peak performance of 883 GFLOPS per node. Each node contains 64 GB of 2133 MHz memory.
Lr3 is a 318-node cluster partition consisting of 282 16-core compute nodes and 36 20-core compute nodes connected with a Mellanox FDR InfiniBand fabric. Each of the 16-core nodes is a Dell PowerEdge C6220 server blade equipped with two 8-core Intel Xeon Sandy Bridge processors (16 cores in all) on a single board configured as an SMP unit. Each core runs at 2.6 GHz and supports 8 floating-point operations per clock period, giving a peak performance of 20.8 GFLOPS per core, or about 333 GFLOPS per node. Each node contains 64 GB of 1600 MHz memory. The later 20-core C6220 nodes are similar, but use 10-core Ivy Bridge processors running at 2.5 GHz and have 64 GB of 1866 MHz memory.
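The per-node peak figures quoted for Lr5, Lr4, and Lr3 follow from a simple product: cores per node × clock frequency × floating-point operations per clock period. Below is a minimal Python sketch of that arithmetic, using only the core counts, clock rates, and FLOPs-per-cycle values stated above (the partition labels in the dictionary are purely illustrative, not official benchmark results):

```python
# Per-node peak performance derived as:
#   cores/node x clock (GHz) x floating-point operations per clock period
# Values are taken from the partition descriptions above.
partitions = {
    # name: (cores per node, clock in GHz, FLOPs per clock period)
    "Lr5 (Broadwell)":    (28, 2.4, 16),
    "Lr4 (Haswell)":      (24, 2.3, 16),
    "Lr3 (Sandy Bridge)": (16, 2.6, 8),
}

for name, (cores, ghz, flops_per_cycle) in partitions.items():
    per_core = ghz * flops_per_cycle   # GFLOPS per core
    per_node = cores * per_core        # GFLOPS per node
    print(f"{name}: {per_core:.1f} GFLOPS/core, {per_node:.1f} GFLOPS/node")
```

Running this prints 1075.2, 883.2, and 332.8 GFLOPS per node, which round to the 1,075, 883, and 333 GFLOPS figures quoted above.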