
The IT Division’s ScienceIT team is offering Lawrencium Training for the Berkeley Lab community. Along with details on the virtual training, this article highlights recent system and infrastructure updates. Developed and managed by the ScienceIT team, the Lawrencium cluster is a powerful high-performance computing resource at Berkeley Lab, consisting of more than 1,300 compute nodes with approximately 30,000 computational cores. Available to all Lab researchers, Lawrencium enables scientists to tackle some of today’s most complex scientific challenges.
Upcoming Training: August 11
Mark your calendars for an exclusive Lawrencium Training Session on Monday, August 11, from 10:30 am to 12:00 pm (virtual). This hands-on workshop will cover:
- Account setup and cluster access
- Lawrencium architecture and capabilities
- AI coding using Ollama LLM models with Open OnDemand
- Generative AI coding workflows
Whether you’re new to Lawrencium or looking to expand your skills, this training is a great opportunity to learn directly from our experts.
[Register now] to secure your spot and receive the Zoom link.
AMD LR8 Partition
The ScienceIT team recently unveiled the next-generation LR8 partition. This system features fifty-one nodes, each with dual AMD EPYC 9534 processors and 768GB RAM; forty-two nodes are PI-owned, and nine are institutional nodes contributed by the IT Division. This state-of-the-art infrastructure delivers exceptional performance for compute-intensive workloads and scientific simulations.
To access LR8, please update your Slurm job submission script with the following:
#SBATCH --partition=lr8
#SBATCH --qos=lr8_normal
As a core-based partition, LR8 provides flexible and efficient resource allocation so you can tailor jobs to your CPU core requirements.
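To illustrate, here is a minimal LR8 job script sketch. The job name, account, core count, walltime, and module names below are placeholders, not values from this announcement; substitute your own project account and check `module avail` on the cluster for the software you need.

```shell
#!/bin/bash
# Minimal LR8 job script sketch. Account name, core count, walltime,
# and module names are placeholders -- adjust them for your project.
#SBATCH --job-name=lr8_example
#SBATCH --partition=lr8
#SBATCH --qos=lr8_normal
#SBATCH --account=pc_myproject   # placeholder: your project account
#SBATCH --ntasks=32              # core-based: request only the cores you need
#SBATCH --time=01:00:00

module load gcc openmpi          # assumed module names; verify with `module avail`
srun ./my_mpi_app
```

Because LR8 is core-based, requesting 32 tasks as above allocates 32 cores rather than whole nodes, so smaller jobs are not billed for idle cores.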
An additional twenty institutional AMD nodes will be added soon—stay tuned for even more capacity!
User and Project Accounts
On the MyLRC portal, the user agreement form has been updated to include a new requirement for a “Valid Project ID,” so please take a moment to review the revised terms if you haven’t already. We’ve also added an Announcements section to the portal’s welcome page to help keep you informed about the latest updates. As the fiscal year comes to a close, a reminder that PI computing allowances (PCA) will be renewed in September for FY2026. Until then, if you use up your free service units, you can request a recharge account through the MyLRC portal to continue your work without interruption.
AI Resources on Lawrencium
Lawrencium Cluster users should also try the following free AI tools offered by the ScienceIT team:
- CBorg AI Portal: Explore AI services at cborg.lbl.gov.
- Ollama LLM Models: Now available through JupyterLab and VSCode via Open OnDemand.
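As a quick illustration of using the Ollama models from a JupyterLab session launched through Open OnDemand, here is a small Python sketch. It assumes Ollama's standard REST API listening on `localhost:11434`; the model name `llama3` is a placeholder, so run `ollama list` to see which models are actually installed.

```python
# Sketch: querying a local Ollama server from JupyterLab on Lawrencium.
# Assumes Ollama's standard REST API on localhost:11434; the model name
# "llama3" is a placeholder -- check `ollama list` for installed models.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example usage (only on a node where an Ollama server is running):
# print(ask("llama3", "Write a Slurm script that requests 4 CPU cores."))
```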
Additional Lawrencium Cluster Updates
- Additional H100 Node on ES1 GPU Partition: We’ve added an 8x H100 node to the ES1 partition, expanding GPU capacity to a total of four 8x H100 GPUs with 56 CPU cores (dual Intel Xeon Platinum 8480) and 1TB RAM. Each H100 offers 80GB HBM2e memory and NDR fabric connectivity—ideal for AI/ML and HPC workloads.
- Introducing the ES0 GPU Partition, Free GPU Computing: The new ES0 partition offers 20 nodes, each with 4x NVIDIA RTX 2080 Ti GPUs, free for all Lawrencium users. With 11GB memory and 4,352 CUDA cores per card, ES0 is perfect for deep learning, molecular dynamics, and GPU programming. Use it for learning, prototyping, or early-stage research: just add #SBATCH --partition=es0 to your scripts and start computing at no cost.
- Free CPU Computing with LR4 Partition: The LR4 partition (Intel Xeon E5-2670v3, 24 cores/node) is available at no charge, making it a great option for code development, data analysis, and preliminary research.
- CM2 Partition Open to All Users: The CM2 partition, featuring 12 nodes with 64-core AMD EPYC 7452 processors, is now open to the full Lawrencium community via institutional node sharing, offering strong performance for parallel computing.
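For the free ES0 GPU partition above, a job script might look like the sketch below. The GPU request syntax, QOS, and walltime are assumptions rather than values from this announcement; consult the ScienceIT documentation for the exact `--gres` and `--qos` settings for your allocation.

```shell
#!/bin/bash
# Sketch of an ES0 GPU job script. The gres syntax, qos, and walltime
# are assumptions -- verify them against the Lawrencium documentation.
#SBATCH --job-name=es0_example
#SBATCH --partition=es0
#SBATCH --ntasks=1
#SBATCH --gres=gpu:1             # request one RTX 2080 Ti (assumed syntax)
#SBATCH --time=00:30:00

python train.py                  # your GPU workload
```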
The ScienceIT team is always available to help answer any questions and guide you through your use of the Lawrencium cluster. Here are some of the ways to reach out for help:
- AskUS ticketing system: Submit tickets to hpcshelp@lbl.gov for HPC support.
- Consulting: scienceit@lbl.gov
- Office Hours: Every Wednesday, 10:30 am–12:00 pm over Zoom
- Status Updates: Follow cluster status via the status page or email notifications.
- Documentation: Access detailed guides and tutorials on ScienceIT documentation pages.