
ScienceIT’s work was referenced in NVIDIA’s post on the ALS deployment of the Accelerator Assistant, a large language model (LLM)-driven system that keeps X-ray research on track. The article mentions not only CBorg but also Ollama, an LLM service that has been integrated into the Lawrencium high-performance computing (HPC) cluster at Berkeley Lab and is managed by ScienceIT.
The Accelerator Assistant in use at the ALS is powered by an NVIDIA H100 GPU that harnesses CUDA for accelerated inference. The system has dramatically reduced setup time, by up to 100× for multistage physics experiments, and offers a transparent, secure blueprint for applying AI to manage complex scientific infrastructure while maintaining human oversight.