
AI is rapidly changing how we process information, solve problems, and advance scientific discovery. One of the key challenges in this space is few-shot learning, which allows AI models to learn new tasks with only a small number of training examples. This approach is especially valuable in research areas where labeled data is limited or costly to produce.
Members of Berkeley Lab’s Science IT team, Fengchen Liu and Gary Jung, recently presented their paper “Exploring Few-Shot Learning: Fine-Tuning vs. In-Context Learning and Parameter-Efficient Adaptations” at the ACM Practice and Experience in Advanced Research Computing (PEARC) conference. Their work takes a close look at three main methods for few-shot learning—fine-tuning, in-context learning, and parameter-efficient tuning through LoRA (Low-Rank Adaptation)—and compares how well they perform across different tasks, including classification, question answering, and summarization.
Breaking Down the Approaches
The first method, fine-tuning, means updating all of the parameters of a pre-trained model on examples from the new task. It usually gives strong results, but it requires a lot of computing power and storage, since every weight in the model must be updated and a full copy of the adapted model must be kept.
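The idea can be illustrated with a toy model. This sketch is not the paper's setup; it simply shows, on a tiny linear model with made-up data, what "updating all the parameters" means: every weight and the bias receive gradient updates.

```python
import numpy as np

# Illustrative sketch of full fine-tuning on a toy linear model.
# The "pre-trained" weights and the few-shot data are placeholders.

rng = np.random.default_rng(0)

W = rng.normal(size=(4,))   # "pre-trained" weights for 4 input features
b = 0.0                     # "pre-trained" bias

# A few-shot dataset: 5 labeled examples for the new task.
X = rng.normal(size=(5, 4))
y = X @ np.array([1.0, -2.0, 0.5, 3.0]) + 0.1

mse_before = float(np.mean((X @ W + b - y) ** 2))

# Full fine-tuning: gradient steps update *all* parameters (W and b).
lr = 0.05
for _ in range(2000):
    err = X @ W + b - y
    W -= lr * (X.T @ err) / len(y)   # update every weight
    b -= lr * err.mean()             # update the bias too

mse_after = float(np.mean((X @ W + b - y) ** 2))
print(mse_before, mse_after)
```

In a real large language model the loop is the same in spirit, but the parameter count runs into the billions, which is where the computing cost comes from.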
The second approach, in-context learning, supplies labeled examples directly in the prompt, so the model adapts its behavior without any changes to its weights. While this method is efficient, the study showed it did not perform as well as the other options.
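In practice this amounts to prompt construction. The sketch below shows one generic few-shot prompt format for a sentiment-classification task; the template and example texts are illustrative, not the exact ones used in the paper.

```python
# Sketch of in-context learning: the model's weights are never touched.
# Labeled demonstrations are written into the prompt, and a frozen
# language model is asked to continue the pattern.

def build_few_shot_prompt(examples, query):
    """Assemble a classification prompt from (text, label) demonstrations."""
    lines = []
    for text, label in examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

demos = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = build_few_shot_prompt(demos, "A beautifully shot, moving film.")
print(prompt)
```

The assembled prompt would then be sent to a frozen model, and the text it generates after the final "Sentiment:" is taken as the prediction; no training step ever runs.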
Finally, the team studied LoRA, or Low-Rank Adaptation, a technique that freezes the pre-trained weights and trains only small low-rank matrices added alongside them. LoRA offers a good middle ground—strong results with far fewer computing resources and storage requirements than full fine-tuning.
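The core mechanism fits in a few lines. In this sketch (dimensions, rank, and scaling are illustrative, not the paper's configuration), a frozen weight matrix W gets a trainable low-rank update B @ A, following the standard LoRA recipe of initializing B to zero so the adapted model starts out identical to the base model.

```python
import numpy as np

# Minimal LoRA sketch: W is frozen; only the small matrices A and B train.

rng = np.random.default_rng(0)

d_out, d_in, r = 64, 64, 4            # rank r is much smaller than d
W = rng.normal(size=(d_out, d_in))    # frozen pre-trained weights

A = rng.normal(scale=0.01, size=(r, d_in))  # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init
alpha = 8                                   # LoRA scaling hyperparameter

def forward(x):
    # Effective weight is W + (alpha / r) * B @ A; W itself never changes.
    return x @ (W + (alpha / r) * (B @ A)).T

# With B initialized to zero, the adapted model matches the base model.
x = rng.normal(size=(2, d_in))
assert np.allclose(forward(x), x @ W.T)

# Parameter counts: full fine-tuning vs. this rank-4 adapter.
full_params = W.size            # 64 * 64 = 4096
lora_params = A.size + B.size   # 4*64 + 64*4 = 512
print(full_params, lora_params)
```

This is why LoRA is so much cheaper to train and store: only A and B (here 512 values instead of 4,096) need gradients and checkpoints, and the savings grow dramatically at the scale of real language models.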
Why This Matters
This work is important because it shows how researchers can choose the right approach depending on their needs and resources. “Our study shows that with smart choices, researchers don’t always need the most powerful systems to get meaningful results,” said Liu. “Methods like LoRA open the door to broader use of AI in science, especially in resource-constrained environments.”
Presented at PEARC
The paper was presented at PEARC25, a major conference sponsored by the Association for Computing Machinery (ACM). PEARC brings together experts from academia, government, and industry to share innovations in advanced research computing, with topics ranging from diversity and workforce development to cutting-edge applications of AI and high-performance computing.
By comparing these different methods of few-shot learning, the Science IT team at Berkeley Lab is helping the scientific community better understand the strengths and trade-offs of today’s most promising AI approaches.