
Theoretical Memory Efficiency Gains with LoRA for Single and Multi-GPU Settings

18 Jun 2025

Learn how LoRA improves memory efficiency for training large models on single and multi-GPU setups, with comparisons to full finetuning and FSDP.


MetaMathQA: AI-Augmented Math Dataset with 395K Samples

18 Jun 2025

Explore how MetaMathQA uses GPT-3.5 to rephrase, verify, and augment math reasoning questions into a 395K-sample training dataset.


LionW Outperforms AdamW in LoRA and Full Fine-Tuning Tasks

18 Jun 2025

LionW outperforms AdamW in both LoRA and full fine-tuning of code models, delivering stronger results across learning rates on HumanEval and related benchmarks.


How Effective Is LoRA Finetuning for Large Language Models?

17 Jun 2025

This study compares LoRA and full finetuning on code and math tasks, revealing trade-offs in performance, generalization, and hyperparameter sensitivity.


LoRA's Limitations in Code and Math Tasks

17 Jun 2025

Explore how LoRA compares to full finetuning across tasks and domains. See what new studies reveal about its efficiency, tradeoffs, and performance gaps.


How Module Type and Rank Impact LoRA’s Effectiveness in Model Training

17 Jun 2025

Explore why full finetuning learns higher-rank weight perturbations than LoRA, and how to choose LoRA's target modules and rank for code and math tasks.


Does LoRA Fine-Tuning Help AI Models Forget Less?

17 Jun 2025

LoRA fine-tuning helps LLMs learn new tasks with less forgetting and better output diversity compared to full fine-tuning.


Over Time, LoRA Holds Up Better Than Full Finetuning

17 Jun 2025

LoRA forgets less than full finetuning on code and math benchmarks, showing stronger retention and slower degradation in AI model performance.


LoRA Falls Short of Full Finetuning in Programming and Math Tasks

17 Jun 2025

LoRA underperforms full finetuning in code and math tasks, showing lower accuracy and sample efficiency across benchmarks like HumanEval and GSM8K.