25 February 2025 15:50 - 16:10
Moving beyond pretraining: When and how to fine-tune language models
Base models are powerful, but they don’t solve every problem out of the box.
This session breaks down when fine-tuning is actually worth the effort, and when simpler approaches perform just as well.
Nikhil will walk through a practical decision framework for choosing between prompt engineering, RAG, fine-tuning, or building a model from scratch, grounded in real failure cases teams run into in production.
He’ll compare full fine-tuning with parameter-efficient fine-tuning (PEFT) methods, unpacking the trade-offs across cost, compute, and data requirements.
The session also explores why data quality outweighs sheer volume, how to create high-signal training examples, and where synthetic data can (and can’t) help.
Key takeaways
→ How to decide when fine-tuning is the right tool, and when it isn’t.
→ Practical trade-offs between prompt engineering, RAG, PEFT, and full fine-tuning.
→ What actually matters in training data quality and evaluation.