Nikhil Pinnaparaju
Machine Learning Research Engineer
Stability AI
Nikhil is a Machine Learning Research Engineer at Stability AI, where he works on building and improving large-scale language models. Previously, he was an Applied Scientist at Microsoft India (R&D), focusing on personalized autosuggestion systems. He has published research at leading international conferences including EMNLP, WWW, and PAKDD, and was part of the winning team in the EMNLP 2020 LongSumm and LaySumm shared tasks. Nikhil holds Bachelor’s and Master’s degrees from IIIT Hyderabad, with a primary focus on NLP and applied machine learning.
25 February 2025, 15:50–16:10
Moving beyond pretraining: When and how to fine-tune language models
Base models are powerful, but they don’t solve every problem out of the box. This session breaks down when fine-tuning is actually worth the effort and when simpler approaches perform just as well. Nikhil will walk through a practical decision framework for choosing between prompt engineering, RAG, fine-tuning, or building a model from scratch, grounded in real failure cases teams run into in production. He’ll compare full fine-tuning with parameter-efficient methods (PEFT), unpacking the trade-offs across cost, compute, and data requirements. The session also explores how data quality outweighs sheer volume, how to create high-signal training examples, and where synthetic data can (and can’t) help.

Key takeaways
→ How to decide when fine-tuning is the right tool, and when it isn’t.
→ Practical trade-offs between prompt engineering, RAG, PEFT, and full fine-tuning.
→ What actually matters in training data quality and evaluation.
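
For readers unfamiliar with PEFT, here is a minimal sketch of what a LoRA-style parameter-efficient setup looks like using the Hugging Face peft library. The base model, target modules, and hyperparameters below are illustrative assumptions, not details from the talk:

# Minimal LoRA (PEFT) setup with Hugging Face peft.
# Model choice and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # any causal LM works

config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                        # rank of the low-rank adapter matrices
    lora_alpha=16,              # scaling factor for adapter updates
    lora_dropout=0.05,
    target_modules=["c_attn"],  # attention projection layer in GPT-2
)

model = get_peft_model(base, config)
# Only the small adapter matrices are trainable; the base weights stay
# frozen, which is the core cost/compute trade-off versus full fine-tuning.
model.print_trainable_parameters()

Training then proceeds with a standard fine-tuning loop; because only a small fraction of parameters update, PEFT typically fits on far smaller GPUs than full fine-tuning, at the cost of less capacity to shift the model’s behaviour.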