05 June 2025 12:30 - 12:50
Panel discussion: Human Oversight and LLMs - How to Ensure Alignment, Safety, and Trust
Aligning large language models with specific objectives and use cases requires a robust fine-tuning pipeline that ensures both precision and efficiency.
Panelists will discuss the technical intricacies of building such pipelines, sharing insights into data curation, model adjustment strategies, and alignment monitoring. They will also address challenges such as maintaining model generalization and mitigating the risks of bias and drift.