09 October 2025 13:30 - 14:00
Live workshop | From biased benchmarks to trusted, explainable evaluation
This live workshop explores the limitations of current AI evaluation benchmarks, highlighting how hidden biases can distort reported model performance and undermine trust.
Attendees will learn practical approaches to designing more transparent, explainable, and domain-relevant evaluation frameworks for responsible AI deployment.