
Bryant Linton
AI Safeguards, Policy & Labeling Specialist (Claude)
Anthropic
Bryant Linton is a Policy and Labeling Specialist at Anthropic, where he works on AI safeguards, ethics, and risk assessment for the Claude model. Previously, he led GenAI data operations at Meta, managing teams focused on content originality, impersonation detection, and LLM data governance. With experience spanning AI safety, data labeling, and applied privacy practices, Bryant brings a critical perspective to responsible AI deployment at scale.
30 October 2025 15:20 - 15:40
Interactive panel | Addressing ethical challenges through explainability
This session explores how explainability techniques can help mitigate ethical risks in AI systems, from biased outputs to opaque decision-making. Panelists will share practical approaches to making models more transparent, accountable, and aligned with real-world impact.

Key learnings:
→ Understand the role of explainability in identifying and reducing bias in model behavior
→ Access practical tools and techniques for increasing transparency in complex systems
→ Debate how explainability supports trust, compliance, and responsible AI deployment