15 April 2026 14:00 - 14:30
Tracing decisions across prompts, data, and tools
AI bugs are hard to reproduce because behavior is shaped by more than the input alone. Prompt state, retrieved context, tool availability, and execution order all influence outcomes, often in ways that aren’t visible once a request has completed.
This session focuses on how teams trace and diagnose AI behavior in production. We’ll discuss how engineers follow decision paths across prompts, data retrieval, and tool calls, reconstruct execution context after the fact, and isolate the source of unexpected behavior.
The emphasis is on practical techniques for debugging systems where behavior is probabilistic, stateful, and dependent on runtime conditions rather than deterministic code paths.
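As a flavor of what such tracing can look like, here is a minimal sketch of a decision-trace recorder. All names here (`DecisionTrace`, the event kinds, the example template and tool) are hypothetical illustrations, not a specific tool discussed in the session: the idea is simply that each stage of a request is logged under a shared trace ID so the execution context can be reconstructed afterward.

```python
import json
import time
import uuid

class DecisionTrace:
    """Records each step of one AI request (prompt, retrieval, tool calls)
    under a shared trace ID so the decision path can be replayed later."""

    def __init__(self):
        self.trace_id = str(uuid.uuid4())
        self.events = []

    def record(self, kind, **details):
        # Append a timestamped event; 'kind' might be "prompt",
        # "retrieval", or "tool_call". Ordering is preserved, which
        # matters when execution order itself shapes the outcome.
        self.events.append({
            "trace_id": self.trace_id,
            "ts": time.time(),
            "kind": kind,
            "details": details,
        })

    def dump(self):
        # Serialize the full decision path for post-hoc debugging.
        return json.dumps(self.events, indent=2)

# Hypothetical usage: log each stage of a single request.
trace = DecisionTrace()
trace.record("prompt", template="answer_v2", user_query="refund policy?")
trace.record("retrieval", source="kb", doc_ids=["kb-114", "kb-207"])
trace.record("tool_call", name="lookup_order", args={"order_id": "A-42"})
print(trace.dump())
```

With even this much structure captured at runtime, an engineer can answer after the fact which prompt template was in play, which documents were retrieved, and which tools actually fired, rather than trying to re-create those conditions from the final output alone.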