15 April 2026 13:30 - 13:50
Defining control boundaries in autonomous systems
Once AI systems are allowed to take actions, the hardest problem isn’t deciding what they can do - it’s deciding how to stop them when things go wrong.
Engineering teams quickly encounter failure modes that weren't visible during development: cascading actions across tools, unclear ownership when decisions have real-world impact, and human-in-the-loop mechanisms that either slow systems to a crawl or fail to prevent incidents. Without explicit control boundaries, autonomy becomes operational risk.
This session focuses on how teams design control into action-capable AI systems from the start.
Key takeaways:
→ How teams define and enforce decision authority, including where autonomy ends and human intervention begins.
→ Practical patterns for runtime control, including action gating, escalation paths, and state isolation (a minimal gating sketch follows this list).
→ Strategies for containing failures and rolling back agent behavior before issues cascade system-wide (a rollback sketch also follows this list).
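To make the second takeaway concrete, here is a minimal sketch of action gating with a human escalation path. It is an illustration only, not the session's implementation: all names (ActionGate, Action, Risk, Escalation) are hypothetical, and the sketch assumes a simple two-tier risk model where low-risk actions run autonomously and high-risk actions pause for human approval.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Callable

    class Risk(Enum):
        LOW = auto()   # read-only or trivially reversible actions
        HIGH = auto()  # external side effects, hard to reverse

    @dataclass(frozen=True)
    class Action:
        name: str
        risk: Risk
        run: Callable[[], object]  # the effect to execute if approved

    class Escalation(Exception):
        """Raised when an action exceeds the agent's decision authority."""

    class ActionGate:
        # LOW-risk actions execute autonomously; HIGH-risk actions are
        # paused and routed to a human approver before they run.
        def __init__(self, approve: Callable[[Action], bool]):
            self._approve = approve  # human-in-the-loop hook (UI, queue, pager)

        def execute(self, action: Action) -> object:
            if action.risk is Risk.HIGH and not self._approve(action):
                raise Escalation(f"'{action.name}' rejected by reviewer")
            return action.run()

Usage might look like this, with a console prompt standing in for a real review queue:

    gate = ActionGate(approve=lambda a: input(f"Allow {a.name}? [y/N] ") == "y")
    gate.execute(Action("issue_refund", Risk.HIGH, lambda: "refund sent"))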
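The rollback strategy in the third takeaway can be sketched the same way: pair every executed action with a compensating undo step in a journal, so a failing run can be unwound before its effects spread. Again, the names (ActionJournal, record, rollback) are invented for this example, and it assumes each action has a known compensation.

    from typing import Callable, List, Tuple

    class ActionJournal:
        # Pairs each executed action with a compensating undo step so a
        # run can be unwound newest-first before a failure cascades.
        def __init__(self) -> None:
            self._log: List[Tuple[str, Callable[[], None]]] = []

        def record(self, name: str, undo: Callable[[], None]) -> None:
            self._log.append((name, undo))

        def rollback(self) -> None:
            while self._log:
                name, undo = self._log.pop()  # LIFO: undo dependents first
                undo()

A caller would record an undo alongside each action (for example, journal.record("create_ticket", lambda: delete_ticket(ticket_id))) and invoke journal.rollback() in its exception handler. Reversing in last-in-first-out order matters because later actions often depend on earlier ones.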