
Naila Khan
Senior Engineering Manager
Google
Naila is a seasoned engineering leader with over 20 years of experience in designing and delivering large-scale distributed systems. As a Senior Engineering Manager at Google, she combines technical expertise with a thoughtful, people-centered leadership style to drive impact through customer-focused, execution-oriented engineering teams.

With 5+ years of experience as a manager, Naila has a proven track record of building and motivating high-performing teams. She has successfully recruited talented engineers, defined clear team visions and charters, and guided organizations through the delivery of complex, scalable solutions in domains such as voice-based assistants (Alexa), image processing, and web GIS.

Naila is skilled at aligning cross-functional teams, spanning product, business development, and technology, around a shared strategy, ensuring technical efforts align with the organization's North Star. She champions agile practices to consistently meet delivery milestones and has mentored team members to foster their professional growth and career success. Known for her execution focus and ability to navigate ambiguity, Naila is a trusted leader who drives innovation and delivers impactful solutions.
26 August 2026 16:30 - 17:00
Panel | Why do agent systems break in production? The gap between design and reality
Agent systems rarely break in the way the architecture diagram suggested they would. On paper, the workflow is clean: context comes in, the agent reasons, tools are called, actions happen, and the system improves. In production, inputs are messier, tool calls fail, context drifts, users behave unpredictably, and small errors compound across multi-step workflows.

This closing panel looks at what actually breaks once agent systems move into real environments. From evaluation gaps and observability blind spots to orchestration failures, latency, cost, and recovery, the discussion will unpack why production exposes problems that design rarely anticipates, and what teams are learning about building agent systems that can hold up under real usage.

Key takeaways:
→ Where agent systems most often break once they move beyond controlled testing
→ Why evaluation, observability, and debugging become harder in multi-step workflows
→ What teams are doing to build more reliable, recoverable, production-ready agent systems