Ashay Satav
Director, Product Management
eBay
Ashay Satav is a seasoned product leader with nearly 18 years of experience across SaaS, e-commerce, and fintech. He has a strong track record of delivering innovative products in AI, APIs, and platform ecosystems, creating meaningful customer value at companies including eBay, Intuit, and Blackhawk Network.

As Director of Product Management at eBay, Ashay leads initiatives using generative AI to supercharge listing tools, helping small businesses publish inventory faster and with less friction. His team is also modernizing eBay’s public API framework—simplifying developer integration and reimagining the global listing platform with a modern, resilient tech stack that improves seller experience and listing conversion. Previously at Intuit, Ashay led the external developer experience and API ecosystem for QuickBooks, enabling small businesses to operate more efficiently across connected products. At Blackhawk Network, he pioneered omni-commerce APIs and revamped core operations platforms—including orders, refunds, and chargebacks—to support strategic partnerships and native developer integrations for gift card offerings.

Beyond his professional roles, Ashay is an active member of the Forbes Technology Council, IEEE, the Product Development and Management Association, and the board of Products That Count; he is a Fellow at Hackathon Raptors and regularly contributes to the product community through speaking, judging, and published business and research articles.
15 April 2026 12:00 - 12:30
Panel | Evaluating autonomous agents: Closing the gap between tests and real-world behaviour
Evaluating autonomous agents is fundamentally harder than evaluating static models or prompt-based systems. Behaviour unfolds over sequences of actions, interacts with tools and environments, and changes under real traffic in ways that are difficult to capture with offline tests alone. In this panel, engineers and system builders compare how they evaluate agent behaviour in practice. The discussion will explore where traditional testing breaks down, how teams reason about trajectories rather than single outputs, and what signals matter most once agents are operating in dynamic, real-world environments. Expect candid perspectives on what works, what doesn’t, and where evaluation remains an open problem for agentic systems.