Partnerships

Get your ticket

Call to action
Your text goes here. Insert your content, thoughts, or information in this space.
Button

Back to speakers

Alex
Gatz
Staff Security Architect
Ziosec
As Staff Security Architect at ZioSec, Alex led the design and execution of the company’s Adversarial Testing Platform for AI Agents, a system that continuously orchestrates hundreds, soon thousands, of automated controlled attacks on AI agents to validate the effectiveness of their safety guardrails. Alex’s research has made him one of the foremost practitioners in adversarial testing of generative and agentic AI. He was the principal architect behind ZioSec’s red-team automation framework, which operationalizes what enterprises have long struggled to do: continuously measure whether AI systems remain safe and compliant once deployed. His work bridges the technical rigor of offensive security with the practical demands of governance, risk, and compliance (GRC). Prior to ZioSec, Alex was Senior Security Researcher at ThreatX, a next-generation firewall as a service where he developed multiple patents.
Button
20 November 2025 09:30 - 10:00
Break your agent before someone else does: A builder's guide to AI security
You've built an AI agent. It works beautifully in testing. But have you tried to break it? The barrier to building AI agents has never been lower. LangChain, n8n, OpenAI's agent builder, MCP tools, or even ChatGPT-generated Python code can get you from idea to deployment in hours. But this accessibility comes with a hidden cost: most AI builders are shipping agents with critical security vulnerabilities they don't even know exist. In this hands-on session, we'll attack a live AI agent architecture together, the same setup many developers are using in production. You'll see real exploits against LangChain-based agents, MCP tools, and common integration patterns, and learn exactly how attackers think about your code.