
Nishant Satya Lakshmikanth
Senior Engineering Leader
LinkedIn
Nishant Lakshmikanth is a senior engineering leader at LinkedIn, where he leads Network Growth AI Infrastructure, which powers large-scale recommendation systems such as People You May Know across more than 1.3 billion members. His platforms generate over 17,000 new professional connections per minute and operate on LinkedIn’s 474-billion-edge Economic Graph. With over 12 years of experience across LinkedIn, Amazon Web Services, and Cisco, Nishant specializes in architecting modular and reliable AI systems that integrate large language models, graph neural networks, and multi-stage retrieval and ranking architectures at global scale. His work spans distributed training systems, high-throughput inference platforms, multi-tenancy frameworks, and cost-efficient AI infrastructure. He is a named inventor on seven patents in distributed systems and cloud infrastructure and has been invited to speak at leading international conferences on large-scale AI and distributed architectures. Nishant is deeply interested in advancing the fields of AI infrastructure, distributed systems, and cloud computing, with a focus on building scalable, production-grade systems that power real-world impact.
15 April 2025 12:00 - 12:30
Panel | From single models to modular systems: Architecting reliable next-generation AI
As teams move beyond simple “one LLM + prompt” prototypes, their stacks start to look more like systems: multiple models, agents, tools, data layers, and evaluation loops all stitched together. With that shift comes a new set of headaches: unexpected behaviour at scale, fragile orchestration, unclear ownership, and architectures that are hard to evolve once they’re in production. In this session, engineering and product leaders unpack how they’re designing modular, multi-component AI systems that can still be understood, governed, and trusted. Expect candid conversations about when modularity actually helps, where it introduces new failure modes, and how teams are thinking about patterns like MCP, agent coordination, and shared infrastructure.
Key takeaways:
→ How teams are structuring modular AI systems without creating brittle dependencies.
→ Architectural patterns that improve reliability as models, agents, and tools interact.
→ Where modularity introduces new risks, and how leaders are mitigating them.
→ How to design systems that stay adaptable as capabilities and requirements evolve.