
Bhanu Prakash Reddy Rella
Senior Data Engineer
Meta
Bhanu Prakash Reddy Rella is an experienced data and AI engineer specialising in energy-efficient AI, sustainable data platforms, and large-scale distributed systems. He is a Senior Data Engineer at Meta, where he works on the design and optimisation of high-volume data pipelines and AI analytics systems used across global products. His work focuses on building scalable, reliable data ecosystems while improving computational efficiency and reducing energy consumption in AI workloads.

Bhanu has led end-to-end architectures supporting machine learning pipelines, real-time analytics, and large-scale decision systems across complex, high-traffic environments. Prior to Meta, he contributed to enterprise data platforms in Fortune 50 organisations, working across advertising intelligence, cloud modernisation, analytics automation, and real-time data systems.

Bhanu is also the founder of the Green AI Initiative, a global community focused on advancing sustainable AI and low-energy data practices, and the author of Energy-Efficient Computing for Modern AI Applications, which explores practical approaches to reducing the environmental impact of AI systems.
15 April 2026 12:00 - 12:30
Panel | From single models to modular systems: Architecting reliable next-generation AI
As teams move beyond simple “one LLM + prompt” prototypes, their stacks start to look more like systems: multiple models, agents, tools, data layers, and evaluation loops all stitched together. With that shift comes a new set of headaches: unexpected behaviour at scale, fragile orchestration, unclear ownership, and architectures that are hard to evolve once they’re in production. In this session, engineering and product leaders unpack how they’re designing modular, multi-component AI systems that can still be understood, governed, and trusted. Expect candid conversations about when modularity actually helps, where it introduces new failure modes, and how teams are thinking about patterns like MCP, agent coordination, and shared infrastructure.

Key takeaways:
→ How teams are structuring modular AI systems without creating brittle dependencies.
→ Architectural patterns that improve reliability as models, agents, and tools interact.
→ Where modularity introduces new risks, and how leaders are mitigating them.
→ How to design systems that stay adaptable as capabilities and requirements evolve.