
Karen McNeil, PhD
Director, LLM Red Teaming
Innodata
Dr. Karen McNeil is the Director of LLM Red Teaming at Innodata, where she leads the product strategy and quality of AI adversarial testing services. She specializes in evaluating and stress-testing LLM security, designing methodologies to uncover model vulnerabilities, and training expert red-teaming writers. Karen has played a key role in developing systematic approaches to LLM adversarial evaluation, including taxonomy-driven testing and automated red-teaming strategies. Before joining Innodata, Karen was a Language Engineer at Amazon Web Services. She holds a PhD in Arabic Linguistics from Georgetown University, where her research focused on computational approaches to dialectal Arabic. She has published extensively on the development of Tunisian Arabic and is also an accomplished literary translator, with works including the acclaimed novel A Calamity of Noble Houses by Amira Ghenim.
12 February 2025 09:30 - 10:00
Deploying GenAI applications safely and responsibly
As generative AI systems rapidly transition from research labs to real-world applications, ensuring reliability, safety, and trust is more critical than ever. In this session, we explore how to balance innovation with responsibility by examining the entire lifecycle of AI model development—from selecting the right model and crafting effective prompts to fine-tuning on private datasets and conducting rigorous, comprehensive testing. We will discuss how to detect and mitigate pitfalls such as hallucinations and factually inaccurate outputs, highlighting the importance of both manual red-teaming approaches and automated benchmarking tools. Through real-world examples, you'll see where AI models can fail and how to address vulnerabilities. We will also introduce new strategies for continuous monitoring and feedback, ensuring your AI stays aligned and resilient in production. Our live demo will showcase automated evaluation models and real-time red-teaming, equipping you with the knowledge to deploy AI solutions responsibly—so you can innovate without compromising trust or integrity.