Scrutinizing AI Models to Reduce Unintended Bias
Event Overview
Today's artificial intelligence models are revolutionary, reshaping industries by automating complex business decisions previously left to experts. Success with AI, as many now know, is predicated on data. So, what happens when the data we feed our AI models is inherently biased? Join WWT Data Scientist Charlene Ulrich and Big Data Consultant Daniel Cholakov as they discuss the growing need for leaders to scrutinize AI and natural language processing (NLP) models for bias. Charlene and Daniel use a recent WWT Research paper on mitigating bias in AI using debias-GAN as a jumping-off point for a conversation that covers why de-biasing AI is important, where and how bias can arise, and how you can use our findings to improve your AI strategies.
Featured Speakers
What to expect
- Get hands-on, on-demand experience
- Capture real-world insights and research
- Leverage practical and actionable guidance
- Compare, contrast and validate multi-vendor solutions
- Think creatively about strategy
- Tap into our industry-leading expertise and partnerships
Goals and Objectives
Gain clarity on the important issue of bias in AI, understand how to quantify bias in AI, learn about WWT's latest research on the topic, and leave with steps for baking fairness into your AI strategies.
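The event itself does not prescribe a particular metric, but as a rough illustration of what "quantifying bias" can look like in practice, the sketch below computes two widely used fairness measures, the demographic parity difference and the disparate impact ratio, on hypothetical model outputs. The function names and data are illustrative assumptions only, not drawn from the WWT research.

```python
# Minimal sketch (illustrative, not from the WWT paper) of quantifying bias by
# comparing a model's positive-prediction rates across two groups.

import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return rate_b - rate_a

def disparate_impact_ratio(y_pred, group):
    """Ratio of positive-prediction rates (the '80% rule' compares this to 0.8)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return min(rate_a, rate_b) / max(rate_a, rate_b)

if __name__ == "__main__":
    # Hypothetical model outputs: 1 = approved, 0 = denied, split by a protected attribute.
    preds  = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
    groups = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print("Demographic parity difference:", demographic_parity_difference(preds, groups))
    print("Disparate impact ratio:", disparate_impact_ratio(preds, groups))
```

A gap in either measure flags that the model treats the groups differently on average; metrics like these are one common starting point before applying mitigation techniques such as the debias-GAN approach discussed in the related research.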
Who should attend?
Data scientists, engineers, and consultants looking to stay ahead of the curve in AI and ML technologies.
Related content
Mitigating Bias in AI Using Debias-GAN