Today we’re joined by Hanna Wallach, a Principal Researcher at Microsoft Research.

Hanna and I really dig into how bias and a lack of interpretability and transparency show up across ML. We discuss the role that human biases, even inadvertent ones, play in tainting data, whether deploying "fair" ML models can actually be achieved in practice, and much more. Hanna points us to a TON of resources for further exploring the topic of fairness in ML, which you'll find at twimlai.com/talk

The podcast and the accompanying cover image on this page belong to Sam Charrington. The content of the podcast is created by Sam Charrington and not by, or together with, Poddtoppen.