When a standard or classification is built around an idea of “normal”, the decision-making system built on it will fail those who fall outside that idea. It will fail the extraordinary. That is what happened to athlete Dutee Chand, who went on to win a landmark case that struck down the sporting standard based on testosterone levels.

After all, nature is not neat. And it’s the outliers that push the species forward – something that automated decision-making systems, including algorithms, don’t really account for. In Dutee’s case, understanding how the decision was made, and the transparency around it, was crucial to her right to redress.

In the final episode of Let’s Talk About Big Data, we talk to Laura Reig, a PhD student at the Technical University of Denmark, about how AI makes mistakes in gender classification, and to Chirag Agarwal, a research fellow at Harvard University, about what explainability in AI means. We also talk to Joy Lu, an associate professor at Carnegie Mellon University, about what makes a good explanation of what an algorithm is doing.

Listen.

Hosted on Acast. See acast.com/privacy for more information.

The podcast and the accompanying cover image on this page belong to Newslaundry.com. The content of the podcast is created by Newslaundry.com and not by, or together with, Poddtoppen.