An artificial intelligence capable of improving itself risks growing more intelligent than any human and slipping beyond our control. Josh explains why a superintelligent AI that we haven’t planned for would be extremely bad for humankind. (Original score by Point Lobo.)

Interviewees: Nick Bostrom, Oxford University philosopher and founder of the Future of Humanity Institute; David Pearce, philosopher and co-founder of the World Transhumanist Association (Humanity+); Sebastian Farquhar, Oxford University philosopher.

The End Of The World with Josh Clark

Artificial Intelligence