Artificial general intelligence, or superintelligence, is not right around the corner, despite what AI companies would have you believe, and that’s because intelligence is really hard.
Major AI companies like OpenAI and Anthropic (as well as Ilya Sutskever’s new venture, Safe Superintelligence Inc.) have the explicit goal of creating artificial general intelligence (AGI), and they claim to be very close to doing so, using technology that doesn’t seem capable of getting us there.
So let's talk about intelligence, both human and artificial.
What is artificial intelligence? What is intelligence? Are we going to be replaced or killed by superintelligent robots? Are we on the precipice of a techno-utopia, or some kind of singularity?
These are the questions I explore in trying to offer a layman’s overview of why we’re far from AGI and superintelligence. Among other things, I highlight the limitations of current AI systems, including their lack of trustworthiness, their reliance on bottom-up machine learning, and their inability to engage in true reasoning or exhibit common sense. I also introduce abductive inference, a rarely discussed type of reasoning.
Why do smart people want us to think that they’ve solved intelligence when they are smart enough to know they haven’t? Keep that question in mind as we go.