The inspiration for this episode is a rather technical tome entitled Information Theory, Inference, and Learning Algorithms by David MacKay. It's basically an information theory / machine learning textbook. I initially got it because it's known to be a rewarding work for the nerdiest people in the machine learning (a.k.a. "artificial intelligence") world, those who want to get down to fundamentals and understand how concepts from the apparently separate fields of information theory and inference interrelate.
I haven't finished the book, and as of this writing I'm not actively reading it. Still, I wanted to talk about something from it on the podcast. In the book's early chapters, MacKay mentions that learning is, in a way, a kind of information compression. This fascinating idea has been circling in my head for months, so I wanted to comment on it a bit in this episode.
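To make the "learning as compression" idea concrete, here is a minimal sketch (my own toy illustration, not an example taken from MacKay's book): learning the symbol frequencies of a text lets you code it in fewer bits per symbol than a uniform code that assumes nothing, which is exactly compression.

```python
import math
from collections import Counter

# Toy data: highly repetitive, so there is structure to learn.
text = "abracadabra" * 100

# Before learning: assume nothing about the data and use a uniform
# code over the alphabet -> log2(alphabet size) bits per symbol.
alphabet = sorted(set(text))
uniform_bits = math.log2(len(alphabet))

# After learning: estimate each symbol's probability from the data
# and code it with -log2(probability) bits (the Shannon limit).
# The average cost is then the entropy of the learned distribution.
counts = Counter(text)
total = len(text)
learned_bits = -sum(
    (c / total) * math.log2(c / total) for c in counts.values()
)

print(f"uniform code:  {uniform_bits:.3f} bits/symbol")
print(f"learned model: {learned_bits:.3f} bits/symbol")
```

The learned model always does at least as well as the uniform code, and strictly better whenever the data has any statistical structure — the bits saved are a measure of how much the model has learned.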
Enjoy the episode.
The podcast and the accompanying cover image on this page belong to Stanislaw Pstrokonski. The content of the podcast is created by Stanislaw Pstrokonski and not by, or together with, Poddtoppen.