In this episode I caught up with Yotam Azriel to learn about interpretable deep learning. Deep learning models are often criticised as "black boxes" due to their complex architectures and large number of parameters. Model interpretability is crucial because it enables stakeholders to make informed decisions based on insights into how predictions were made. I think this is an important topic, and I learned a lot about the sophisticated techniques and engineering required to develop a platform for model interpretability. You can also view the video of this recording on YouTube.

* tensorleap.ai

* Yotam on LinkedIn

Bio: Yotam is an expert in machine learning and deep learning, with ten years of experience in these fields. He has been involved in large-scale military and government development projects, as well as with startups. Yotam has developed and led AI projects from research to production, and he also acts as a professional consultant to companies developing AI. His expertise includes image and video recognition, NLP, algorithmic trading, and signal analysis. Yotam is an autodidact with strong leadership qualities and great communication skills.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.satellite-image-deep-learning.com

The podcast and associated cover image on this page belong to Robin Cole. The content of the podcast is created by Robin Cole and not by, or together with, Poddtoppen.