In this episode of Computer Vision Decoded, we are going to dive into implicit neural representations.

We are joined by Itzik Ben-Shabat, a Visiting Research Fellow at the Australian National University (ANU) and the Technion – Israel Institute of Technology, as well as the host of the Talking Papers Podcast.

You will gain a core understanding of implicit neural representations, including key concepts and terminology, how they are being used in applications today, and Itzik's research into improving output with limited input data.

Episode timeline:

00:00 Intro
01:23 Overview of what implicit neural representations are
04:08 How INR compares and contrasts with a NeRF
08:17 Why Itzik pursued this line of research
10:56 What is normalization and what are normals
13:13 Past research people should read to learn about the basics of INR
16:10 What is an implicit representation (without the neural network)
24:27 What is DiGS and what problem with INR does it solve?
35:54 What is OG-INR and what problem with INR does it solve?
40:43 What software can researchers use to understand INR?
49:15 What non-scientists should focus on to learn about INR

Itzik's Website: https://www.itzikbs.com/
Follow Itzik on Twitter: https://twitter.com/sitzikbs
Follow Itzik on LinkedIn: https://www.linkedin.com/in/yizhak-itzik-ben-shabat-67b3b1b7/
Talking Papers Podcast: https://talking.papers.podcast.itzikbs.com/

Follow Jared Heinly on Twitter: https://twitter.com/JaredHeinly
Follow Jonathan Stephens on Twitter: https://twitter.com/jonstephens85

Referenced past episode - What is CVPR: https://share.transistor.fm/s/15edb19d

This episode is brought to you by EveryPoint. Learn more about how EveryPoint is building an infinitely scalable data collection and processing platform for the next generation of spatial computing applications and services: https://www.everypoint.io
