Today we’re joined by Tijmen Blankevoort, a staff engineer at Qualcomm who leads their compression and quantization research teams. In our conversation with Tijmen we discuss:

• The ins and outs of compressing and quantizing ML models, specifically neural networks,

• How much models can actually be compressed, and the best ways to achieve compression,

• A few recent papers, including “The Lottery Ticket Hypothesis.”

The podcast and the associated cover image on this page belong to Sam Charrington. The content of the podcast is created by Sam Charrington and not by, or together with, Poddtoppen.