Today we’re speaking with three researchers, Karan Grewal, Abhi Iyer, and Akash Velu, about multi-task learning and how their new brain-inspired approach can help tackle it.
We’ll be discussing what a task is, what exactly we mean by multi-task systems, distances between tasks, the difference between continual learning and multi-task learning, catastrophic forgetting and catastrophic interference and their causes, existing approaches such as context-dependent gating and synaptic intelligence, and the roles of scale and sparsity. We’ll also cover some basics of the brain, including dendrites, proximal and distal dendrites, and apical and basal dendrites; how active dendrites can help solve the challenge of gradient interference in multi-task learning; how the researchers attach dendrites to each neuron in their deep learning models; their various approaches to representing context vectors; the challenges of brain-inspired approaches to machine learning; and some speculation about the future of this line of research.
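To make the active-dendrites idea concrete before listening: a minimal sketch of how dendritic segments might gate a neuron's feedforward output based on a context vector. All names, sizes, and the exact gating rule (max over segment responses, squashed through a sigmoid) are illustrative assumptions for this sketch, not details confirmed in the episode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, chosen only for illustration.
input_dim, hidden_dim, context_dim, num_segments = 16, 8, 4, 5

# Ordinary feedforward weights, plus several dendritic segments per neuron.
W = rng.standard_normal((hidden_dim, input_dim))
segments = rng.standard_normal((hidden_dim, num_segments, context_dim))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def active_dendrite_layer(x, context):
    feedforward = W @ x                    # usual linear response, shape (hidden_dim,)
    # Each neuron's segments respond to the context; the strongest match wins.
    seg_activations = segments @ context   # shape (hidden_dim, num_segments)
    strongest = seg_activations.max(axis=1)
    # The winning dendritic signal modulates (gates) the feedforward output,
    # so different contexts route the same input through different subnetworks.
    return feedforward * sigmoid(strongest)

x = rng.standard_normal(input_dim)
ctx_a = rng.standard_normal(context_dim)   # e.g. a vector identifying task A
ctx_b = rng.standard_normal(context_dim)   # e.g. a vector identifying task B
out_a = active_dendrite_layer(x, ctx_a)
out_b = active_dendrite_layer(x, ctx_b)
```

The same input produces different gated outputs under different context vectors, which is the intuition behind using dendrites to reduce interference between tasks.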
The podcast and the accompanying cover image on this page belong to
AutoML Media. The podcast's content is created by AutoML Media and not by,
or together with, Poddtoppen.