This survey explores Parameter-Efficient Continual Fine-Tuning (PECFT), an approach that combines the adaptability of Continual Learning (CL) with the efficiency of Parameter-Efficient Fine-Tuning (PEFT), enabling large pre-trained models to learn new tasks sequentially without forgetting previous knowledge and without extensive retraining. The paper reviews CL algorithms and PEFT techniques, examines the current state of the art in PECFT, discusses evaluation metrics, and suggests future research directions that highlight the synergy between the two fields for advancing adaptable AI. The authors aim to guide researchers and pave the way for novel work on more effective and sustainable machine learning models.
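To make the PEFT side of this pairing concrete, here is a minimal sketch of one widely used PEFT technique, LoRA-style low-rank adaptation. The names, shapes, and rank below are illustrative assumptions, not details from the survey: a frozen pre-trained weight matrix is augmented with a small trainable low-rank update, so each new task trains far fewer parameters than full fine-tuning.

```python
import numpy as np

# Illustrative LoRA-style sketch (assumed dimensions, not from the survey).
rng = np.random.default_rng(0)

d_in, d_out, rank = 8, 8, 2

W = rng.normal(size=(d_out, d_in))        # frozen pre-trained weight
A = rng.normal(size=(rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))               # trainable factor, zero-initialized

def forward(x):
    # Frozen path plus low-rank update: only A and B would be trained,
    # touching rank*(d_in + d_out) parameters instead of d_in*d_out.
    return W @ x + B @ (A @ x)

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted model matches the frozen one.
assert np.allclose(forward(x), W @ x)

full_params = d_in * d_out
lora_params = rank * (d_in + d_out)
print(f"trainable params: {lora_params} vs full fine-tune: {full_params}")
```

In a continual-learning setting, one such adapter can be kept per task while the shared backbone stays frozen, which is one route by which PEFT methods mitigate catastrophic forgetting.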