Speaker: Doroteo Torre Toledano

Abstract: The current trend in machine learning assumes that incoming data follow a fixed distribution, so that a fixed model can be learned to map inputs to output classes. However, real applications in many fields are rarely static. In many cases, input distributions change over time (e.g. language changes over time and with the topics discussed) or the task itself evolves (e.g. new audio classes are incorporated over time). Learning continuously throughout a model's lifetime is fundamental to deploying machine learning solutions that are robust to these changes. Continual Learning changes the established paradigm of model learning and faces novel challenges, such as managing the stability-plasticity dilemma, which is crucial to avoid one of the main problems of continual learning: catastrophic forgetting. This field is still relatively young, but it will surely be further developed in the coming years. In this talk we introduce the topic of Continual Learning by summarizing the following recent review article:

Cossu, Andrea, et al. “Continual learning for recurrent neural networks: an empirical evaluation.” Neural Networks 143 (2021): 607-627.
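
As a minimal illustration of catastrophic forgetting (a sketch for intuition, not taken from the talk or the cited article; the synthetic tasks, network size, and hyperparameters are illustrative assumptions), the following PyTorch snippet trains a small network sequentially on two toy tasks and shows how accuracy on the first task typically collapses after training on the second:

```python
# Minimal sketch of catastrophic forgetting with two synthetic 2-D tasks.
# Both tasks are individually (and even jointly) learnable, but plain
# sequential training without any continual-learning mechanism tends to
# overwrite the parameters that encoded the first task.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center0, center1, n=200):
    # Two Gaussian blobs, one per class, centered at the given points.
    x0 = torch.randn(n, 2) * 0.5 + torch.tensor(center0)
    x1 = torch.randn(n, 2) * 0.5 + torch.tensor(center1)
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Task A lives in the upper half-plane (class 1 on the right),
# Task B in the lower half-plane (class 1 on the left), so the
# decision boundaries interfere when learned one after the other.
task_a = make_task([-3.0, 3.0], [3.0, 3.0])
task_b = make_task([3.0, -3.0], [-3.0, -3.0])

model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for name, (x, y) in [("task A", task_a), ("task B", task_b)]:
    for _ in range(200):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    print(f"after training on {name}: "
          f"acc A = {accuracy(model, *task_a):.2f}, "
          f"acc B = {accuracy(model, *task_b):.2f}")
# Typically accuracy on task A drops sharply after training on task B:
# this is the forgetting that continual-learning methods aim to mitigate.
```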