
Continual Learning via Function-Space Variational Inference

Continual learning is the process of developing new abilities while retaining existing ones. Sequential Bayesian inference over predictive functions is a natural framework for doing this, but applying it to deep neural networks is challenging in practice. To address these practical challenges, we formulate continual learning as sequential function-space variational inference. From this formulation, we derive a tractable variational objective that explicitly encourages a neural network to fit data from new tasks while also matching the posterior distributions over functions inferred from previous tasks. Crucially, the objective is expressed purely in terms of predictive functions. As a result, the network's parameters can vary widely during training, allowing easier adaptation to new tasks than is possible with methods that regularize parameters directly. We demonstrate that, across a range of task sequences, neural networks trained via sequential function-space variational inference achieve better predictive accuracy than networks trained with related methods, while depending less on careful coreset selection.
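
As a rough illustrative sketch (not taken from the paper), an objective of the kind described above, which fits the new task's data while matching the previously inferred distribution over functions, could be written as

    \mathcal{L}_t(q) = \mathbb{E}_{q(f)}\big[\log p(\mathcal{D}_t \mid f)\big] - \mathrm{KL}\big(q(f_{X_C}) \,\|\, q_{t-1}(f_{X_C})\big),

where q is the variational distribution over functions for task t, \mathcal{D}_t is the new task's data, q_{t-1} is the function-space approximation inferred from previous tasks, and X_C is a set of context inputs at which the two distributions over function values are compared. The symbols q, \mathcal{D}_t, and X_C are notational assumptions made here for illustration, not notation from the paper itself.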


Tim G. J. Rudner, Freddie Bickford Smith, Qixuan Feng, Yee Whye Teh, Yarin Gal
ICML Workshop on Theory and Foundations of Continual Learning, 2021
ICML Workshop on Subset Selection in Machine Learning: From Theory to Applications, 2021
[Preprint] [BibTeX]
