Continual Auxiliary Task Learning
Abstract
Learning auxiliary tasks, such as multiple predictions about the world, can provide many benefits to reinforcement learning systems. A variety of off-policy learning algorithms have been developed to learn such predictions, but as yet there is little work on how to adapt the behaviour to gather useful data for those off-policy predictions. In this thesis, we investigate a reinforcement learning system designed to learn a collection of auxiliary tasks, with a behaviour policy that learns to take actions to improve the auxiliary predictions. We highlight the inherent non-stationarity in this continual auxiliary task learning problem, for both the prediction learners and the behaviour learner. We develop an algorithm based on successor features that facilitates tracking under non-stationary rewards, and we propose how behaviour can be specialized to learn areas of interest for a prediction learner. We conduct an in-depth study of the resulting multi-prediction learning system.
