An Empirical Study of Exploration Strategies for Model-Free Reinforcement Learning
Abstract
Reinforcement learning is a formalism for learning by trial and error. Unfortunately, trial and error can take a long time to find a solution if the agent does not efficiently explore the behaviours available to it. Moreover, how an agent ought to explore depends on the task that the agent is trying to learn. In this thesis we study how an agent's exploration strategy affects how quickly it learns to solve different problems. In particular, we focus on model-free algorithms that learn value functions. We first examine the space of problems, or environments, that reinforcement learning agents are expected to solve. We identify six properties of environments that can make exploration difficult, and design a prototypical environment expressing each property. We also survey the exploration literature and categorize existing exploration methods by the heuristic that guides their behaviour. Lastly, we conduct an empirical study evaluating the performance of several exploration methods on our prototypical exploration environments. We find that only one method, Linear Upper Confidence Least Squares, consistently performs well in every environment. We also find that methods that add a bonus to their value function tend to explore much more effectively than methods that add a bonus to their rewards. Our investigation of value-based exploration provides a novel, systematic approach to understanding the strengths and weaknesses of exploration algorithms in reinforcement learning.
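The distinction between reward bonuses and value bonuses can be made concrete with a small sketch. The tabular Q-learning update below is a hypothetical illustration, not one of the algorithms evaluated in the thesis: a count-based reward bonus folds optimism permanently into the learned values, whereas a value bonus leaves the value estimate unbiased and injects optimism only into the bootstrap target, where it decays as states are visited. All names and parameters here (q_update, bonus_scale, mode) are illustrative assumptions.

```python
import numpy as np

def q_update(Q, counts, s, a, r, s_next, alpha=0.1, gamma=0.99,
             bonus_scale=1.0, mode="reward_bonus"):
    """One tabular Q-learning update illustrating two bonus styles.

    mode="reward_bonus": a count-based bonus is added to the reward,
        so optimism is baked permanently into the learned values.
    mode="value_bonus": Q stays an unbiased estimate; optimism is
        added only to the bootstrap target and shrinks as the
        successor state-action pairs are visited more often.
    """
    counts[s, a] += 1
    bonus = bonus_scale / np.sqrt(counts[s, a])  # decays with visits

    if mode == "reward_bonus":
        target = (r + bonus) + gamma * np.max(Q[s_next])
    else:  # "value_bonus"
        next_bonus = bonus_scale / np.sqrt(np.maximum(counts[s_next], 1))
        target = r + gamma * np.max(Q[s_next] + next_bonus)

    Q[s, a] += alpha * (target - Q[s, a])
    return Q, counts

# Toy usage: 5 states, 2 actions, a single observed transition.
Q = np.zeros((5, 2))
counts = np.zeros((5, 2))
Q, counts = q_update(Q, counts, s=0, a=1, r=0.0, s_next=2, mode="value_bonus")
```

One intuition this sketch offers for the empirical finding: in the value-bonus case the optimism attached to each successor state shrinks on its own as that state is visited, whereas a reward bonus, once absorbed into the value function, can only be unlearned through further updates.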
