No More Pesky Hyperparameters: Offline Hyperparameter Tuning For Reinforcement Learning
Abstract
The performance of reinforcement learning (RL) agents is sensitive to the choice of hyperparameters. In real-world settings like robotics or industrial control systems, however, testing different hyperparameter configurations directly on the environment can be financially prohibitive, dangerous, or time-consuming. We propose a new approach that tunes hyperparameters from offline logs of data in order to fully specify the hyperparameters for an RL agent that learns online in the real world. The approach is conceptually simple: we first learn a model of the environment from the offline data, which we call a calibration model, and then simulate learning in the calibration model under several hyperparameter configurations. We evaluate the hyperparameters inside the calibration model according to some desirable performance criterion, and thereby identify promising hyperparameters for deployment. We identify several criteria that make this strategy effective, and develop an approach that satisfies these criteria. We empirically investigate the method in a variety of settings to identify when it is effective and when it fails. We demonstrate that tuning hyperparameters offline and deploying an RL agent with these hyperparameters is a more feasible problem to tackle than transferring a fixed policy learned from the offline data.
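The offline tuning loop described above can be sketched in code. This is a minimal illustration under strong simplifying assumptions, not the thesis's actual algorithm: the calibration model here is just a per-(state, action) mean-reward table, the simulated learner is a toy incremental value estimator, and all function names (`fit_calibration_model`, `simulate_learning`, `tune_offline`) and the `step_size` hyperparameter are hypothetical.

```python
import random


def fit_calibration_model(offline_logs):
    # Hypothetical stand-in for a learned environment model: estimate the
    # mean reward observed for each (state, action) pair in the logs.
    rewards = {}
    for (s, a, r) in offline_logs:
        rewards.setdefault((s, a), []).append(r)
    return {k: sum(v) / len(v) for k, v in rewards.items()}


def simulate_learning(model, hyperparams, episodes=50, seed=0):
    # Run a toy learning agent inside the calibration model with the given
    # hyperparameters and return a performance score (average reward).
    rng = random.Random(seed)
    step_size = hyperparams["step_size"]  # hypothetical hyperparameter
    states = sorted({s for (s, _) in model})
    actions = sorted({a for (_, a) in model})
    q, total = {}, 0.0
    for _ in range(episodes):
        s = rng.choice(states)
        # Greedy action under current value estimates.
        a = max(actions, key=lambda act: q.get((s, act), 0.0))
        r = model.get((s, a), 0.0)
        # Incremental update toward the observed (simulated) reward.
        q[(s, a)] = q.get((s, a), 0.0) + step_size * (r - q.get((s, a), 0.0))
        total += r
    return total / episodes


def tune_offline(offline_logs, candidates):
    # Fit the calibration model once, score every candidate hyperparameter
    # configuration in simulation, and return the best-scoring one.
    model = fit_calibration_model(offline_logs)
    scores = [simulate_learning(model, hp) for hp in candidates]
    return candidates[max(range(len(candidates)), key=scores.__getitem__)]
```

The key property this sketch preserves is that no candidate configuration ever touches the real environment: all evaluation happens inside the calibration model, and only the selected configuration is deployed.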
