Improving the reliability of reinforcement learning algorithms through biconjugate Bellman errors
Abstract
In this thesis, we seek to improve the reliability of reinforcement learning algorithms with nonlinear function approximation. Semi-gradient temporal difference (TD) update rules form the basis of most state-of-the-art value function learning systems, despite clear counterexamples demonstrating their potential instability. Gradient TD updates are provably stable under broad conditions, yet significantly underperform semi-gradient approaches on several problems of interest. In this thesis, we present a simple modification to an existing gradient TD method, prove that this method, called TDRC, remains stable, and show empirically that TDRC performs comparably to semi-gradient approaches. Taking advantage of the connection between Fenchel duality and orthogonal projections, we justify the use of gradient TD updates with nonlinear value function approximation and show that these methods retain their improved reliability over semi-gradient approaches in the nonlinear setting. We then extend this method to value-based control with neural networks and empirically validate its performance against semi-gradient methods. Finally, we propose two novel statistically robust losses, the mean Huber projected Bellman error and the mean absolute projected Bellman error, and derive a family of off-policy gradient TD algorithms to optimize these losses for both prediction and control.
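To make the distinction between the two update families concrete, the following is a minimal sketch, in Python with NumPy, contrasting a semi-gradient TD(0) update with a TDRC-style update for linear value estimates. The exact form of the correction and regularization term follows the commonly published descriptions of TDC and TDRC rather than this thesis; the step size alpha, regularization strength beta, and the random-feature usage example are illustrative assumptions only.

import numpy as np

def semi_gradient_td0(w, x, r, x_next, gamma, alpha):
    """One semi-gradient TD(0) update for a linear value estimate v(s) = w @ x."""
    delta = r + gamma * (w @ x_next) - (w @ x)   # TD error
    return w + alpha * delta * x                  # semi-gradient: ignores the gradient through the bootstrap target

def tdrc_update(w, h, x, r, x_next, gamma, alpha, beta=1.0):
    """One TDRC-style update (assumed form): a TDC-like gradient correction
    with an l2 regularizer of strength beta on the secondary weights h."""
    delta = r + gamma * (w @ x_next) - (w @ x)                # TD error
    w_new = w + alpha * (delta * x - gamma * (h @ x) * x_next)  # corrected primary update
    h_new = h + alpha * ((delta - h @ x) * x - beta * h)        # regularized estimate of E[delta | x]
    return w_new, h_new

# Illustrative usage on random features (not a real environment).
rng = np.random.default_rng(0)
d = 8
w, h = np.zeros(d), np.zeros(d)
for _ in range(100):
    x, x_next = rng.normal(size=d), rng.normal(size=d)
    r = rng.normal()
    w, h = tdrc_update(w, h, x, r, x_next, gamma=0.99, alpha=0.1, beta=1.0)

With beta = 0 this sketch reduces to a plain TDC-style update, and dropping the secondary weights entirely recovers semi-gradient TD(0); the single regularization parameter is what allows the method to interpolate between the two regimes.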
