Adaptive Search Control through Meta-Gradient Reinforcement Learning


Institution

http://id.loc.gov/authorities/names/n79058482

Degree Level

Master's

Degree

Master of Science

Department

Department of Computing Science

Supervisor / Co-Supervisor and Their Department(s)

Citation for Previous Publication

Link to Related Item

Abstract

In model-based reinforcement learning, an agent can improve its policy by planning: learning from experience generated by a model. Search control is the problem of determining which starting states should be used to generate this experience. Given a limited planning budget, an agent should select experience efficiently, as some states may provide much greater value to learning than others. Particularly in complex environments with large state spaces, an agent can easily waste time planning in states that have little effect on policy improvement. Search control should therefore select starting states carefully to maximize planning efficiency. In this work, we study effective search control and examine how it can be designed when an agent has access to either a perfect or an imperfect model. We show that in a non-stationary environment, search control can be made more effective by focusing updates on states with high value error, and that when using an imperfect model, it can be effective to avoid learning from states where the model produces erroneous reward predictions. However, in most cases it is not feasible to hand-design a search control strategy for a novel environment. To address this, we introduce a novel algorithm that uses meta-learning to adjust an agent's search control strategy while it learns a policy. We empirically evaluate the ability of this algorithm to improve planning efficiency in a stochastic, non-stationary environment. The results demonstrate that it outperforms several fixed search control approaches and learns behaviours similar to baselines hand-designed to perform well in our test environment.
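To make the idea of search control concrete, the sketch below shows a tabular Dyna-Q agent on a toy chain MDP whose planning start states are sampled in proportion to the magnitude of their most recent TD error, a simple proxy for the value-error prioritization the abstract mentions (in the spirit of prioritized sweeping). This is an illustrative assumption of ours, not the thesis's meta-gradient algorithm; all names, the environment, and the hyperparameters are invented for the example.

```python
import random
from collections import defaultdict

def dyna_q_value_error_search_control(
    n_states=5, n_actions=2, episodes=200, plan_steps=5,
    alpha=0.5, gamma=0.9, epsilon=0.1, seed=0,
):
    """Tabular Dyna-Q on a toy chain MDP. Planning start states are
    drawn with probability proportional to |last TD error| at each
    state -- a crude stand-in for "focus planning where value error
    is high" (not the thesis's meta-learned search control)."""
    rng = random.Random(seed)
    Q = defaultdict(float)         # Q[(s, a)] -> action-value estimate
    model = {}                     # model[(s, a)] -> (reward, next_state)
    priority = defaultdict(float)  # priority[s] -> |last TD error| at s

    def step(s, a):
        # Deterministic chain: action 1 moves right, action 0 resets to 0.
        # Reward 1.0 only on reaching the terminal state n_states - 1.
        s2 = min(s + 1, n_states - 1) if a == 1 else 0
        return (1.0 if s2 == n_states - 1 else 0.0), s2

    def td_update(s, a, r, s2):
        target = r + (0.0 if s2 == n_states - 1 else
                      gamma * max(Q[(s2, b)] for b in range(n_actions)))
        delta = target - Q[(s, a)]
        Q[(s, a)] += alpha * delta
        priority[s] = abs(delta)   # record value-error proxy for this state

    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy with random tie-breaking.
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                best = max(Q[(s, b)] for b in range(n_actions))
                a = rng.choice([b for b in range(n_actions)
                                if Q[(s, b)] == best])
            r, s2 = step(s, a)
            model[(s, a)] = (r, s2)
            td_update(s, a, r, s2)
            # Planning: search control samples start states by priority.
            starts = [sp for (sp, _) in model]
            weights = [priority[sp] + 1e-6 for sp in starts]
            for _ in range(plan_steps):
                sp = rng.choices(starts, weights=weights)[0]
                ap = rng.choice([b for (s3, b) in model if s3 == sp])
                rp, sp2 = model[(sp, ap)]
                td_update(sp, ap, rp, sp2)
            s = s2
    return Q
```

After training, the greedy policy should prefer moving right (action 1) in every non-terminal state; the small additive constant in the sampling weights keeps zero-error states reachable, so planning never collapses onto a fixed set of starts.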

Item Type

http://purl.org/coar/resource_type/c_46ec

Alternative

License

Other License Text / Link

This thesis is made available by the University of Alberta Libraries with permission of the copyright owner solely for non-commercial purposes. This thesis, or any portion thereof, may not otherwise be copied or reproduced without the written consent of the copyright owner, except to the extent permitted by Canadian copyright law.

Language

en

Location

Time Period

Source