On Efficient Planning in Large Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning
Abstract
A practical challenge in reinforcement learning is large action spaces that make planning computationally demanding. For example, in cooperative multi-agent reinforcement learning, a potentially large number of agents jointly optimize a global reward function, which leads to an exponential blow-up of the joint action space as the number of agents increases. Building on recent work in planning with local access to a simulator and linear function approximation, we propose efficient algorithms for this setting whose compute and query complexity are polynomial in all relevant problem parameters. As a minimal requirement, we assume access to an argmax oracle that allows us to efficiently compute the greedy policy for any q-function in the model class. For the special case where the feature decomposition is additive, we further improve the bounds when the dimension of the feature space is large relative to the number of additive terms in the decomposition. To the best of our knowledge, this work provides the first computationally efficient algorithms with theoretical guarantees for the reinforcement learning problem when the action space is large and we have access to a simulator.
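To make the role of the argmax oracle concrete, the sketch below illustrates why an additive feature decomposition makes the greedy policy cheap to compute for a linear q-function: if q_theta(s, a) = theta . sum_i phi_i(s, a_i), the maximization over joint actions splits into one independent argmax per agent. This is a minimal illustration under assumed names (phi, num_agents, num_actions, the toy feature table), not the thesis's algorithm or implementation.

```python
# Minimal sketch, assuming a linear q-function with additive per-agent
# features: q_theta(s, (a_1, ..., a_m)) = sum_i theta . phi(i, s, a_i).
# All names and the toy feature construction are illustrative assumptions.
import numpy as np

def greedy_joint_action(theta, phi, state, num_agents, num_actions):
    """Return a greedy joint action for an additively decomposed
    linear q-function.

    Because the features are additive across agents, the joint argmax
    decomposes into one local argmax per agent, costing O(m * A)
    q-value evaluations instead of A**m over all joint actions.
    """
    joint_action = []
    for i in range(num_agents):
        # Score each local action of agent i under the shared weights.
        scores = [theta @ phi(i, state, a) for a in range(num_actions)]
        joint_action.append(int(np.argmax(scores)))
    return tuple(joint_action)

# Toy usage: m = 3 agents, A = 4 local actions, d = 8 feature dimensions.
rng = np.random.default_rng(0)
d, m, A = 8, 3, 4
feature_table = rng.standard_normal((m, A, d))
phi = lambda i, s, a: feature_table[i, a]  # toy features; state ignored here
theta = rng.standard_normal(d)
print(greedy_joint_action(theta, phi, state=None, num_agents=m, num_actions=A))
```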
