Forward Model Learning with an Entity-Based Representation for games
Abstract
Reinforcement learning (RL) is a powerful approach to sequential decision-making tasks in which an agent learns to maximize its reward. RL methods fall into two categories: model-based approaches, which learn with the help of a model of the environment, and model-free approaches, in which the agent learns solely from its reward signal. However, training an RL agent in a complex environment requires a vast amount of training data, which can be expensive in tasks where the agent is expected to operate close to humans. In this work, we explore building a simulated environment using a new representation of game frames. We use this representation to pre-train an agent and then transfer the learned policy to the real environment. Our model simulates how the real environment changes in response to an agent's action and returns a reward value. Our main contribution is a new entity-based representation for game frames, which we use as the basis for training our virtual environment. Our approach outperforms an existing method for learning a model of an environment while using significantly less training data.
