State Evaluation and Opponent Modelling in Real-Time Strategy Games
Abstract
Designing competitive Artificial Intelligence (AI) systems for Real-Time Strategy (RTS) games often requires a large amount of expert knowledge, typically encoded as hard-coded rules for the AI system to follow. However, aspects of an RTS agent can instead be learned from human replay data. In this thesis, we present two ways in which information relevant to AI system design can be learned from replays, using the game StarCraft for experimentation.

First, we examine the problem of constructing build-order game payoff matrices from replay data by clustering the build-orders observed in real games. Each cluster can be regarded as a strategy, and the resulting matrix is populated with the game results from the replay data. The matrix can be used both to examine the balance of a game and to determine which strategies are effective against which others.

Next, we turn to state evaluation and opponent modelling. We identify features that are important for predicting which player will win a given match, and learn model weights from replays using logistic regression. We also present a metric for estimating player skill, computed by comparing player performance against a battle-simulation baseline, which can be used as an additional feature in the predictive model. Tested on human replay data, our model achieves a prediction accuracy above 70% in later game states. Additionally, our player skill estimation technique is evaluated on data from a StarCraft AI tournament, showing a correlation between skill estimates and tournament standings.
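To make the win-prediction approach concrete, the following is a minimal sketch (not the thesis implementation) of learning logistic regression weights over per-state feature vectors extracted from replays. The feature names, data layout, and hyperparameters are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=1000):
    """Fit weights w for P(player 1 wins | state) = sigmoid([1, x] @ w)
    by batch gradient descent on the mean log-loss."""
    X = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)  # gradient of the mean log-loss
    return w

# Hypothetical per-state features: differences between the two players
# (e.g. army value, worker count, upgrade levels) plus a skill estimate.
X = np.array([[ 3.0,  5.0,  1.0,  0.4],
              [-2.0, -1.0,  0.0, -0.1],
              [ 4.0,  2.0,  2.0,  0.6]])
y = np.array([1.0, 0.0, 1.0])  # label: 1 if player 1 won the match

w = train_logistic(X, y)
print(sigmoid(np.hstack([1.0, X[0]]) @ w))  # estimated win probability
```

In practice, feature vectors would be extracted at regular intervals during each replay, so that prediction accuracy can be reported as a function of game time, as in the results summarized above.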
