Joint Level Generation and Translation Using Gameplay Videos
Abstract
Procedural Content Generation via Machine Learning (PCGML) faces a significant hurdle that sets it apart from other ML problems such as image or text generation: limited annotated data. For example, many existing methods for level generation via machine learning require a secondary representation beyond level images. However, current methods for obtaining such representations are laborious and time-consuming, which compounds the limited data problem. In this work, we address the limited game level data problem by using gameplay videos of human-annotated games to train a novel multi-tail framework that performs level translation and generation simultaneously. The translation tail of our framework converts gameplay video frames into an equivalent secondary representation, while its generation tail produces novel level segments. Evaluation results and comparisons between our framework and baselines suggest that combining the level generation and translation tasks improves performance on both. Additionally, we conducted experiments to evaluate the generalizability of our model across different scenarios. Our findings represent a possible solution to limited annotated level data, and we demonstrate the potential for future iterations of our model to generalize to unseen games.
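The multi-tail idea described above can be sketched as a shared encoder feeding two task-specific heads: one mapping video-frame features to a secondary (tile-like) representation, the other decoding novel level segments. The following is a minimal illustrative sketch only; all function names, shapes, and the toy linear encoder are hypothetical placeholders and do not reflect the thesis's actual architecture.

```python
# Hypothetical sketch of a shared-trunk, multi-tail model: one encoder
# feeds a translation tail and a generation tail. Toy arithmetic stands
# in for learned layers purely to show the data flow between the tails.

from dataclasses import dataclass
from typing import List

Vector = List[float]

def shared_encoder(frame_features: Vector) -> Vector:
    """Shared trunk (placeholder): a toy linear map standing in for learned layers."""
    return [2.0 * x + 1.0 for x in frame_features]

def translation_tail(latent: Vector) -> List[str]:
    """Translation tail: maps the latent to a secondary, tile-like representation."""
    return ["solid" if x > 2.0 else "empty" for x in latent]

def generation_tail(latent: Vector) -> Vector:
    """Generation tail: decodes the latent into a (toy) novel level segment."""
    return [round(x / 2.0, 2) for x in latent]

@dataclass
class MultiTailOutput:
    tiles: List[str]    # secondary representation from the translation tail
    segment: Vector     # level segment from the generation tail

def forward(frame_features: Vector) -> MultiTailOutput:
    latent = shared_encoder(frame_features)  # computed once, shared by both tails
    return MultiTailOutput(
        tiles=translation_tail(latent),
        segment=generation_tail(latent),
    )

out = forward([0.0, 1.0])
```

Because both tails read the same latent, gradients from the translation loss and the generation loss would update the shared trunk jointly in a trained version, which is one plausible mechanism for the mutual improvement the abstract reports.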
