Utilizing Various Neural Network Architectures To Play A Game Developed For Human Players

Abstract

Neural Networks have received an explosive amount of attention and interest in recent years. Although Neural Network algorithms have existed for many decades, it was not until recent advances in computer hardware that they saw widespread use. This is in no small part due to the success these algorithms have had in tasks such as image classification, voice recognition, game playing, and many other applications. Thanks to recent strides in hardware development, most importantly the advancement of Graphics Processing Units (GPUs) and the capabilities of modern GPU computing, Neural Networks are now capable of solving tasks at a much higher success rate than other machine learning algorithms [3]. With the success of Artificial Intelligence agents such as DeepMind's AlphaGo and its stronger descendant AlphaGo Zero, the capabilities of Deep Neural Networks (DNNs) have received unprecedented mainstream coverage, showing people across the world that these algorithms are capable of a superhuman level of play. After learning all of this, I wanted to find a way to apply Artificial Intelligence to a game that is quite difficult for human players: Cuphead. Cuphead is a classically styled game with terrific level design and music. It is a 2D Run N' Gun shooter that also features continuous boss fights. The unique nature of the game provides clearly defined win and loss conditions for each level, making it an ideal environment for testing various Artificially Intelligent agents. For my thesis, I used a deep Convolutional Neural Network (CNN) to develop an agent capable of playing the first Run N' Gun level of the game. I implemented this neural network using Keras and supervised learning. In this thesis, I outline the exact process I used to train this agent. I also explain the research I have done into how a Reinforcement Learning agent would be implemented for this game, since the supervised learning agent is currently only capable of playing the level it was trained on, whereas I believe a sufficiently developed Reinforcement Learning agent could learn to play almost any level in the game.
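
As a rough illustration of the supervised learning setup described above, the sketch below shows one plausible form a Keras CNN could take for mapping captured screen frames to controller inputs recorded from human play. The frame resolution, layer sizes, button set, and decision threshold are assumptions for demonstration only and are not taken from the thesis itself.

    # Minimal sketch, assuming frames are downscaled grayscale screenshots and
    # labels are 0/1 vectors of button states recorded while a human plays.
    # NUM_BUTTONS and the 120x160 input size are hypothetical choices.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    NUM_BUTTONS = 8  # e.g. left, right, jump, shoot, dash (illustrative set)

    model = keras.Sequential([
        layers.Input(shape=(120, 160, 1)),               # grayscale frame
        layers.Conv2D(32, 8, strides=4, activation="relu"),
        layers.Conv2D(64, 4, strides=2, activation="relu"),
        layers.Conv2D(64, 3, strides=1, activation="relu"),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        # Sigmoid outputs: several buttons can be held at once (multi-label).
        layers.Dense(NUM_BUTTONS, activation="sigmoid"),
    ])

    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["binary_accuracy"])

    # frames: (N, 120, 160, 1) screen captures; presses: (N, NUM_BUTTONS) labels.
    # Placeholder arrays stand in for the recorded gameplay dataset.
    frames = np.zeros((32, 120, 160, 1), dtype="float32")
    presses = np.zeros((32, NUM_BUTTONS), dtype="float32")
    model.fit(frames, presses, epochs=10, batch_size=32)

    # At play time, press any button whose predicted probability exceeds 0.5.
    action = (model.predict(frames[:1]) > 0.5).astype(int)

In a setup like this, the trained model is run in a loop that captures the current frame, predicts button probabilities, and sends the thresholded key presses to the game; the exact capture and input-injection mechanism is outside the scope of this sketch.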
