AI Researchers, Video Games Are Your Friends!
If you are an artificial intelligence researcher, you should look to video
games as ideal testbeds for the work you do. If you are a video game developer,
you should look to AI for the technology that makes completely new types of
games possible. This chapter lays out the case for both of these propositions.
It asks the question "what can video games do for AI", and discusses how in
particular general video game playing is the ideal testbed for artificial
general intelligence research. It then asks the question "what can AI do for
video games", and lays out a vision for what video games might look like if we
had significantly more advanced AI at our disposal. The chapter is based on my
keynote at IJCCI 2015, and is written in an attempt to be accessible to a broad
audience.
Comment: in Studies in Computational Intelligence, Volume 669, 2017, Springer.
Evaluating Go Game Records for Prediction of Player Attributes
We propose a way of extracting and aggregating per-move evaluations from sets
of Go game records. The evaluations capture different aspects of the games such
as played patterns or statistics of sente/gote sequences. Using machine learning
algorithms, the evaluations can be utilized to predict different relevant
target variables. We apply this methodology to predict the strength and playing
style of the player (e.g. territoriality or aggressiveness) with good accuracy.
We propose a number of possible applications including aiding in Go study,
seeding real-world ranks of internet players, or tuning of Go-playing programs.
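The aggregate-then-predict pipeline described above can be sketched as follows. This is a toy illustration only: the per-move scores, the rank labels, and the nearest-centroid classifier are all invented stand-ins for the paper's actual features and learning algorithms.

```python
from statistics import mean, stdev

def aggregate_game(move_evals):
    """Collapse a game's per-move evaluations (hypothetical scores in [0, 1])
    into a fixed-length feature vector: mean and standard deviation."""
    return (mean(move_evals), stdev(move_evals))

def nearest_centroid(features, centroids):
    """Predict the label whose centroid is closest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(features, centroids[label]))

# Hypothetical training data: per-move evaluations for games by known ranks.
games = {
    "dan": [[0.8, 0.9, 0.7, 0.85], [0.75, 0.8, 0.9, 0.8]],
    "kyu": [[0.3, 0.5, 0.2, 0.4], [0.45, 0.3, 0.35, 0.5]],
}
centroids = {
    rank: tuple(mean(col) for col in zip(*map(aggregate_game, gs)))
    for rank, gs in games.items()
}

# A new game with consistently strong moves lands near the "dan" centroid.
prediction = nearest_centroid(aggregate_game([0.7, 0.85, 0.8, 0.9]), centroids)
print(prediction)  # prints: dan
```

A real system would use richer per-move features and a stronger learner, but the shape is the same: per-move evaluations are aggregated per game, then a supervised model maps the aggregate to strength or style.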
Evolutionary Reinforcement Learning: A Survey
Reinforcement learning (RL) is a machine learning approach that trains agents
to maximize cumulative rewards through interactions with environments. The
integration of RL with deep learning has recently resulted in impressive
achievements in a wide range of challenging tasks, including board games,
arcade games, and robot control. Despite these successes, there remain several
crucial challenges, including brittle convergence properties caused by
sensitive hyperparameters, difficulties in temporal credit assignment with long
time horizons and sparse rewards, a lack of diverse exploration, especially in
continuous search space scenarios, difficulties in credit assignment in
multi-agent reinforcement learning, and conflicting objectives for rewards.
Evolutionary computation (EC), which maintains a population of learning agents,
has demonstrated promising performance in addressing these limitations. This
article presents a comprehensive survey of state-of-the-art methods for
integrating EC into RL, referred to as evolutionary reinforcement learning
(EvoRL). We categorize EvoRL methods according to key research fields in RL,
including hyperparameter optimization, policy search, exploration, reward
shaping, meta-RL, and multi-objective RL. We then discuss future research
directions in terms of efficient methods, benchmarks, and scalable platforms.
This survey serves as a resource for researchers and practitioners interested
in the field of EvoRL, highlighting the important challenges and opportunities
for future research. With the help of this survey, researchers and
practitioners can develop more efficient methods and tailored benchmarks for
EvoRL, further advancing this promising cross-disciplinary research field.
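The population-based idea at the core of EvoRL can be illustrated with a minimal sketch of evolutionary policy search. Everything below is invented for illustration: a toy one-dimensional environment, a linear policy parameterized by a single gain, fitness measured as cumulative reward, truncation selection, and Gaussian mutation.

```python
import random

random.seed(0)

def fitness(w):
    """Cumulative reward of a linear policy a = w * s on a toy environment:
    the agent should drive the state s toward 0; reward is -|s| per step."""
    s, total = 1.0, 0.0
    for _ in range(20):
        s = s - w * s          # apply action a = w * s
        total += -abs(s)
    return total

# Evolutionary loop: keep the fitter half, refill with mutated copies.
pop = [random.uniform(-1.0, 2.0) for _ in range(20)]
for gen in range(30):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [w + random.gauss(0, 0.1) for w in elite]

best = max(pop, key=fitness)  # should converge near w = 1, the optimal gain
```

The population supplies the diverse exploration and gradient-free credit assignment that the survey credits to EC: no backpropagation through time is needed, only the scalar episode return per candidate.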
Federal Copyright Law in the Computer Era: Protection for the Authors of Video Games
This Comment analyzes both the manner and scope of copyright protection currently afforded computer video games. It then discusses the means available under federal copyright laws to protect the underlying computer program and concludes that the game should be regarded as a unit. The effect of treating the game as a unit of audiovisual and computer elements—as opposed to considering only the audiovisual display—will be to raise certain appropriations to the level of copyright infringement.
From Spacewar! to Twitch.tv: The Influence of Competition in Video Games and the Rise of eSports
Since their inception in the 1950s, video games have come a long way; with that advancement came more popularity, a growing demand, and an evolving culture. The first person shooter (FPS) video game genre and the competitive scene that was born out of it is an ideal case study to analyze this change over time. To understand how video games became so popular, one must examine their history: specifically, their development, the impacts the games have had on society, and their economic trajectories. Similar to traditional professional sports, video games experienced a cultural shift around their lucrative profit margins and the unfolding professionalization of gamers as entertainers/athletes. Professional gaming started in the 1980s, when 10,000 participants competed in the Space Invaders Championship. Since then, video games have evolved from a casual pastime into a career for some gamers. The resulting professional gaming community has attracted the attention of wealthy businessmen and a disproportionate number of iconic sports names, including the New York Yankees, the Golden State Warriors, Magic Johnson, and Robert Kraft, all of whom have bought into eSports.
All of this is possible due to advancements in technology and significantly improved graphics, which allow game developers to increase the amount of content and the quality of their games. Without continual advancement in these areas, gamers start to lose interest, which stalls both economic and societal growth. For example, games released in the early 2000s such as Counter Strike, World of Warcraft, and Halo utilized online features to let players compete with whomever they want from the comfort of home, making it easier than ever for gamers to hone their skills against others. Today, constant updates and new titles are the norm for successful video game companies, marking this particular industry and its accompanying culture as a microcosm of global society at large.
Neuroevolution in Games: State of the Art and Open Challenges
This paper surveys research on applying neuroevolution (NE) to games. In
neuroevolution, artificial neural networks are trained through evolutionary
algorithms, taking inspiration from the way biological brains evolved. We
analyse the application of NE in games along five different axes, which are the
role NE is chosen to play in a game, the different types of neural networks
used, the way these networks are evolved, how the fitness is determined and
what type of input the network receives. The article also highlights important
open research challenges in the field.
Comment: Added more references, corrected typos, and added an overview table (Table 1).
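The basic neuroevolution loop the survey covers—evolving the weights of a fixed-topology network against a fitness function—can be sketched in a few lines. This is a toy stand-in: the 2-2-1 network, the XOR task (used here in place of a game score), and the mutation scheme are all illustrative assumptions, not any specific method from the survey.

```python
import math
import random

random.seed(1)

def forward(w, x):
    """Tiny fixed-topology 2-2-1 network; w is a flat list of 9 weights
    (two hidden units with biases, one output unit with bias)."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    """In a game this would be the score earned by playing;
    here, negative squared error on XOR serves as a stand-in."""
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

# Evolve weight vectors: keep the top third, refill with mutated copies.
pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(30)]
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [[g + random.gauss(0, 0.2) for g in w]
                   for w in elite for _ in (0, 1)]

best = max(pop, key=fitness)
```

Along the survey's axes, this sketch fixes most choices to their simplest setting: NE plays the role of the whole controller, the network is a small feedforward net, only weights (not topology) are evolved, and fitness is a direct task score.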
Beyond Monte Carlo Tree Search: Playing Go with Deep Alternative Neural Network and Long-Term Evaluation
Monte Carlo tree search (MCTS) is extremely popular in computer Go, where each action is determined by enormous numbers of simulations in a broad and deep search tree. However, human experts select most actions by pattern analysis and careful evaluation rather than brute-force search over millions of future interactions. In this paper, we propose a computer Go system that follows experts' way of thinking and playing. Our system consists of two parts. The first part is a novel deep
playing. Our system consists of two parts. The first part is a novel deep
alternative neural network (DANN) used to generate candidates of next move.
Compared with the existing deep convolutional neural network (DCNN), DANN inserts a recurrent layer after each convolutional layer and stacks them in an alternative manner. We show that such a setting can preserve more context of local features and their evolution, which is beneficial for move prediction. The
second part is a long-term evaluation (LTE) module used to provide a reliable
evaluation of candidates rather than a single probability from move predictor.
This is consistent with human experts' nature of playing, since they can foresee tens of steps ahead to give an accurate estimation of candidates. In our system, for
each candidate, LTE calculates a cumulative reward after several future
interactions when local variations are settled. Combining criteria from the two
parts, our system determines the optimal choice of next move. For more
comprehensive experiments, we introduce a new professional Go dataset (PGD),
consisting of 253,233 professional records. Experiments on the GoGoD and PGD
datasets show that DANN can substantially improve move-prediction performance over a pure DCNN. When combined with LTE, our system outperforms most relevant approaches and open engines based on MCTS.
Comment: AAAI 201
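The LTE step described above—scoring each candidate move by the cumulative reward accumulated over future interactions, rather than by a single move-prediction probability—can be sketched as follows. Everything here is a hypothetical stand-in: the candidate names, the hidden "true values", and the noisy rollout rewards replace the paper's actual network and Go positions.

```python
import random

random.seed(3)

def rollout_reward(true_value, steps=5):
    """Stand-in for playing several moves past a candidate until local
    variations settle: accumulate noisy rewards centered on the move's
    hidden long-term value."""
    return sum(random.gauss(true_value, 0.5) for _ in range(steps))

def long_term_evaluate(candidates, n_rollouts=50):
    """Average cumulative reward over several rollouts per candidate,
    then pick the move with the best long-term estimate."""
    def score(move):
        total = sum(rollout_reward(TRUE_VALUES[move]) for _ in range(n_rollouts))
        return total / n_rollouts
    return max(candidates, key=score)

# Hypothetical candidates from a move predictor, with hidden true values;
# a single-probability predictor might rank them differently.
TRUE_VALUES = {"A": 0.2, "B": 0.8, "C": 0.5}
best_move = long_term_evaluate(["A", "B", "C"])
print(best_move)  # prints: B
```

The point of the sketch is the division of labor the abstract describes: one component proposes a short list of candidates, and a separate long-term evaluator ranks them by accumulated future reward before the final choice is made.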