Emulating Human Play in a Leading Mobile Card Game
Monte Carlo Tree Search (MCTS) has become a popular solution for game AI, capable of creating strong game-playing opponents. However, the emergent playstyle of agents using MCTS is not necessarily human-like, believable or enjoyable. AI Factory Spades, currently the top-rated Spades game in the Google Play store, uses a variant of MCTS to control AI allies and opponents. In collaboration with the developers, we showed in a previous study that the playstyle of human players significantly differed from that of the AI players [1]. This article presents a method for player modelling using gameplay data and neural networks that does not require domain knowledge, and a method of biasing MCTS with such a player model to create Spades-playing agents that emulate human play whilst maintaining strong, competitive performance. The methods of player modelling and biasing MCTS presented in this study are applied to the commercial codebase of AI Factory Spades, and are transferable to MCTS implementations for discrete-action games where relevant gameplay data is available.
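A player-model bias of the kind described above is often realised as an extra prior term in the MCTS selection formula. The sketch below is a hypothetical illustration of that idea, not the authors' implementation: `Node`, `policy` (a mapping from actions to human-likeness probabilities produced by a learned player model) and `bias_weight` are all assumed names.

```python
import math

class Node:
    """Minimal MCTS node: action taken to reach it, visit count, reward sum."""
    def __init__(self, action=None, parent=None):
        self.action = action
        self.parent = parent
        self.children = []
        self.visits = 0
        self.total_reward = 0.0

def biased_uct(parent, child, policy, c=1.4, bias_weight=1.0):
    """UCT score plus a player-model prior that decays with child visits."""
    exploit = child.total_reward / (child.visits + 1e-9)
    explore = c * math.sqrt(math.log(parent.visits + 1) / (child.visits + 1e-9))
    # Prior from the player model; its influence shrinks as statistics accumulate.
    prior = bias_weight * policy.get(child.action, 0.0) / (1 + child.visits)
    return exploit + explore + prior

def select(parent, policy):
    """Descend to the child with the highest biased UCT score."""
    return max(parent.children, key=lambda ch: biased_uct(parent, ch, policy))
```

With equal visit counts and rewards, the prior term steers the search toward the action the player model rates as most human-like, while its decay with visits lets simulation statistics dominate in the limit, preserving playing strength.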
A^2-Net: Molecular Structure Estimation from Cryo-EM Density Volumes
Constructing molecular structural models from Cryo-Electron Microscopy
(Cryo-EM) density volumes is the critical last step of structure determination
by Cryo-EM technologies. Methods have evolved from manual construction by
structural biologists to automated approaches that perform 6D translation-rotation searching, which is
extremely compute-intensive. In this paper, we propose a learning-based method
and formulate this problem as a vision-inspired 3D detection and pose
estimation task. We develop a deep learning framework for amino acid
determination in a 3D Cryo-EM density volume. We also design a sequence-guided
Monte Carlo Tree Search (MCTS) to thread over the candidate amino acids to form
the molecular structure. This framework achieves 91% coverage on our newly
proposed dataset and takes only a few minutes for a typical structure with a
thousand amino acids. Our method is hundreds of times faster and several times
more accurate than existing automated solutions, without any human intervention.
Comment: 8 pages, 5 figures, 4 tables
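Sequence-guided threading of detected amino acids can be pictured with a much-simplified greedy sketch: candidates (3D position, predicted residue type, confidence) are chained so that the chosen types follow the known amino-acid sequence and consecutive residues stay spatially close. The paper performs this threading with MCTS; the greedy variant below only illustrates the scoring idea, and all field names and the `max_gap` threshold are assumptions.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def thread(sequence, candidates, max_gap=4.0):
    """Greedily pick one detected candidate per sequence position,
    preferring high-confidence detections of the right residue type
    that lie near the previously placed residue."""
    chain, prev = [], None
    for residue in sequence:
        options = [c for c in candidates
                   if c["type"] == residue and c not in chain]
        if prev is not None:
            options = [c for c in options
                       if dist(c["pos"], prev["pos"]) <= max_gap]
        if not options:
            chain.append(None)  # position left unresolved
            continue
        best = max(options, key=lambda c: c["confidence"])
        chain.append(best)
        prev = best
    return chain
```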
Survey of Artificial Intelligence for Card Games and Its Application to the Swiss Game Jass
In the last decades we have witnessed the success of applications of
Artificial Intelligence to playing games. In this work we address the
challenging field of games with hidden information and card games in
particular. Jass is a very popular card game in Switzerland and is closely
connected with Swiss culture. To the best of our knowledge, Artificial
Intelligence agents do not yet outperform top human players at the game of
Jass. Our contribution to the community is two-fold. First, we provide
an overview of the current state-of-the-art of Artificial Intelligence methods
for card games in general. Second, we discuss their application to the use-case
of the Swiss card game Jass. This paper aims to be an entry point for both
seasoned researchers and new practitioners who want to join in the Jass
challenge.
Analysis of gameplay strategies in Hearthstone: a data science approach
In recent years, games have been a popular test bed for AI research, and the presence of Collectible Card Games (CCGs) in that space is still increasing. One such CCG used for both competitive/casual play and AI research is Hearthstone, a two-player adversarial game where players seek to implement one of several gameplay strategies to defeat their opponent by reducing their Health points to zero. Although some open-source simulators exist, some of their methodologies for simulated agents create opponents with a relatively low skill level. Using evolutionary algorithms, this thesis seeks to evolve agents with a higher skill level than those implemented in one such simulator, SabberStone. New benchmarks are proposed using supervised learning techniques to predict gameplay strategies from game data, and using unsupervised learning techniques to discover and visualize patterns that may be used in player modeling to differentiate gameplay strategies.
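As a rough illustration of the evolutionary idea in this abstract, one can evolve a vector of move-scoring weights by truncation selection with elitism and Gaussian mutation, ranking individuals by a fitness function such as win rate in simulated games. The weight encoding and fitness below are assumptions for illustration, not the thesis's setup; a toy fitness is used in the test so the sketch stays self-contained.

```python
import random

def evolve(fitness, dim=8, pop_size=20, generations=30, sigma=0.1, seed=0):
    """Evolve a weight vector maximizing `fitness` via truncation
    selection (top half survives) and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # best individuals first
        parents = pop[: pop_size // 2]            # truncation selection
        children = [
            [w + rng.gauss(0, sigma) for w in rng.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children                  # elitism: parents survive
    return max(pop, key=fitness)
```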
Building Machines That Learn and Think Like People
Recent progress in artificial intelligence (AI) has renewed interest in
building systems that learn and think like people. Many advances have come from
using deep neural networks trained end-to-end in tasks such as object
recognition, video games, and board games, achieving performance that equals or
even beats humans in some respects. Despite their biological inspiration and
performance achievements, these systems differ from human intelligence in
crucial ways. We review progress in cognitive science suggesting that truly
human-like learning and thinking machines will have to reach beyond current
engineering trends in both what they learn, and how they learn it.
Specifically, we argue that these machines should (a) build causal models of
the world that support explanation and understanding, rather than merely
solving pattern recognition problems; (b) ground learning in intuitive theories
of physics and psychology, to support and enrich the knowledge that is learned;
and (c) harness compositionality and learning-to-learn to rapidly acquire and
generalize knowledge to new tasks and situations. We suggest concrete
challenges and promising routes towards these goals that can combine the
strengths of recent neural network advances with more structured cognitive
models.
Comment: In press at Behavioral and Brain Sciences. Open call for commentary
proposals (until Nov. 22, 2016).
https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/information/calls-for-commentary/open-calls-for-commentar
Machine learning methods applied to the dots and boxes board game
Dots and Boxes is a classical board game in which players connect four nearest dots
in a grid to create the maximum possible number of boxes. This work investigates
deep learning and reinforcement learning techniques that make it possible for a
computer program to learn how to play the game without human interaction, and
applies them to the Dots and Boxes board game, taking the approach behind
DeepMind's AlphaZero as the approach to follow. AlphaZero combines a Convolutional
Neural Network with the Monte Carlo Tree Search algorithm to achieve superhuman
performance, starting from no a priori knowledge, in games such as Chess, Go, and
Shogi.
The results obtained allow us to assess the adequacy of this approach to the game
Dots and Boxes.
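The AlphaZero combination mentioned above can be sketched in a single simulation step: the network maps a state to move priors and a value estimate, the priors guide PUCT selection, and the value is backed up the tree in place of a random rollout. The code below is an illustrative sketch under assumed interfaces (`net`, `legal_moves`, `play`), not the thesis's implementation.

```python
import math

class TreeNode:
    """Search node holding a game state, a network prior, and statistics."""
    def __init__(self, state, prior):
        self.state, self.prior = state, prior
        self.children = {}               # move -> TreeNode
        self.visits, self.value_sum = 0, 0.0

def simulate(node, net, legal_moves, play, c_puct=1.5):
    """One MCTS simulation: select by PUCT, expand leaves with network
    priors, and back up the network's value estimate, negated at each
    ply for the alternating two-player perspective."""
    if not node.children:                # leaf: expand and evaluate with the net
        priors, value = net(node.state)
        for move in legal_moves(node.state):
            child_state = play(node.state, move)
            node.children[move] = TreeNode(child_state, priors.get(move, 0.0))
    else:                                # interior: follow the best PUCT child
        def score(child):
            q = child.value_sum / child.visits if child.visits else 0.0
            u = c_puct * child.prior * math.sqrt(node.visits + 1) / (1 + child.visits)
            return q + u
        best = max(node.children.values(), key=score)
        value = -simulate(best, net, legal_moves, play, c_puct)
    node.visits += 1
    node.value_sum += value
    return value
```

Repeated simulations from the root accumulate visit counts, which then serve both as the move-selection statistic and as training targets for the network's policy head in the self-play loop.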