General Board Game Concepts
Many games share common ideas or aspects, such as their rules, controls, or
playing area. However, in the context of General Game Playing (GGP) for board
games, this area remains under-explored. We propose to formalise the notion of
"game concept", inspired by terms commonly used by game players and designers.
Through the Ludii General Game System, we describe concepts at several levels
of abstraction, such as the game itself, the moves played, or the states
reached. This new GGP feature, combined with the ludeme representation of
games, opens many new lines of research. The creation of a hyper-agent
selector, the transfer of AI learning between games, and the explanation of AI
techniques in game terms can all be facilitated by the use of game concepts.
Other applications that can benefit from game concepts are also discussed,
such as the generation of plausible reconstructed rules for incomplete ancient
games, or the implementation of a board game recommender system.
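To make the recommender-system application concrete, here is a minimal sketch of how boolean game concepts could serve as feature vectors for comparing games. All names here (the concept list, the example games and their assigned concepts) are illustrative assumptions, not the Ludii API or its actual concept taxonomy.

```python
# Hypothetical sketch (not the Ludii API): boolean "game concepts" as
# feature vectors, compared by Jaccard overlap as a simple recommender might.

CONCEPTS = ["capture", "race", "line", "stochastic", "hidden-info"]

def concept_vector(active):
    """0/1 vector over CONCEPTS for a game's active concepts."""
    return [1 if c in active else 0 for c in CONCEPTS]

def jaccard(a, b):
    """Jaccard similarity between two binary concept vectors."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union if union else 0.0

# Toy concept assignments for illustration only.
chess = concept_vector({"capture", "line"})
backgammon = concept_vector({"capture", "race", "stochastic"})
ludo = concept_vector({"race", "stochastic"})

# By concept overlap, Backgammon is closer to Ludo than to Chess.
print(jaccard(backgammon, ludo) > jaccard(backgammon, chess))  # True
```

The same vectors could feed the other applications named above, e.g. selecting among agents by the concepts a game exhibits.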
Performance vs. competence in human–machine comparisons
Does the human mind resemble the machines that can behave like it? Biologically inspired machine-learning systems approach “human-level” accuracy in an astounding variety of domains, and even predict human brain activity—raising the exciting possibility that such systems represent the world as we do. However, even seemingly intelligent machines fail in strange and “unhumanlike” ways, threatening their status as models of our minds. How can we know when human–machine behavioral differences reflect deep disparities in their underlying capacities, and when such failures are only superficial or peripheral? This article draws on a foundational insight from cognitive science—the distinction between performance and competence—to encourage “species-fair” comparisons between humans and machines. The performance/competence distinction urges us to consider whether the failure of a system to behave as ideally hypothesized, or the failure of one creature to behave like another, arises not because the system lacks the relevant knowledge or internal capacities (“competence”), but because of superficial constraints on demonstrating that knowledge (“performance”). I argue that this distinction has been neglected by research comparing human and machine behavior, and that it should be essential to any such comparison. Focusing on the domain of image classification, I identify three factors contributing to the species-fairness of human–machine comparisons, extracted from recent work that equates such constraints. Species-fair comparisons level the playing field between natural and artificial intelligence, so that we can separate the more superficial differences from those that may be deep and enduring.