Player-AI Interaction: What Neural Network Games Reveal About AI as Play
The advent of artificial intelligence (AI) and machine learning (ML) brings
human-AI interaction to the forefront of HCI research. This paper argues that
games are an ideal domain for studying and experimenting with how humans
interact with AI. Through a systematic survey of neural network games (n = 38),
we identified the dominant interaction metaphors and AI interaction patterns in
these games. In addition, we applied existing human-AI interaction guidelines
to further shed light on player-AI interaction in the context of AI-infused
systems. Our core finding is that AI as play can expand current notions of
human-AI interaction, which are predominantly productivity-based. In
particular, our work suggests that game and UX designers should consider flow
to structure the learning curve of human-AI interaction, incorporate
discovery-based learning that lets players experiment with the AI and observe
the consequences, and offer users an invitation to play as a way to explore
new forms of human-AI interaction.
Interactive Evolution and Exploration within Latent Level-Design Space of Generative Adversarial Networks
Generative Adversarial Networks (GANs) are an emerging form of indirect
encoding. The GAN is trained to induce a latent space on training data, and a
real-valued evolutionary algorithm can search that latent space. Such Latent
Variable Evolution (LVE) has recently been applied to game levels. However, it
is hard for objective scores to capture level features that are appealing to
players. Therefore, this paper introduces a tool for interactive LVE of
tile-based levels for games. The tool also allows for direct exploration of the
latent dimensions, and allows users to play discovered levels. The tool works
for a variety of GAN models trained for both Super Mario Bros. and The Legend
of Zelda, and is easily generalizable to other games. A user study shows that
both the evolution and latent space exploration features are appreciated, with
a slight preference for direct exploration, but combining these features allows
users to discover even better levels. User feedback also indicates how this
system could eventually grow into a commercial design tool, with the addition
of a few enhancements.
Comment: GECCO 202
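The core loop of latent variable evolution can be sketched as a simple real-valued evolution strategy searching a latent space. Everything named below is a stand-in, not the paper's implementation: `decode` substitutes for a trained GAN generator mapping latent vectors to tile-based levels, and `fitness` substitutes for the interactive, user-driven selection the paper argues for.

```python
import random

LATENT_DIM = 32  # hypothetical latent-vector size

def decode(z):
    """Stand-in for a GAN generator: map a latent vector to 'tile ids'."""
    return [round(x) % 4 for x in z]

def fitness(level):
    """Stand-in objective; interactive LVE replaces this with user choice."""
    return -sum(abs(t - 2) for t in level)

def evolve(generations=50, pop_size=8, sigma=0.3, seed=0):
    """(1 + lambda) evolution strategy over the latent space."""
    rng = random.Random(seed)
    best = [rng.gauss(0, 1) for _ in range(LATENT_DIM)]
    best_fit = fitness(decode(best))
    for _ in range(generations):
        # Mutate the incumbent; keep any child that scores better.
        for _ in range(pop_size):
            child = [x + rng.gauss(0, sigma) for x in best]
            f = fitness(decode(child))
            if f > best_fit:
                best, best_fit = child, f
    return best, best_fit
```

Direct latent-space exploration, as in the tool, would instead expose each of the `LATENT_DIM` coordinates as a slider and re-run `decode` on every change.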
iNNk: A Multi-Player Game to Deceive a Neural Network
This paper presents iNNk, a multiplayer drawing game where human players team
up against an NN. The players need to successfully communicate a secret code
word to each other through drawings, without being deciphered by the NN. With
this game, we aim to foster a playful environment where players can, in a small
way, go from passive consumers of NN applications to creative thinkers and
critical challengers.
Möglichkeitsdenken. Utopie und Dystopie in der Gegenwart (Thinking in Possibilities: Utopia and Dystopia in the Present)
Utopias think possibilities of the future. With the onset of historical modernity, in which expectations of the future exceed the experience of the past, each successive present produces designs that can be called utopias. The temporalization of experience makes projections into the future possible (Reinhart Koselleck). These are never unambiguous: they deliver equivocal images of desire and dread, often in peculiar entanglements.
Insight into this dialectic grows with the degree of self-referentiality of such designs for the future; utopia and dystopia condition one another. Today we live with extraordinarily uncertain prospects. Have utopias survived only in dystopias? After the suspicion against utopianism subsided at the beginning of the 1990s, the task today is to take stock of potentials for the future and to discuss forms of thinking the hypothetically possible. Does the tradition of utopian thought offer points of connection for current descriptions of the future, whether positively or negatively connoted? Images of desire or of warning are still indebted to that utopian impulse which seeks to direct the gaze from the present toward the future. The question of the future of utopian thought thus poses, within the possibilities of temporal, visionary, and subjunctive thinking, the question of the place of the social and of society today, and with it the question of the binding force of tradition, which is also to say: of traditions
of the utopian.
Approximating Context-Sensitive Program Information
Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. We introduce a finite k-approximation and a loop approximation that limit the size of the context-sensitive information. These approximated χ-terms form a lattice of finite depth, thus guaranteeing that every data-flow analysis reaches a fixed point.
A Framework for Memory Efficient Context-Sensitive Program Analysis
Static program analysis is in general more precise if it is sensitive to execution contexts (execution paths). But then it is also more expensive in terms of memory consumption. For languages with conditions and iterations, the number of contexts grows exponentially with the program size. This problem is not just a theoretical issue: several papers evaluating inter-procedural context-sensitive data-flow analysis report severe memory problems, and the path-explosion problem is a major issue in program verification and model checking. In this paper we propose χ-terms as a means to capture and manipulate context-sensitive program information in a data-flow analysis. χ-terms are implemented as directed acyclic graphs without any redundant subgraphs. We introduce the k-approximation and the l-loop-approximation that limit the size of the context-sensitive information at the cost of analysis precision. We prove that every context-insensitive data-flow analysis has a corresponding (k, l)-approximated context-sensitive analysis, and that these analyses are sound and guaranteed to reach a fixed point. We also present detailed algorithms outlining a compact, redundancy-free, DAG-based implementation of χ-terms.
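The χ-term machinery itself is DAG-based and beyond a short sketch, but the underlying idea of a k-approximation — bounding how much calling context an analysis retains so the set of contexts stays finite — can be illustrated on plain call strings. All names below are hypothetical and simplify away the papers' actual representation:

```python
def push_context(context, call_site, k):
    """Append a call site, truncating to the k most recent entries.
    With k bounded, the number of distinct contexts is finite, so an
    analysis keyed on contexts is guaranteed to reach a fixed point."""
    return (context + (call_site,))[-k:] if k > 0 else ()

def reached_contexts(call_trace, k):
    """Collect all distinct k-limited contexts seen along a call trace."""
    seen = set()
    ctx = ()
    for site in call_trace:
        ctx = push_context(ctx, site, k)
        seen.add(ctx)
    return seen
```

With k = 1, the alternating trace a, b, a, b collapses to just two contexts, ('a',) and ('b',); k = 0 recovers a context-insensitive analysis with the single empty context.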