Affect and believability in game characters: a review of the use of affective computing in games
Virtual agents are important in many digital environments. Designing a character that highly engages users in terms of interaction is an intricate task constrained by many requirements. One aspect that has gained more attention recently is the affective dimension of the agent. Several studies have addressed the possibility of developing an affect-aware system for a better user experience. Particularly in games, including emotional and social features in NPCs adds depth to the characters, enriches interaction possibilities, and, combined with a basic level of competence, creates a more appealing game. Design requirements for emotionally intelligent NPCs differ from those for general autonomous agents, with the main goal being a stronger player-agent relationship rather than problem solving and goal assessment. Nevertheless, deploying an affective module in NPCs adds to the complexity of the architecture and its constraints. In addition, using such composite NPCs in games seems beyond current technology, despite some brave attempts. However, a MARPO-type modular architecture would seem a useful starting point for adding emotions.
Trade-Off between Task Accuracy, Task Completion Time and Naturalness for Direct Object Manipulation in Virtual Reality
Virtual reality devices are used in several application domains, such as medicine, entertainment, marketing and training. A handheld controller is the common interaction method for direct object manipulation in virtual reality environments. Using hands would be a straightforward way to directly manipulate objects in the virtual environment if hand-tracking technology were reliable enough. In recent comparison studies, hand-based systems compared unfavorably against handheld controllers in task completion times and accuracy. In our controlled study, we compare these two interaction techniques with a new hybrid interaction technique that combines controller tracking with hand gestures for a rigid object manipulation task. The results demonstrate that the hybrid interaction technique is the most preferred because it is intuitive, easy to use, fast, and reliable, and it provides haptic feedback resembling a real-world object grab. This suggests that there is a trade-off between naturalness, task accuracy and task completion time when using these direct manipulation interaction techniques, and that participants prefer interaction techniques that provide a balance between these three factors.
Design Strategies for Adaptive Social Composition: Collaborative Sound Environments
In order to develop successful collaborative music systems, a variety of subtle interactions need to be identified and integrated. Gesture capture, motion tracking, real-time synthesis, environmental parameters and ubiquitous technologies can each be effectively used for developing innovative approaches to instrument design, sound installations, interactive music and generative systems. Current solutions tend to prioritise one or more of these approaches, refining a particular interface technology, software design or compositional approach developed for a specific composition, performer or installation environment. Within this diverse field a group of novel controllers, described as ‘Tangible Interfaces’, have been developed. These are intended for use by novices and in many cases follow a simple model of interaction, controlling synthesis parameters through simple user actions. Other approaches offer sophisticated compositional frameworks, but many of these are idiosyncratic and highly personalised. As such they are difficult to engage with and ineffective for groups of novices. The objective of this research is to develop effective design strategies for implementing collaborative sound environments using key terms and vocabulary drawn from the available literature. This is articulated by combining an empathic design process with controlled sound perception and interaction experiments. The identified design strategies have been applied to the development of a new collaborative digital instrument. A range of technical and compositional approaches was considered to define this process, which can be described as Adaptive Social Composition.
Dan Livingston
Developing a Framework for Heterotopias as Discursive Playgrounds: A Comparative Analysis of Non-Immersive and Immersive Technologies
The discursive space represents the reordering of knowledge gained through accumulation. In the digital age, multimedia has become the language of information, and the space for archival practices is provided by non-immersive technologies, resulting in the disappearance of several layers from discursive activities. Heterotopias are unique, multilayered epistemic contexts that connect other systems through the exchange of information. This paper describes a process to create a framework for Virtual Reality, Mixed Reality, and personal computer environments based on heterotopias to provide the absent layers. The study presents a virtual museum space as an informational terrain that contains a "world within worlds" and presents place production as a layer of heterotopia and the subject of discourse. Automation of individual multimedia content is provided via various sorting and grouping algorithms, and procedural content generation algorithms such as Binary Space Partitioning, Cellular Automata, Growth Algorithm, and Procedural Room Generation. Versions of the framework were comparatively evaluated through a user study involving 30 participants, considering factors such as usability, technology acceptance, and presence. The results of the study show that the framework can serve diverse contexts to construct multilayered digital habitats and is flexible enough for integration into professional and daily life practices.
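The abstract above names Cellular Automata among the procedural content generation algorithms used to lay out virtual museum spaces. As a rough illustration only, a minimal cellular-automata smoothing pass of the kind commonly used for room and cave layout generation might look like the sketch below; the fill probability, step count, and neighbour threshold are assumed illustrative values, not the paper's actual settings.

```python
import random

def cellular_automata_map(width, height, fill_prob=0.45, steps=4, seed=42):
    """Generate a cave-like boolean grid (True = wall) with the classic
    cellular-automata smoothing rule. All parameters are hypothetical."""
    rng = random.Random(seed)
    # Start from random noise: each cell is a wall with probability fill_prob.
    grid = [[rng.random() < fill_prob for _ in range(width)]
            for _ in range(height)]
    for _ in range(steps):
        nxt = [[False] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                # Count wall neighbours in the surrounding 3x3 block;
                # cells outside the map count as walls to close the border.
                walls = 0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == 0 and dx == 0:
                            continue
                        ny, mx = y + dy, x + dx
                        if 0 <= ny < height and 0 <= mx < width:
                            walls += grid[ny][mx]
                        else:
                            walls += 1
                # A cell becomes (or stays) a wall if 5+ neighbours are walls.
                nxt[y][x] = walls >= 5
        grid = nxt
    return grid

cave = cellular_automata_map(20, 10)
```

Repeated smoothing steps turn the initial noise into connected open regions with rounded walls, which is why this family of algorithms is popular for organic-looking room generation.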
Music Information Retrieval Meets Music Education
This paper addresses the use of Music Information Retrieval (MIR) techniques in music education and their integration into learning software. A general overview of systems that are either commercially available or at the research stage is presented. Furthermore, three well-known MIR methods used in music learning systems, and their state of the art, are described: music transcription, solo and accompaniment track creation, and generation of performance instructions. As a representative example of a music learning system developed within the MIR community, the Songs2See software is outlined. Finally, challenges and directions for future research are described.
An Actor-Centric Approach to Facial Animation Control by Neural Networks For Non-Player Characters in Video Games
Game developers increasingly consider the degree to which character animation emulates facial expressions found in cinema. Employing animators and actors to produce cinematic facial animation by mixing motion capture and hand-crafted animation is labor intensive and therefore expensive. Emotion corpora and neural network controllers have shown promise toward developing autonomous animation that does not rely on motion capture. Previous research and practice in the disciplines of Computer Science, Psychology, and the Performing Arts have provided frameworks on which to build a workflow toward creating an emotion AI system that can animate the facial mesh of a 3D non-player character by deploying a combination of related theories and methods. However, past investigations and their resulting production methods largely ignore the emotion generation systems that have evolved in the performing arts for more than a century. We find very little research that embraces the intellectual process of trained actors as complex collaborators from which to understand and model the training of a neural network for character animation. This investigation demonstrates a workflow design that integrates knowledge from the performing arts and the affective branches of the social and biological sciences. Our workflow begins with developing and annotating a fictional scenario with actors, proceeds to producing a video emotion corpus, to designing, training, and validating a neural network, to analyzing the emotion data annotation of the corpus and neural network, and finally to determining the resemblant behavior of its autonomous animation control of a 3D character facial mesh. The resulting workflow includes a method for developing a neural network architecture whose initial efficacy as a facial emotion expression simulator has been tested and validated as substantially resemblant to the character behavior developed by a human actor.