Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control
Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or "natural") and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in the trade-off they offer between motion naturalness and the control that can be exerted over the motion. We show mechanisms to parametrize, combine (on different body parts) and concatenate motions generated by different animation techniques. We discuss several aspects of motion naturalness and show how it can be evaluated. We conclude by showing the promise of combinations of different animation paradigms to enhance both naturalness and control.
An End-to-End Conversational Style Matching Agent
We present an end-to-end voice-based conversational agent that is able to engage in naturalistic multi-turn dialogue and align with the interlocutor's conversational style. The system uses a series of deep neural network components for speech recognition, dialogue generation, prosodic analysis and speech synthesis to generate language and prosodic expression with qualities that match those of the user. We conducted a user study (N=30) in which participants talked with the agent for 15 to 20 minutes, resulting in over 8 hours of natural interaction data. Users with high consideration conversational styles reported the agent to be more trustworthy when it matched their conversational style, whereas users with high involvement conversational styles were indifferent. Finally, we provide design guidelines for multi-turn dialogue interactions using conversational style adaptation.
Introduction: The Third International Conference on Epigenetic Robotics
This paper summarizes the paper and poster contributions to the Third International Workshop on Epigenetic Robotics. The focus of this workshop is on the cross-disciplinary interaction of developmental psychology and robotics. The general goal in this area is to create robotic models of the psychological development of various behaviors. The term "epigenetic" is used in much the same sense as the term "developmental"; while we could call our topic "developmental robotics", developmental robotics can be seen as having a broader interdisciplinary emphasis. Our focus in this workshop is on the interaction of developmental psychology and robotics, and we use the phrase "epigenetic robotics" to capture this focus.
Towards Personalities for Animated Agents With Reactive and Planning Behaviors
We describe a framework for creating animated simulations of virtual human agents. The framework allows us to capture flexible patterns of activity, reactivity to a changing environment, and certain aspects of an agent personality model. Each leads to variation in how an animated simulation will be realized. As different parts of an activity make different demands on an agent's resources and decision-making, our framework allows special-purpose reasoners and planners to be associated with only those phases of an activity where they are needed. Personality is reflected in locomotion choices, which are guided by an agent model that interacts with the other components of the framework.
Social Cognition for Human-Robot Symbiosis: Challenges and Building Blocks
The next generation of robot companions or robot working partners will need to satisfy social requirements somewhat similar to the famous laws of robotics envisaged by Isaac Asimov long ago (Asimov, 1942). The necessary technology has almost reached the required level, including sensors and actuators, but the cognitive organization is still in its infancy and is only partially supported by the current understanding of brain cognitive processes. The brain of symbiotic robots will certainly not be a "positronic" replica of the human brain: probably, the greatest part of it will be a set of interacting computational processes running in the cloud. In this article, we review the challenges that must be met in the design of a set of interacting computational processes as building blocks of a cognitive architecture that may give symbiotic capabilities to collaborative robots of the next decades: (1) an animated body schema; (2) an imitation machinery; (3) a motor intentions machinery; (4) a set of physical interaction mechanisms; and (5) a shared memory system for incremental symbiotic development. We stress that our approach is entirely non-hierarchical: the five building blocks of the shared cognitive architecture are fully bi-directionally connected. For example, imitation and intentional processes require the "services" of the animated body schema which, in turn, can run its simulations if appropriately prompted by imitation and/or intention, with or without physical interaction. Successful experiences can leave a trace in the shared memory system, and chunks of memory fragments may compete to participate in novel cooperative actions.
At the heart of the system is lifelong training and learning, but, unlike the conventional learning paradigms in neural networks, where learning is somehow passively imposed by an external agent, in symbiotic robots there is an element of free choice of what is worth learning, driven by the interaction between the robot and the human partner. The proposed set of building blocks is certainly a rough approximation of what symbiotic robots need, but we believe it is a useful starting point for building a computational framework.
Modeling flocks with perceptual agents from a dynamicist perspective
Computational simulations of flocks and crowds have typically been driven by sets of logical or syntactic rules. In recent decades, a new generation of systems has emerged from dynamicist approaches in which the agents and the environment are treated as a pair of dynamical systems coupled informationally and mechanically. Their spontaneous interactions allow them to achieve the desired behavior. The main proposition assumes that the agent does not need a full model, or to make inferences, before taking actions; rather, the information necessary for any action can be derived from the environment with simple computations and very little internal state. In this paper, we present a simulation framework in which the agents are endowed with a sensing device, an oscillator network as controller, and actuators to interact with the environment. The perception device is designed as an optic array emulating the principles of the animal retina, which assimilates stimuli resembling optic flow captured from the environment. The controller maps informational variables to action variables in a sensory-motor flow. Our approach is based on the Kuramoto model, which mathematically describes a network of coupled phase oscillators, and on evolutionary algorithms, which have proved capable of synthesizing minimal synchronization strategies based on the dynamical coupling between agents and environment. We carry out a comparative analysis with classical implementations taking several criteria into account. We conclude that, in problems of multi-agent organization, the metaphor of symbolic information processing should be replaced by that of sensory-motor coordination.
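The Kuramoto dynamics underlying the oscillator-network controller above can be sketched in a few lines. This is a generic illustration of the model, dθ_i/dt = ω_i + (K/N) Σ_j sin(θ_j − θ_i), not the paper's actual controller; all parameter values and function names here are illustrative assumptions.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(theta)
    # diff[i, j] = theta_j - theta_i (pairwise phase differences)
    diff = theta[None, :] - theta[:, None]
    coupling = (K / n) * np.sin(diff).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """Degree of synchronization r in [0, 1]; r -> 1 means phase-locked."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
n = 50
theta = rng.uniform(0.0, 2.0 * np.pi, n)   # random initial phases
omega = rng.normal(1.0, 0.1, n)            # natural frequencies
K = 2.0                                    # coupling strength (assumed value)

r0 = order_parameter(theta)
for _ in range(2000):                      # integrate for 20 time units
    theta = kuramoto_step(theta, omega, K, 0.01)
r1 = order_parameter(theta)
```

With coupling K well above the critical value for this frequency spread, the order parameter r climbs from its initially low random-phase value toward 1, i.e. the oscillators phase-lock; the evolutionary search described in the abstract would tune such coupling parameters rather than fix them by hand.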