7,015 research outputs found
Imitation Learning Applied to Embodied Conversational Agents
Embodied Conversational Agents (ECAs) are emerging as a key component in allowing humans to interact with machines. Applications are numerous, and ECAs can reduce the aversion to interacting with a machine by providing user-friendly interfaces. Yet ECAs are still unable to produce social signals appropriately during their interaction with humans, which tends to make the interaction less natural. In particular, very little attention has been paid to the use of laughter in human-avatar interactions, despite the crucial role laughter plays in human-human interaction. In this paper, methods for predicting when and how an ECA should laugh during an interaction are proposed. Different Imitation Learning (also known as Apprenticeship Learning) algorithms are used for this purpose, and a regularized classification algorithm is shown to produce good behaviour on real data.
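The abstract frames laugh prediction as regularized classification over interaction features. As a minimal sketch of that idea (not the paper's actual method), the following trains an L2-regularized logistic classifier on a toy "should the agent laugh now?" task; the feature names (partner laughing, long pause) and all data are illustrative assumptions:

```python
import math

def train_l2_logistic(X, y, lam=0.1, lr=0.5, epochs=300):
    """Batch gradient descent for L2-regularized logistic regression."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [lam * wj for wj in w]   # gradient of the L2 penalty
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi
            for j, xj in enumerate(xi):
                gw[j] += err * xj / n
            gb += err / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict_laugh(w, b, x):
    """True if the classifier says the agent should laugh in state x."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

# Toy data: feature 0 = partner currently laughing, feature 1 = long pause.
X = [[1, 0], [1, 1], [0, 0], [0, 1]]
y = [1, 1, 0, 0]
w, b = train_l2_logistic(X, y)
```

The regularization term shrinks weights on uninformative features (here, the pause feature), which is one reason the paper's regularized classifier can generalize from noisy real interaction data.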
Towards a more natural and intelligent interface with embodied conversation agent
Conversational agents, also known as chatterbots, are computer programs designed to converse like a human as far as their intelligence allows. In many ways, they are the embodiment of Turing's vision. The ability of computers to converse with human users in natural language would arguably increase their usefulness. Recent advances in Natural Language Processing (NLP) and Artificial Intelligence (AI) in general have advanced this field towards realizing the vision of a more humanoid interactive system. This paper presents and discusses the use of an embodied conversation agent (ECA) for imitation games. It also presents the technical design of our ECA and its performance. In the interactive media industry, it can also be observed that ECAs are growing in popularity.
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
In this paper, an overview of human-robot interactive communication is
presented, covering verbal as well as non-verbal aspects of human-robot
interaction. Following a historical introduction, and motivation towards fluid
human-robot communication, ten desiderata are proposed, which provide an
organizational axis both of recent as well as of future research on human-robot
communication. Then, the ten desiderata are examined in detail, culminating in
a unifying discussion and a forward-looking conclusion.
On the simulation of interactive non-verbal behaviour in virtual humans
Development of virtual humans has focused mainly on two broad areas: conversational agents and computer game characters. Computer game characters have traditionally been action-oriented, focused on the game-play, while conversational agents have focused on sensible, intelligent conversation. While virtual humans have incorporated some form of non-verbal behaviour, this has been quite limited and, more importantly, only loosely connected (if at all) to the behaviour of the real human interacting with the virtual human, due to a lack of sensor data and the absence of a system to respond to that data. The interactional aspect of non-verbal behaviour is highly important in human-human interactions, and previous research has demonstrated that people treat media (and therefore virtual humans) as real people, so interactive non-verbal behaviour is also important in the development of virtual humans. This paper presents the challenges in creating virtual humans that are non-verbally interactive and, drawing corollaries with the development history of control systems in robotics, presents some approaches to solving these challenges, specifically using behaviour-based systems. It shows how an order-of-magnitude improvement in the response time of virtual humans in conversation can be obtained, and that the development of rapidly responding non-verbal behaviours can start with just a few behaviours, with more added without difficulty later in development.
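A behaviour-based system of the kind the abstract borrows from robotics couples simple sensor-to-action rules directly, so responses need no deliberation. The sketch below illustrates the pattern with a prioritized list of hypothetical non-verbal behaviours (the behaviour names, triggers, and sensor keys are all assumptions, not from the paper):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Behaviour:
    """One sensor-to-action rule: fires when its trigger holds."""
    name: str
    trigger: Callable[[dict], bool]   # condition over current sensor readings
    action: str                       # non-verbal action to perform

def select_action(behaviours: list, sensors: dict) -> Optional[str]:
    """Return the action of the first (highest-priority) triggered behaviour."""
    for b in behaviours:              # list order encodes priority
        if b.trigger(sensors):
            return b.action
    return None

# Priority-ordered repertoire; more behaviours can be appended later
# without touching the existing ones, as the abstract suggests.
behaviours = [
    Behaviour("mirror_smile", lambda s: s.get("partner_smiling", False), "smile"),
    Behaviour("backchannel",  lambda s: s.get("pause_ms", 0) > 400,      "nod"),
    Behaviour("attend",       lambda s: s.get("partner_speaking", False), "gaze_at_partner"),
]
```

Because each behaviour reads raw sensor values directly, the selection loop runs every frame, which is what yields the rapid responses the paper reports.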
Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to understanding: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.
"The Action of the Brain". Machine Models and Adaptive Functions in Turing and Ashby
Given the personal acquaintance between Alan M. Turing and W. Ross Ashby and the partial proximity of their research fields, a comparative view of Turing's and Ashby's work on modelling "the action of the brain" (letter from Turing to Ashby, 1946) will help to shed light on the seemingly strict symbolic/embodied dichotomy: while it is clear that Turing was committed to formal, computational and Ashby to material, analogue methods of modelling, there is no straightforward mapping of these approaches onto symbol-based AI and embodiment-centred views respectively. Instead, it will be demonstrated that both approaches, starting from a formal core, were at least partly concerned with biological and embodied phenomena, albeit in revealingly distinct ways.
Conversing with machines: Affective affinities with vocal bodies
This article examines how the emergence of speech-driven interfaces for computational devices alters our affective relationships with machines, and argues that the rise of intelligent personal assistants such as Siri, Watson and Alexa calls for the question of affect to be brought to the centre of discourse around artificial intelligence (AI). It departs from the early imaginings and manifestations of human-computer conversations in the work of Turing and Weizenbaum, then introduces a Spinozan framework for theorising the transmission of affect and its ethical implications. It examines the affective economy engendered by vocal interfaces, drawing on a range of theories which focus on sound not only as an object of study, but also as a conceptual paradigm. It concludes by arguing that the machine voice constitutes a form of embodiment, and that according computers this "body" and inviting us to converse with them enhances our ability to enter into a sensuous relationship with them.
Towards facial mimicry for a virtual human
Boukricha H, Wachsmuth I. Towards facial mimicry for a virtual human. In: Reichardt D, ed. Proceedings of the 4th Workshop on Emotion and Computing - Current Research and Future Impact. 2009: 32-39.
Mimicking others' facial expressions is believed to be important in making virtual humans more natural and believable. As a result of an empirical study conducted with a virtual human, a large face repertoire of about 6000 faces was obtained, arranged in Pleasure-Arousal-Dominance (PAD) space with respect to two dominance values (dominant vs. submissive). Each face in the repertoire consists of different intensities of the virtual human's facial muscle actions, called Action Units (AUs), modelled following the Facial Action Coding System (FACS). An approach towards realizing facial mimicry for a virtual human using this face repertoire is the topic of this paper. A preliminary evaluation of this first approach is carried out with the basic emotions Happy and Angry.
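The repertoire described above maps points in PAD space to sets of AU intensities, so producing an expression amounts to a nearest-neighbour lookup. As a minimal sketch of that data structure (the three-entry repertoire and its AU values are invented for illustration; the real one holds about 6000 faces):

```python
import math

# Hypothetical miniature repertoire: PAD coordinates -> AU intensities (0..1).
# AU6/AU12 (cheek raiser / lip corner puller) and AU4/AU5/AU7 (brow lowerer /
# upper lid raiser / lid tightener) follow FACS conventions.
face_repertoire = {
    ( 0.8,  0.5,  0.3): {"AU6": 0.9, "AU12": 1.0},             # happy
    (-0.6,  0.7,  0.4): {"AU4": 1.0, "AU5": 0.8, "AU7": 0.6},  # angry
    ( 0.0,  0.0,  0.0): {},                                    # neutral
}

def nearest_face(pad, repertoire=face_repertoire):
    """Return the AU activations of the repertoire face closest to a PAD point."""
    return repertoire[min(repertoire, key=lambda p: math.dist(p, pad))]
```

Facial mimicry then reduces to estimating the interlocutor's PAD state from their expression and looking up the closest stored face, which is why arranging the repertoire in PAD space is central to the approach.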
- …