Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.
The Backside of Habit: Notes on Embodied Agency and the Functional Opacity of the Medium
In this chapter, what I call the "backside" of habit is explored. I am interested in the philosophical implications of the physical and physiological processes that mediate, and allow for, what comes to appear as almost magic: the various sensorimotor associations and integrations that allow us to replay our past experiences, to perceive, in a certain sense, potential futures, and to act and bring about anticipated outcomes without quite knowing how. The term "backside" thus refers both to the actual mediation and to the epistemic opacity of these backstage intermediaries that make the front-stage magic possible. The question is whether the epistemic complexities around sensorimotor mediation give us valuable insights into the nature of human agency and, further, whether they might begin to show us new ways to think of the mind as truly embodied yet not reducible to any finite body-as-object.
Language as a disruptive technology: Abstract concepts, embodiment and the flexible mind
A growing body of evidence suggests that cognition is embodied and grounded. Abstract concepts, though, remain a significant theoretical challenge. A number of researchers have proposed that language makes an important contribution to our capacity to acquire and employ concepts, particularly abstract ones. In this essay, I critically examine this suggestion and ultimately defend a version of it. I argue that a successful account of how language augments cognition should emphasize its symbolic properties and incorporate a view of embodiment that recognizes the flexible, multimodal and task-related nature of action, emotion and perception systems. On this view, language is an ontogenetically disruptive cognitive technology that expands our conceptual reach.
Presenting in Virtual Worlds: An Architecture for a 3D Anthropomorphic Presenter
Multiparty-interaction technology is changing entertainment, education, and training. Deployed examples of such technology include embodied agents and robots that act as a museum guide, a news presenter, a teacher, a receptionist, or someone trying to sell you insurance, homes, or tickets. In all these cases, the embodied agent needs to explain and describe. This article describes the design of a 3D virtual presenter that uses different output channels (including speech and animation of posture, pointing, and involuntary movements) to present and explain. The behavior is scripted and synchronized with a 2D display containing associated text and regions (slides, drawings, and paintings) at which the presenter can point. This article is part of a special issue on interactive entertainment.
Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter explaining 2D-Presented Information
Entertainment, education and training are changing because of multi-party interaction technology. In the past we have seen the introduction of embodied agents and robots that take the role of a museum guide, a news presenter, a teacher, a receptionist, or someone who is trying to sell you insurance, houses or tickets. In all these cases the embodied agent needs to explain and describe. In this paper we present the design of a 3D virtual presenter that uses different output channels to present and explain. Speech and animation (posture, pointing and involuntary movements) are among these channels. The behavior is scripted and synchronized with the display of a 2D presentation containing associated text and regions that can be pointed at (sheets, drawings, and paintings). In this paper the emphasis is on the interaction between the 3D presenter and the 2D presentation.
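The scripted, multi-channel behavior described in this abstract can be sketched as a timed script whose entries drive the speech, animation, and 2D-display channels together. The field names and the playback loop below are illustrative assumptions, not the paper's actual script format:

```python
# Hypothetical sketch of a multi-channel presenter script: each entry pairs an
# utterance with a gesture and an optional target region on the 2D display,
# all on a shared timeline.  A real system would drive TTS, the animation
# engine, and the display highlighter in parallel.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScriptEntry:
    start: float                     # seconds from presentation start
    speech: str                      # text to be spoken
    gesture: str = "rest"            # posture/pointing animation channel
    point_at: Optional[str] = None   # id of a region on the 2D display

script = [
    ScriptEntry(0.0, "Welcome to the gallery.", gesture="greet"),
    ScriptEntry(3.5, "This painting dates from 1642.",
                gesture="point", point_at="painting-1"),
    ScriptEntry(8.0, "Note the use of light here.",
                gesture="point", point_at="painting-1-detail"),
]

def render(entries):
    """Flatten the script into per-channel events, ordered by start time."""
    events = []
    for e in sorted(entries, key=lambda e: e.start):
        events.append((e.start, "speech", e.speech))
        events.append((e.start, "animation", e.gesture))
        if e.point_at:
            events.append((e.start, "display", f"highlight {e.point_at}"))
    return events

for t, channel, payload in render(script):
    print(f"{t:5.1f}s  {channel:9s} {payload}")
```

Keeping the channels in one entry, rather than in separate tracks, is what makes synchronization between pointing gestures and display regions straightforward.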
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans can form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions, and developing a robot that can communicate smoothly with human users over the long term, requires an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensorimotor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.
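The core idea behind unsupervised multimodal categorization can be illustrated with a toy sketch: features from several modalities are concatenated per observation and clustered without labels, so the clusters play the role of emergent object categories. The tiny k-means below is an illustrative stand-in, not the methods surveyed in the paper (which use probabilistic models such as multimodal latent Dirichlet allocation):

```python
# Toy sketch of unsupervised multimodal categorization: z-normalise each
# modality, concatenate, and cluster.  All data and parameters are invented
# for illustration.
import numpy as np

def multimodal_features(visual, haptic, auditory):
    """Concatenate per-modality features after z-normalising each modality."""
    parts = []
    for m in (visual, haptic, auditory):
        m = np.asarray(m, dtype=float)
        parts.append((m - m.mean(0)) / (m.std(0) + 1e-9))
    return np.hstack(parts)

def kmeans(X, k, iters=50):
    """Minimal Lloyd's k-means; returns a cluster index per observation."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Two toy "objects", each observed 20 times through three noisy modalities.
rng = np.random.default_rng(1)
vis = np.vstack([rng.normal(0, .1, (20, 4)), rng.normal(1, .1, (20, 4))])
hap = np.vstack([rng.normal(0, .1, (20, 3)), rng.normal(1, .1, (20, 3))])
aud = np.vstack([rng.normal(0, .1, (20, 2)), rng.normal(1, .1, (20, 2))])

labels = kmeans(multimodal_features(vis, hap, aud), k=2)
print(labels)
```

Because no labels are used anywhere, the two recovered clusters are categories that "emerge" from the statistics of the multimodal observations alone, which is the sense of "totally unsupervised" in the abstract.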
Interacting Unities: An Agent-Based System
Recently architects have been inspired by Thompson's Cartesian deformations and Waddington's flexible topological surface to work within a dynamic field characterized by forces. In this more active space of interactions, movement is the medium through which form evolves. This paper explores the interaction between pedestrians and their environment by regarding it as a process occurring between the two. It is hypothesized that the recurrent interaction between pedestrians and environment can lead to a structural coupling between those elements: every time a change occurs in one of them, as an expression of its own structural dynamics, it triggers changes in the other. An agent-based system has been developed in order to explore that interaction, in which the two interacting elements, agents (pedestrians) and environment, are autonomous units with a set of internal rules. The result is a landscape where each agent locally modifies its environment, which in turn affects its movement, while the other agents respond to the new environment at a later time, indicating that stigmergy can occur in interactions among human-like agents. It is found that the environment's internal rules determine the nature and extent of change.
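The stigmergic feedback loop described in this abstract can be sketched as a minimal agent-based simulation (not the authors' implementation): each agent deposits a trace on the grid cell it visits, every agent's next move prefers neighbouring cells with stronger traces, and the environment applies its own rule by decaying traces over time. Grid size, deposit amount, and decay rate are assumed parameters:

```python
# Minimal stigmergy sketch: agents modify the environment, and the modified
# environment redirects later agents.  All parameters are illustrative.
import random

SIZE, DEPOSIT, DECAY, STEPS = 20, 1.0, 0.95, 200
random.seed(0)

trace = [[0.0] * SIZE for _ in range(SIZE)]          # environment state
agents = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(10)]

def neighbours(x, y):
    """The eight surrounding cells, on a toroidal (wrapped) grid."""
    return [((x + dx) % SIZE, (y + dy) % SIZE)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

for _ in range(STEPS):
    moved = []
    for x, y in agents:
        # Agent's internal rule: move towards the strongest local trace,
        # breaking ties at random.
        best = max(trace[nx][ny] for nx, ny in neighbours(x, y))
        options = [(nx, ny) for nx, ny in neighbours(x, y)
                   if trace[nx][ny] == best]
        moved.append(random.choice(options))
    agents = moved
    # Environment's internal rules: accumulate deposits, then decay.
    for x, y in agents:
        trace[x][y] += DEPOSIT
    trace = [[c * DECAY for c in row] for row in trace]

visited = sum(1 for row in trace for c in row if c > 0)
print(f"cells carrying a trace after {STEPS} steps: {visited}")
```

As in the paper's hypothesis, it is the environment's rules (here `DEPOSIT` and `DECAY`) that govern how strongly past movement shapes future movement: a high decay rate erases trails quickly, while a low one lets them dominate.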
Simulating emotional reactions in medical dramas
Presenting information on emotionally charged topics is a delicate task: if bare facts alone are conveyed, there is a risk of boring the audience, or coming across as cold and unfeeling; on the other hand, emotional presentation can be appropriate when carefully handled, but when overdone or mishandled risks being perceived as patronising or in poor taste. When Natural Language Generation (NLG) systems present emotionally charged information linguistically, by generating scripts for embodied agents, emotional/affective aspects cannot be ignored. It is important to ensure that viewers consider the presentation appropriate and sympathetic.
We are investigating the role of affect in communicating medical information in the context of an NLG system that generates short medical dramas enacted by embodied agents. The dramas have both an informational and an educational purpose in that they help patients review their medical histories whilst receiving explanations of less familiar medical terms and demonstrations of their usage. The dramas are also personalised since they are generated from the patients' own medical records. We view generation of natural/appropriate emotional language as a way to engage and maintain the viewers' attention. For our medical setting, we hypothesize that viewers will consider dialogues more natural when they have an enthusiastic and sympathetic emotional tone. Our second hypothesis proposes that such dialogues are also better for engaging the viewers' attention.
As well as describing our NLG system for generating natural emotional language in medical dialogue, we present a pilot study with which we investigate our two hypotheses. Our results were not quite as unequivocal as we had hoped. Our participants did notice whether a character sympathised with the patient and was enthusiastic; this did not, however, lead them to judge such a character as behaving more naturally or the dialogue as being more engaging. However, when pooling data from our two conditions (dialogues with versus dialogues without emotionally appropriate language use), we discovered, somewhat surprisingly, that participants did consider a dialogue more engaging if they believed that the characters showed sympathy towards the patient, were not cold and unfeeling, and were natural (the last holding for the female agent only).
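The contrast between the two experimental conditions can be illustrated with a minimal, hypothetical sketch (not the authors' system): the same medical fact is realised either neutrally or with the enthusiastic, sympathetic tone that the first hypothesis favours. The templates and the `tone` switch are assumptions for the sketch:

```python
# Toy template-based realiser contrasting a neutral surface form with an
# emotionally appropriate one for the same underlying medical fact.
NEUTRAL = "Your records show {event} in {year}."
EMPATHETIC = ("I can see that {event} in {year} -- "
              "that must have been difficult, but you came through it well.")

def realise(event, year, tone="empathetic"):
    """Pick a surface template according to the requested emotional tone."""
    template = EMPATHETIC if tone == "empathetic" else NEUTRAL
    return template.format(event=event, year=year)

print(realise("a hip operation", 2019, tone="neutral"))
print(realise("a hip operation", 2019))
```

In a full NLG pipeline this tone choice would sit at the surface-realisation stage, after content selection from the patient's record has fixed `event` and `year`.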