Auditory perception modulated by word reading
Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented), and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance, sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension in the auditory domain.
Generation of multi-modal dialogue for a net environment
In this paper, an architecture and a special-purpose markup language for simulated affective face-to-face communication are presented. In systems based on this architecture, users will be able to watch embodied conversational agents interact with each other in virtual locations on the internet. The markup language, or Rich Representation Language (RRL), has been designed to provide an integrated representation of speech, gesture, posture and facial animation.
Keeping it Real: Encountering Mixed Reality in igloo’s SwanQuake: House
This paper employs the writings of early twentieth-century phenomenologists to examine physical/virtual dualism a century later. It considers the nature of embodied experience in mixed reality environments through an analysis of the author’s encounter with an art installation. The paper reflects on post-Cartesian approaches to the body and new media, noting the resistance of the language of philosophy to the articulation of mixed reality as a concept. If the language of the field constructs dualism, and the cyborgian unitization of human/technology invokes responses of horror or pity, are we prepared, socially or culturally, to inhabit mixed reality environments as embodied beings?
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions and developing a robot that can
communicate smoothly with human users over the long term requires an
understanding of the dynamics of symbol systems and is therefore crucial. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensorimotor information, including visual, haptic,
and auditory information and acoustic speech signals, in a fully unsupervised
manner. Finally, we suggest future directions of research in SER.
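The multimodal categorization the abstract mentions can be illustrated with a toy sketch: concatenate feature vectors from different modalities (here, made-up "visual" and "haptic" features) and cluster them without labels, so that object categories emerge from the data alone. This is only a minimal illustration of the general idea, not the methods surveyed in the paper; the feature values, cluster count, and k-means variant are all invented for the example.

```python
import numpy as np

def kmeans2(X, iters=20):
    """Minimal 2-means clustering with farthest-point initialisation."""
    c0 = X[0]
    c1 = X[np.argmax(np.linalg.norm(X - c0, axis=1))]  # farthest point from c0
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(iters):
        # Assign each observation to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each center as the mean of its cluster.
        for j in range(2):
            centers[j] = X[labels == j].mean(axis=0)
    return labels

# Toy "multimodal" observations: each row concatenates a 2-d visual and a
# 2-d haptic feature vector for one object (values are made up).
rng = np.random.default_rng(1)
soft_objects = rng.normal([0.2, 0.1, 0.9, 0.8], 0.05, size=(10, 4))
hard_objects = rng.normal([0.8, 0.9, 0.1, 0.2], 0.05, size=(10, 4))
X = np.vstack([soft_objects, hard_objects])

labels = kmeans2(X)
print(labels)  # soft objects share one label, hard objects the other
```

With no category labels provided, the two object kinds separate purely from the joint statistics of their modalities, which is the intuition behind unsupervised multimodal categorization.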
Towards a Theory Grounded Theory of Language
In this paper, we build upon the idea of theory grounding and propose one specific form of theory grounding, a theory of language. Theory grounding is the idea that we can imbue our embodied artificially intelligent systems with theories by modeling the way humans, and specifically young children, develop skills with theories. Modeling theory development promises to increase the conceptual and behavioral flexibility of these systems. An example of theory development in children is the social understanding referred to as theory of mind. Language is a natural task for theory grounding because it is central to symbolic skills and apparently necessary for developing theories. Word learning, and specifically developing a concept of words, is proposed as the first step in a theory grounded theory of language.
Exploiting Deep Semantics and Compositionality of Natural Language for Human-Robot-Interaction
We develop a natural language interface for human robot interaction that
implements reasoning about deep semantics in natural language. To realize the
required deep analysis, we employ methods from cognitive linguistics, namely
the modular and compositional framework of Embodied Construction Grammar (ECG)
[Feldman, 2009]. Using ECG, robots are able to solve fine-grained reference
resolution problems and other issues related to deep semantics and
compositionality of natural language. This also includes verbal interaction
with humans to clarify commands and queries that are too ambiguous to be
executed safely. We implement our NLU framework as a ROS package and present
proof-of-concept scenarios with different robots, as well as a survey of the
state of the art.
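The fine-grained reference resolution mentioned in the abstract can be sketched in miniature: resolve a definite description to the most recently mentioned entity that satisfies its type constraint, and treat failure as a trigger for a clarification query. This toy is not the ECG-based system from the paper; the discourse entities, the `resolve` helper, and its matching rule are invented purely for illustration.

```python
# Hypothetical discourse history: entities in order of mention.
discourse = [
    {"name": "red_cup",  "type": "cup"},
    {"name": "table",    "type": "furniture"},
    {"name": "blue_cup", "type": "cup"},
]

def resolve(description, history):
    """Return the most recently mentioned entity matching the description.

    A type of None stands in for a bare pronoun ("it"), which matches
    anything; no match would trigger a clarification question instead of
    executing the command.
    """
    for entity in reversed(history):
        if description["type"] in (None, entity["type"]):
            return entity["name"]
    return None

print(resolve({"type": "cup"}, discourse))        # the most recent cup
print(resolve({"type": "furniture"}, discourse))  # the only furniture item
```

A real system would combine many more constraints (spatial relations, dialogue state, perception), but the recency-plus-constraints pattern is the core of the toy.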
Politeness and Alignment in Dialogues with a Virtual Guide
Language alignment is something that happens automatically in dialogues between human speakers. The ability to align is expected to increase the believability of virtual dialogue agents. In this paper we extend the notion of alignment to affective language use, describing a model for dynamically adapting the linguistic style of a virtual agent to the level of politeness and formality detected in the user’s utterances. The model has been implemented in the Virtual Guide, an embodied conversational agent giving directions in a virtual environment. Evaluation shows that our formality model needs improvement, but that the politeness tactics used by the Guide are mostly interpreted as intended, and that the alignment to the user’s language is noticeable.
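The adaptation described above, detecting the user's politeness level and phrasing the agent's reply to match, can be sketched with a rule-based toy. The marker list, scoring rule, and reply templates below are invented for illustration; the Virtual Guide's actual politeness and formality model is considerably more elaborate.

```python
# Hypothetical politeness markers; a real model would use a richer
# classifier over lexical, syntactic, and affective features.
POLITE_MARKERS = {"please", "could", "would", "thank", "thanks", "excuse"}

def politeness_score(utterance):
    """Count politeness markers in the user's utterance."""
    words = utterance.lower().replace("?", "").split()
    return sum(w in POLITE_MARKERS for w in words)

def give_directions(utterance, target="the lecture hall"):
    """Align the reply's style with the detected politeness level."""
    if politeness_score(utterance) >= 1:
        return f"Certainly! You could take the stairs to reach {target}."
    return f"Take the stairs to {target}."

print(give_directions("Could you please tell me the way?"))  # polite reply
print(give_directions("where is the lecture hall"))          # terse reply
```

The point of the sketch is the alignment loop itself: measure a stylistic dimension of the input, then select the output template along the same dimension.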