A cognitive architecture for inner speech
A cognitive architecture for inner speech is presented. It is based on the Standard Model of Mind, integrated with modules for self-talking. Briefly, the working memory of the proposed architecture includes the phonological loop as a component that manages the exchange of information between the phonological store and the articulatory control system. Inner dialogue is modeled as a loop in which the phonological store hears the inner voice produced by the hidden articulatory process. A central executive module drives the whole system and contributes to the generation of conscious thoughts by retrieving information from long-term memory. The surface form of thoughts thus emerges through the phonological loop. Once a conscious thought is elicited by inner speech, the perception of a new context takes place, and the cognitive loop repeats. A preliminary formalization of some of the described processes in the event calculus, and early results of their implementation on the humanoid robot Pepper by SoftBank Robotics, are discussed
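The abstract's cognitive loop (articulate, hear, retrieve from long-term memory, repeat) can be sketched as a toy simulation. All names here (`articulate`, `hear`, `central_executive`, the memory contents) are illustrative assumptions, not the architecture's actual implementation or the paper's event-calculus formalization.

```python
# Minimal sketch of the inner-speech loop described in the abstract:
# a hidden articulatory process produces the inner voice, the
# phonological store "hears" it, and a central executive retrieves an
# association from long-term memory to feed the next cycle.
# Purely illustrative; not the paper's implementation.

LONG_TERM_MEMORY = {
    "ball": "round object",
    "round object": "can roll",
    "can roll": "ball",
}

def articulate(thought: str) -> str:
    """Hidden articulatory process: render a thought as inner voice."""
    return thought

def hear(inner_voice: str) -> str:
    """Phonological store: receive the covert utterance."""
    return inner_voice

def central_executive(percept: str, steps: int = 3) -> list[str]:
    """Drive the loop: each heard thought cues a retrieval from
    long-term memory, which is articulated again on the next cycle."""
    thoughts = []
    current = percept
    for _ in range(steps):
        heard = hear(articulate(current))
        thoughts.append(heard)  # each cycle surfaces one conscious thought
        current = LONG_TERM_MEMORY.get(heard, heard)
    return thoughts

print(central_executive("ball"))
```

The point of the sketch is the closed loop: the store hears only what the articulator produces, so "perception of new context" arrives via the memory retrieval between cycles.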
Blurring Two Conceptions of Subjective Experience: Folk versus Philosophical Phenomenality
Philosophers and psychologists have experimentally explored various aspects of people's understandings of subjective experience based on their responses to questions about whether robots "see red" or "feel frustrated," but the intelligibility of such questions may well presuppose that people understand robots as experiencers in the first place. Departing from the standard approach, I develop an experimental framework that distinguishes between "phenomenal consciousness" as it is applied to a subject (an experiencer) and to an (experiential) mental state, and experimentally test folk understandings of both subjective experience and experiencers. My findings (1) reveal limitations in experimental approaches using "artificial experiencers" like robots, (2) indicate that the standard philosophical conception of subjective experience in terms of qualia is distinct from that of the folk, and (3) show that folk intuitions do support a conception of qualia that departs from the philosophical conception in that it is physical rather than metaphysical. These findings have implications for the "hard problem" of consciousness
Speech Development by Imitation
The Double Cone Model (DCM) is a model of how the brain transforms sensory input into motor commands through successive stages of data compression and expansion. We have tested a subset of the DCM on speech recognition, production, and imitation. The experiments show that the DCM is a good candidate for an artificial speech processing system that can develop autonomously. We show that the DCM can learn a repertoire of speech sounds by listening to speech input. It is also able to link the individual elements of speech into sequences that can be recognized or reproduced, thus allowing the system to imitate spoken language
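The compression-then-expansion pipeline the abstract describes can be illustrated with a toy numeric sketch. The averaging and duplication stages below are assumptions made for illustration; the actual DCM is a neural model, not this scheme.

```python
# Toy sketch of successive compression/expansion stages: sensory input
# is compressed step by step toward an abstract code, then expanded
# back out toward a motor command. Illustrative only; not the DCM's
# actual mechanism.

def compress(signal: list[float]) -> list[float]:
    """One compression stage: average adjacent samples (halves length)."""
    return [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]

def expand(code: list[float]) -> list[float]:
    """One expansion stage: duplicate each sample (doubles length)."""
    return [x for v in code for x in (v, v)]

def imitate(sensory: list[float], stages: int = 2) -> list[float]:
    """Map sensory input to a motor command of the same length by
    compressing to an abstract code and expanding it back out."""
    code = sensory
    for _ in range(stages):
        code = compress(code)
    motor = code
    for _ in range(stages):
        motor = expand(motor)
    return motor

print(imitate([1.0, 1.0, 3.0, 3.0]))
```

The sketch keeps the one property the abstract emphasizes: the same intermediate code sits between the sensory and motor sides, so recognition and reproduction share a representation.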
Developing Self-Awareness in Robots via Inner Speech
The experience of inner speech is a common one. Such a dialogue accompanies the introspection of mental life and fulfills essential roles in human behavior, such as self-restructuring, self-regulation, and the refocusing of attentional resources. Although the underpinnings of inner speech are mostly investigated in psychology and philosophy, research in robotics generally does not address this form of self-aware behavior. Existing models of inner speech inspire computational tools for providing a robot with this form of self-awareness. Here, the widespread psychological models of inner speech are reviewed, and a cognitive architecture for a robot implementing such a capability is outlined in a simplified setup
A calculus for robot inner speech and self-awareness
Inner speech is the common mental experience humans have when they dialogue with themselves. It is widely acknowledged that inner speech is related to awareness and self-awareness: it reproduces and expands in the mind the social and physical sources of awareness. In this preliminary work, a calculus based on a first-order modal logic for automating inner speech is presented. It attempts to make existing inner speech theories suitable for robots. By making a robot able to talk to itself, it becomes possible to analyze the role of inner speech in robot awareness and self-awareness, opening new interesting research scenarios not yet investigated
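To give a feel for what a modal-logic treatment of inner speech might look like, here is a hedged sketch. The operators and axioms below are invented for illustration and are not the paper's actual calculus: read $B_r\varphi$ as "the robot believes $\varphi$" and $S_r\varphi$ as "the robot says $\varphi$ to itself".

```latex
% Illustrative axioms only; not the calculus from the paper.
% Covertly articulating a proposition yields a belief about the utterance:
S_r\,\varphi \rightarrow B_r\, S_r\,\varphi
% and hearing one's own inner voice licenses introspective belief:
B_r\, S_r\,\varphi \rightarrow B_r\, B_r\,\varphi
```

On a sketch like this, self-awareness would show up as the derivability of second-order beliefs ($B_r B_r \varphi$) from acts of self-talk, which is the kind of link between inner speech and self-awareness the abstract gestures at.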
Brain-inspired conscious computing architecture
What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon the physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious are given in the form of a specific architecture, named the articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions, and in principle the structure of their experiences may be arbitrarily close to human
The Role of Valence in Intentionality
Functional intentionality is the dominant theory about how mental states come to have the content that they do. Phenomenal intentionality is an increasingly popular alternative to that orthodoxy, claiming that intentionality cannot be functionalized and that nothing is a mental state with intentional content unless it is phenomenally conscious. There is a consensus among defenders of phenomenal intentionality that the kind of phenomenology that is both necessary and sufficient for having a belief that "there is a tree in the quad" is that the agent be consciously aware of the meaning of "tree" and "quad". On this theory, experiences with a valence (experiences like happiness and sadness, satisfaction and frustration) are irrelevant to intentionality. This paper challenges that assumption and considers several versions of "valent phenomenal intentionality" according to which a capacity for valent conscious experiences is either a necessary or a sufficient condition for intentionality (or both)
- …