Seven properties of self-organization in the human brain
The principle of self-organization has acquired fundamental significance in the newly emerging field of computational philosophy. Self-organizing systems have been described in various domains of science and philosophy, including physics, neuroscience, biology and medicine, ecology, and sociology. While system architectures and their general purposes may depend on domain-specific concepts and definitions, there are (at least) seven key properties of self-organization clearly identified in brain systems: 1) modular connectivity, 2) unsupervised learning, 3) adaptive ability, 4) functional resiliency, 5) functional plasticity, 6) from-local-to-global functional organization, and 7) dynamic system growth. These are defined here in light of insights from neurobiology, cognitive neuroscience, Adaptive Resonance Theory (ART), and physics to show that self-organization achieves stability and functional plasticity while minimizing structural system complexity. A specific example informed by empirical research is discussed to illustrate how modularity, adaptive learning, and dynamic network growth enable stable yet plastic somatosensory representation for human grip force control. Implications for the design of “strong” artificial intelligence in robotics are brought forward.
Embodied Robot Models for Interdisciplinary Emotion Research
Due to their complex nature, emotions cannot be properly understood from the perspective of a single discipline. In this paper, I discuss how the use of robots as models is beneficial for interdisciplinary emotion research. Addressing this issue through the lens of my own research, I focus on a critical analysis of embodied robot models of different aspects of emotion, relate them to theories in psychology and neuroscience, and provide representative examples. I discuss concrete ways in which embodied robot models can be used to carry out interdisciplinary emotion research, assessing their contributions: as hypothetical models, and as operational models of specific emotional phenomena, of general emotion principles, and of specific emotion “dimensions”. I conclude by discussing the advantages of using embodied robot models over other models.
From Imprinting to Adaptation: Building a History of Affective Interaction
We present a Perception-Action architecture and experiments to simulate imprinting—the establishment of strong attachment links with a “caregiver”—in a robot. Following recent theories, we do not consider imprinting as rigidly timed and irreversible, but as a more flexible phenomenon that allows for further adaptation as a result of reward-based learning through experience. Our architecture reconciles these two types of perceptual learning, traditionally considered as different and even incompatible. After the initial imprinting, adaptation is achieved in the context of a history of “affective” interactions between the robot and a human, driven by “distress” and “comfort” responses in the robot.
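The two learning regimes described above—fast, one-shot imprinting followed by slower reward-driven adaptation—can be sketched in a few lines. This is a toy illustration under stated assumptions, not the paper's Perception-Action architecture; the class name, learning rate, and scalar "attachment" variable are all hypothetical.

```python
class ImprintingAgent:
    """Toy model: one-shot imprinting followed by slower, reward-driven
    adaptation of the attachment link (all names and values illustrative)."""

    def __init__(self, learning_rate=0.1):
        self.caregiver = None    # identity fixed by the initial imprinting
        self.attachment = 0.0    # strength of the attachment link in [0, 1]
        self.lr = learning_rate

    def perceive(self, stimulus, reward=0.0):
        if self.caregiver is None:
            # imprinting: fast attachment to the first salient stimulus
            self.caregiver = stimulus
            self.attachment = 1.0
        elif stimulus == self.caregiver:
            # adaptation: "comfort" (reward > 0) strengthens the link,
            # "distress" (reward < 0) weakens it, clipped to [0, 1]
            self.attachment = min(1.0, max(0.0, self.attachment + self.lr * reward))
        return self.attachment
```

The point of the sketch is the asymmetry: the first exposure sets the link in one step, while later "affective" interactions only nudge it, so the initial imprinting remains revisable rather than irreversible.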
Open-Ended Evolutionary Robotics: an Information Theoretic Approach
This paper is concerned with designing self-driven fitness functions for Embedded Evolutionary Robotics. The proposed approach considers the entropy of the sensori-motor stream generated by the robot controller. This entropy is computed using unsupervised learning; its maximization, achieved by an on-board evolutionary algorithm, implements a "curiosity instinct", favouring controllers visiting many diverse sensori-motor states (sms). Further, the set of sms discovered by an individual can be transmitted to its offspring, making a cultural evolution mode possible. Cumulative entropy (computed from ancestors' and the current individual's visits to the sms) defines another self-driven fitness; its optimization implements a "discovery instinct", as it favours controllers visiting new or rare sensori-motor states. Empirical results on the benchmark problems proposed by Lehman and Stanley (2008) comparatively demonstrate the merits of the approach.
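The two fitness functions described in this abstract can be sketched concretely. In the paper the sensori-motor states are obtained by unsupervised learning over the raw stream; the sketch below assumes that discretization has already happened and the controller's trajectory is a list of hashable state labels. The function names and the dictionary-of-counts interface are illustrative, not the authors' implementation.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a visit-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def curiosity_fitness(visited_states):
    """'Curiosity instinct': entropy of the states this controller visited.
    Maximal when visits are spread evenly over many distinct states."""
    return entropy(Counter(visited_states))

def discovery_fitness(visited_states, ancestor_counts):
    """'Discovery instinct': entropy over the ancestors' visit counts combined
    with the current individual's, rewarding new or rarely visited states."""
    combined = Counter(ancestor_counts)
    combined.update(visited_states)
    return entropy(combined)
```

A controller cycling evenly through four states scores 2 bits of curiosity fitness, while one stuck in a single state scores 0; under the cumulative variant, revisiting states the lineage already knows well adds little, which is what drives the "discovery" pressure.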
What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy
The paper provides a summary of our recent research on preverbal infants (using violation-of-expectation and observational learning paradigms) demonstrating that one-year-olds interpret and draw systematic inferences about others' goal-directed actions, and can rely on such inferences when imitating others' actions or emulating their goals. To account for these findings, it is proposed that one-year-olds apply a non-mentalistic action interpretational system, the ‘teleological stance’, which represents actions by relating relevant aspects of reality (action, goal-state, and situational constraints) through the principle of rational action, which assumes that actions function to realize goal-states by the most efficient means available in the actor's situation. The relevance of these research findings and of the proposed theoretical model to realizing the goal of epigenetic robotics of building a ‘socially relevant’ humanoid robot is discussed.
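The principle of rational action invoked above has a natural computational reading: the observer expects the agent to pick the most efficient goal-realizing action available under the situational constraints. A minimal sketch, assuming actions, a goal, and two illustrative callables (`achieves` encoding the constraints, `cost` encoding efficiency) that are not from the paper:

```python
def expected_action(actions, goal, achieves, cost):
    """Sketch of the 'principle of rational action' under the teleological
    stance: among the actions that realize the goal-state given the
    situational constraints, the observer expects the most efficient one.
    `achieves` and `cost` are hypothetical callables, not the authors' model."""
    feasible = [a for a in actions if achieves(a, goal)]
    return min(feasible, key=cost) if feasible else None
```

On this reading, a violation-of-expectation result corresponds to the observed action differing from `expected_action`: the infant looks longer when the agent takes a costly detour although a cheaper feasible action existed.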
Do Balance Demands Induce Shifts in Visual Proprioception in Crawling Infants?
The onset of hands-and-knees crawling during the latter half of the first year of life heralds pervasive changes in a range of psychological functions. Chief among these changes is a clear shift in visual proprioception, evident in the way infants use patterns of optic flow in the peripheral field of view to regulate their postural sway. This shift is thought to result from consistent exposure in the newly crawling infant to different patterns of optic flow in the central field of view and the periphery and the need to concurrently process information about self-movement, particularly postural sway, and the environmental layout during crawling. Researchers have hypothesized that the demands on the infant's visual system to concurrently process information about self-movement and the environment press the infant to differentiate and functionalize peripheral optic flow for the control of balance during locomotion so that the central field of view is freed to engage in steering and monitoring the surface and potentially other tasks. In the current experiment, we tested whether belly crawling, a mode of locomotion that places negligible demands on the control of balance, leads to the same changes in the functional utilization of peripheral optic flow for the control of postural sway as hands-and-knees crawling. We hypothesized that hands-and-knees crawlers (n = 15) would show significantly higher postural responsiveness to movements of the side walls and ceiling of a moving room than same-aged pre-crawlers (n = 19) and belly crawlers (n = 15) with an equivalent amount of crawling experience. Planned comparisons confirmed the hypothesis. Visual-postural coupling in the hands-and-knees crawlers was significantly higher than in the belly crawlers and pre-crawlers. 
These findings suggest that the balance demands associated with hands-and-knees crawling may be an important contributor to the changes in visual proprioception that have been demonstrated in several experiments to follow hands-and-knees crawling experience. However, we also consider that belly crawling may have less potent effects on visual proprioception because it is an effortful and attention-demanding mode of locomotion, thus leaving less attentional capacity available to notice changing relations between the self and the environment.
Lifeworld Analysis
We argue that the analysis of agent/environment interactions should be extended to include the conventions and invariants maintained by agents throughout their activity. We refer to this thicker notion of environment as a lifeworld and present a partial set of formal tools for describing structures of lifeworlds and the ways in which they computationally simplify activity. As one specific example, we apply the tools to the analysis of the Toast system and show how versions of the system with very different control structures in fact implement a common control structure together with different conventions for encoding task state in the positions or states of objects in the environment.