
    Kick control: using the attracting states arising within the sensorimotor loop of self-organized robots as motor primitives

    Self-organized robots may develop attracting states within the sensorimotor loop, that is, within the phase space of neural activity, body, and environmental variables. In this setting, fixpoints, limit cycles, and chaotic attractors correspond to a non-moving robot, to directed locomotion, and to irregular locomotion, respectively. Short higher-order control commands may hence be used to kick the system robustly from one self-organized attractor into the basin of attraction of a different attractor, a concept termed here kick control. The individual sensorimotor states serve in this context as highly compliant motor primitives. We study different implementations of kick control for simulated and real-world wheeled robots, for which the dynamics of the distinct wheels are generated independently by local feedback loops. The feedback loops are mediated by rate-encoding neurons that receive exclusively proprioceptive inputs, in the form of projections of the actual rotational angle of the wheel. Changes in neural activity are then converted into rotational motion by a simulated transmission rod akin to those used for steam locomotives. We find that the self-organized attractor landscape may be morphed both by higher-level control signals, in the spirit of kick control, and by interaction with the environment. Bumping against a wall destroys the limit cycle corresponding to forward motion, with the consequence that the dynamical variables are then attracted in phase space by the limit cycle corresponding to backward motion. The robot, which has no distance or contact sensors, hence reverses direction autonomously. Comment: 17 pages, 9 figures
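
The attractor-switching idea can be illustrated with a minimal sketch (a normal-form toy model, not the paper's neural implementation): a single variable w stands in for the wheel's rotation speed, with stable attractors at w = +1 (forward) and w = -1 (backward) separated by an unstable fixpoint at w = 0, so that a brief additive "kick" suffices to move the state into the other basin.

```python
def simulate(w0, kicks, dt=0.01, T=20.0):
    """Euler-integrate dw/dt = w - w**3, a bistable system with
    attractors at w = +1 ("forward") and w = -1 ("backward") and an
    unstable fixpoint at w = 0 separating their basins."""
    w = w0
    n_steps = int(round(T / dt))
    for step in range(n_steps):
        t = step * dt
        w += (w - w**3) * dt              # relax toward the nearest attractor
        for t_kick, amplitude in kicks:
            if abs(t - t_kick) < dt / 2:  # short higher-order "kick" command
                w += amplitude
    return w

# Without a kick the state stays on the "forward" attractor; a single
# kick at t = 10 throws it across w = 0 into the "backward" basin.
undisturbed = simulate(w0=0.9, kicks=[])
kicked = simulate(w0=0.9, kicks=[(10.0, -2.5)])
```

The kick itself can stay short and unspecific: it only needs to carry the state across the basin boundary, after which the self-organized dynamics do the rest.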

    Chaotic exploration and learning of locomotion behaviours

    We present a general and fully dynamic neural system, which exploits intrinsic chaotic dynamics, for the real-time goal-directed exploration and learning of the possible locomotion patterns of an articulated robot of arbitrary morphology in an unknown environment. The controller is modeled as a network of neural oscillators that are initially coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organized dynamics, each of which is a candidate for a locomotion behavior. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states, using its intrinsic chaotic dynamics as a driving force, and stabilizes onto one of the states matching the given goal criteria. In order to improve the sustainability of useful transient patterns, sensory homeostasis has been introduced, which results in an increased diversity of motor outputs, thus achieving multiscale exploration. A rhythmic pattern discovered by this process is memorized and sustained by changing the wiring between initially disconnected oscillators using an adaptive synchronization method. Our results show that the novel neurorobotic system is able to create and learn multiple locomotion behaviors for a wide range of body configurations and physical environments and can readapt in real time after sustaining damage.
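
The chaotic-search-by-adaptive-bifurcation scheme can be caricatured with a single logistic-map unit (an illustrative stand-in, not the paper's oscillator network; the goal band and gains are invented for the sketch): the unit runs in its chaotic regime until its output satisfies a goal criterion, after which the bifurcation parameter slides down into the stable regime, freezing the discovered state.

```python
def chaotic_search(band=(0.55, 0.75), steps=5000):
    """Explore chaotically, then stabilize via adaptive bifurcation.

    The logistic map x <- r*x*(1-x) is chaotic at r = 3.9; once the
    output falls inside the goal band, r is lowered toward 2.8, where
    the map has a stable fixpoint at x* = 1 - 1/r (inside the band).
    """
    x, r = 0.3, 3.9       # start deep in the chaotic regime
    found = False
    for _ in range(steps):
        x = r * x * (1.0 - x)
        if band[0] <= x <= band[1]:
            found = True          # goal criterion met: stop exploring
        if found and r > 2.8:
            r -= 0.01             # adaptive bifurcation toward stability
    return x, r

x_final, r_final = chaotic_search()
```

The chaos supplies the exploratory drive; the parameter adaptation turns the transiently visited goal state into an attractor.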

    Behavior control in the sensorimotor loop with short-term synaptic dynamics induced by self-regulating neurons

    The behavior and skills of living systems depend on the distributed control provided by specialized and highly recurrent neural networks. Learning and memory in these systems are mediated by a set of adaptation mechanisms, known collectively as neuronal plasticity. Translating principles of recurrent neural control and plasticity to artificial agents has seen major strides, but is usually hampered by the complex interactions between the agent's body and its environment. One important open issue is for the agent to support multiple stable states of behavior, so that its behavioral repertoire matches the requirements imposed by these interactions. The agent must also have the capacity to switch between these states on time scales comparable to those on which sensory stimulation varies. Achieving this requires a mechanism of short-term memory that allows the neurocontroller to keep track of the recent history of its input, which finds its biological counterpart in short-term synaptic plasticity. This issue is approached here by deriving synaptic dynamics in recurrent neural networks. Neurons are introduced as self-regulating units with a rich repertoire of dynamics. They exhibit homeostatic properties for certain parameter domains, which result in a set of stable states and the required short-term memory. They can also operate as oscillators, which allows them to surpass the level of activity imposed by their homeostatic operation conditions. Neural systems endowed with the derived synaptic dynamics can be utilized for the neural behavior control of autonomous mobile agents. The resulting behavior depends also on the underlying network structure, which is either engineered or developed by evolutionary techniques. The effectiveness of these self-regulating units is demonstrated by controlling locomotion of a hexapod with 18 degrees of freedom, and obstacle avoidance of a wheel-driven robot. © 2014 Toutounji and Pasemann
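
A flavor of such self-regulation can be given in a few lines (a generic homeostatic rate neuron, not the derivation from the paper): the neuron's bias adapts so that its output tracks a set point, and the adapted bias retains a short-term trace of the recent input history.

```python
import math

def run(inputs, b0=0.0, target=0.5, eta=0.05):
    """Rate neuron with a homeostatically regulated bias b.

    The bias is driven toward whatever value makes the output equal
    the set point `target`; it thereby stores a short-term trace of
    the recent input history.
    """
    b = b0
    outputs = []
    for x in inputs:
        a = math.tanh(x + b)        # instantaneous firing rate
        b += eta * (target - a)     # homeostatic regulation of excitability
        outputs.append(a)
    return outputs, b

# A sustained strong input drives the bias negative; when the input is
# removed, the suppressed response reveals the stored trace.
outs_driven, b_adapted = run([1.0] * 500)
outs_silent, _ = run([0.0] * 5, b0=b_adapted)
```

The same homeostatic variable that stabilizes the neuron's operating point doubles as a short-term memory of what the input has recently been.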

    Imitation Learning of Motion Coordination in Robots: A Dynamical System Approach

    The ease with which humans coordinate all their limbs is fascinating. This simplicity is the result of a complex process of motor coordination, i.e. the ability to resolve the biomechanical redundancy in an efficient and repeatable manner. Coordination enables a wide variety of everyday human activities, from filling a glass with water to pair figure skating. Therefore, it is highly desirable to endow robots with similar skills. Despite the apparent diversity of coordinated motions, all of them share a crucial similarity: these motions are dictated by underlying constraints. The constraints shape the formation of the coordination patterns between the different degrees of freedom. Coordination constraints may take a spatio-temporal form, for instance during bimanual object reaching or while catching a ball on the fly. They may also relate to the dynamics of the task, for instance when one applies a specific force profile to carry a load. In this thesis, we develop a framework for teaching coordination skills to robots. Coordination may take different forms; here, we focus on teaching a robot intra-limb and bimanual coordination, as well as coordination with a human during physical collaborative tasks. We use tools from well-established domains of Bayesian semiparametric learning (Gaussian Mixture Models and Regression, Hidden Markov Models), nonlinear dynamics, and adaptive control. We take a biologically inspired approach to robot control. Specifically, we adopt an imitation learning perspective to skill transfer, which offers a seamless and intuitive way of capturing the constraints contained in natural human movements. As the robot is taught from motion data provided by a human teacher, we exploit evidence from human motor control that the temporal evolution of human motions may be described by dynamical systems. Throughout this thesis, we demonstrate that the dynamical system view on movement formation facilitates coordination control in robots.
We explain how our framework for teaching coordination to a robot is built up, starting from intra-limb coordination and control, moving to bimanual coordination, and finally to physical interaction with a human. The dissertation opens with the discussion of learning discrete task-level coordination patterns, such as spatio-temporal constraints emerging between the two arms in bimanual manipulation tasks. The encoding of bimanual constraints occurs at the task level and proceeds through a discretization of the task as sequences of bimanual constraints. Once the constraints are learned, the robot utilizes them to couple the two dynamical systems that generate kinematic trajectories for the hands. Explicit coupling of the dynamical systems ensures accurate reproduction of the learned constraints, and proves to be crucial for successful accomplishment of the task. In the second part of this thesis, we consider learning one-arm control policies. We present an approach to extracting non-linear autonomous dynamical systems from kinematic data of arbitrary point-to-point motions. The proposed method aims to tackle the fundamental questions of learning robot coordination: (i) how to infer a motion representation that captures a multivariate coordination pattern between degrees of freedom and that generalizes this pattern to unseen contexts; (ii) whether the policy learned directly from demonstrations can provide robustness against spatial and temporal perturbations. Finally, we demonstrate that the developed dynamical system approach to coordination may go beyond kinematic motion learning. We consider physical interactions between a robot and a human in situations where they jointly perform manipulation tasks; in particular, the problem of collaborative carrying and positioning of a load. We extend the approach proposed in the second part of this thesis to incorporate haptic information into the learning process. 
As a result, the robot adapts its kinematic motion plan according to human intentions expressed through the haptic signals. Even after the robot has learned the task model, the human still remains a complex contact environment. To ensure robustness of the robot behavior in the face of the variability inherent to human movements, we wrap the learned task model in an adaptive impedance controller with automatic gain tuning. The techniques developed in this thesis have been applied to enable learning of unimanual and bimanual manipulation tasks on the robotics platforms HOAP-3, KATANA, and i-Cub, as well as to endow a pair of simulated robots with the ability to perform a manipulation task in physical collaboration.
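
The dynamical-system view of point-to-point motion can be sketched with a deliberately simplified linear system (the thesis works with nonlinear systems learned via Gaussian Mixture Regression; the linear least-squares fit below is only a stand-in, and all data are synthetic): a policy xdot = A (x - goal) is estimated from demonstrations, and because motion is generated by an attractor rather than a time-indexed trajectory, a spatially perturbed start still converges to the goal.

```python
import numpy as np

def fit_linear_ds(X, Xdot, goal):
    """Least-squares estimate of A in xdot = A @ (x - goal)
    from demonstrated states X and velocities Xdot."""
    M, *_ = np.linalg.lstsq(X - goal, Xdot, rcond=None)
    return M.T

def rollout(A, goal, x0, dt=0.01, steps=2000):
    """Integrate the learned policy from an arbitrary start."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ (x - goal))
    return x

# Synthetic "demonstrations" drawn from a known stable system.
A_true = np.array([[-2.0, 0.0], [0.0, -3.0]])
goal = np.array([1.0, 2.0])
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 3.0, size=(50, 2))
Xdot = (X - goal) @ A_true.T

A_hat = fit_linear_ds(X, Xdot, goal)
x_end = rollout(A_hat, goal, x0=[5.0, -4.0])  # perturbed start, far from the data
```

Robustness to spatial perturbation falls out of the representation: the goal is an attractor of the learned vector field, so there is no reference trajectory to re-plan.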

    Introducing a Pictographic Language for Envisioning a Rich Variety of Enactive Systems with Different Degrees of Complexity

    Notwithstanding the considerable amount of progress that has been made in recent years, the parallel fields of cognitive science and cognitive systems lack a unifying methodology for describing, understanding, simulating and implementing advanced cognitive behaviours. Growing interest in 'enactivism' - as pioneered by the Chilean biologists Humberto Maturana and Francisco Varela - may lead to new perspectives in these areas, but a common framework for expressing many of the key concepts is still missing. This paper attempts to lay a tentative foundation in that direction by extending Maturana and Varela's pictographic depictions of autopoietic unities to create a rich visual language for envisioning a wide range of enactive systems - natural or artificial - with different degrees of complexity. It is shown how such a diagrammatic taxonomy can help in the comprehension of important relationships between a variety of complex concepts from a pan-theoretic perspective. In conclusion, it is claimed that visual language is not only valuable for teaching and learning, but also offers important insights into the design and implementation of future advanced robotic systems

    Embodied neuromorphic intelligence

    The design of robots that interact autonomously with the environment and exhibit complex behaviours is an open challenge that can benefit from understanding what makes living beings fit to act in the world. Neuromorphic engineering studies neural computational principles to develop technologies that can provide a computing substrate for building compact and low-power processing systems. We discuss why endowing robots with neuromorphic technologies – from perception to motor control – represents a promising approach for the creation of robots that can seamlessly integrate into society. We present initial attempts in this direction, highlight open challenges, and propose actions required to overcome current limitations

    Emergent coordination between humans and robots

    Emergent coordination, or movement synchronization, is an often observed phenomenon in human behavior. Humans synchronize their gait when walking next to each other, they synchronize their postural sway when standing closely, and they also synchronize their movement behavior in many other situations of daily life. Why humans do this is an important question of ongoing research in many disciplines: apparently movement synchronization plays a role in children's development and learning; it is related to our social and emotional behavior in interaction with others; it is an underlying principle in the organization of communication by means of language and gesture; and finally, models explaining movement synchronization between two individuals can also be extended to group behavior. Overall, one can say that movement synchronization is an important principle of human interaction behavior. Besides interacting with other humans, humans increasingly interact with technology. This was first expressed in the interaction with machines in industrial settings, was taken further in human-computer interaction, and now faces a new challenge: the interaction with active and autonomous machines, the interaction with robots. If the vision of today's robot developers comes true, in the near future robots will be fully integrated not only in our workplaces, but also in our private lives. They are supposed to support humans in activities of daily living and even care for them. These circumstances, however, require the development of interactional principles which the robot can apply to the direct interaction with humans. In this dissertation the problem of robots entering the human society is outlined, and the need for the exploration of human interaction principles that are transferable to human-robot interaction is emphasized.
Furthermore, an overview of human movement synchronization as a very important phenomenon in human interaction is given, ranging from neural correlates to social behavior. The argument of this dissertation is that human movement synchronization is a simple but striking human interaction principle that can be applied in human-robot interaction to support human activities of daily living, demonstrated on the example of pick-and-place tasks. This argument is based on five publications. In the first publication, human movement synchronization is explored in goal-directed tasks which bear similar requirements to the pick-and-place tasks of activities of daily living. In order to explore whether a merely repetitive action of the robot is sufficient to encourage human movement synchronization, the second publication reports a human-robot interaction study in which a human interacts with a non-adaptive robot. Here, however, movement synchronization between human and robot does not emerge, which underlines the need for adaptive mechanisms. Therefore, in the third publication, human adaptive behavior in goal-directed movement synchronization is explored. In order to make the findings from the previous studies applicable to human-robot interaction, the fourth publication outlines the development of an interaction model based on dynamical systems theory that is ready for implementation on a robotic platform. Following this, a brief overview of a first human-robot interaction study based on the developed interaction model is provided. The last publication describes an extension of the previous approach which also accounts for the human tendency to align movements with events. Here, a first human-robot interaction study is also reported, which confirms the applicability of the model.
The dissertation concludes with a discussion of the presented findings in light of human-robot interaction and psychological aspects of joint action research, as well as the problem of mutual adaptation.
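
The dynamical-systems flavor of such an interaction model can be conveyed with a textbook two-oscillator sketch (standard coupled phase oscillators, not the model developed in the dissertation; the frequencies and gain are illustrative): without adaptation the phase difference drifts, echoing the negative result with the non-adaptive robot, while an adaptive coupling term lets the robot entrain to the human rhythm.

```python
import math

def phase_difference(k, w_human=1.0, w_robot=1.3, dt=0.01, steps=20000):
    """Two phase oscillators; k is the robot's adaptive coupling gain.

    With k = 0 the phases drift apart (no synchronization); with a
    sufficiently large k the robot phase-locks to the human.
    """
    p_h, p_r = 0.0, 2.0
    for _ in range(steps):
        dp_h = w_human                            # human: fixed rhythm (simplified)
        dp_r = w_robot + k * math.sin(p_h - p_r)  # robot adapts toward the human
        p_h += dp_h * dt
        p_r += dp_r * dt
    return (p_h - p_r) % (2.0 * math.pi)

locked = phase_difference(k=1.0)    # settles to a constant phase offset
drifting = phase_difference(k=0.0)  # non-adaptive: no entrainment
```

Locking requires the coupling to exceed the frequency mismatch (here k ≥ 0.3), which mirrors the empirical finding that a merely repetitive robot is not enough: some adaptation on the robot's side is needed for synchronization to emerge.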

    Survey: Robot Programming by Demonstration

    Robot programming by demonstration (PbD) started about 30 years ago and has grown substantially during the past decade. The rationale for moving from purely preprogrammed robots to very flexible user-based interfaces for training the robot to perform a task is threefold. First and foremost, PbD, also referred to as imitation learning, is a powerful mechanism for reducing the complexity of search spaces for learning. When observing either good or bad examples, one can reduce the search for a possible solution by either starting the search from the observed good solution (a local optimum), or, conversely, by eliminating from the search space what is known to be a bad solution. Imitation learning is thus a powerful tool for enhancing and accelerating learning in both animals and artifacts. Second, imitation learning offers an implicit means of training a machine, such that explicit and tedious programming of a task by a human user can be minimized or eliminated. Imitation learning is thus a "natural" means of interacting with a machine that is accessible to lay people. And third, studying and modeling the coupling of perception and action, which is at the core of imitation learning, helps us to understand the mechanisms by which the self-organization of perception and action could arise during development. The reciprocal interaction of perception and action could explain how competence in motor control can be grounded in the rich structure of perceptual variables and, vice versa, how the processes of perception can develop as a means to create successful actions. The promises of PbD were thus multiple. On the one hand, one hoped that it would make learning faster, in contrast to tedious reinforcement learning or trial-and-error methods. On the other hand, one expected that the methods, being user-friendly, would enhance the application of robots in human daily environments.
Recent progress in the field, which we review in this chapter, shows that the field has made a leap forward over the past decade toward these goals and that these promises may soon be fulfilled
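
The first rationale, seeding the search from an observed good solution, can be made concrete with a toy optimization (illustrative only, not a method from the chapter): on a tilted double-well cost, gradient descent started from a "demonstrated" solution reaches the global minimum, while a naive start is trapped in the shallower local minimum.

```python
def cost(x):
    """Tilted double well: global minimum near x = -1.04,
    shallower local minimum near x = 0.96."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def descend(x, lr=0.01, steps=2000):
    """Plain gradient descent on the cost above."""
    for _ in range(steps):
        grad = 4.0 * x * (x * x - 1.0) + 0.3
        x -= lr * grad
    return x

from_demo = descend(-1.2)    # search seeded near the demonstrated solution
from_scratch = descend(1.2)  # naive start falls into the local minimum
```

The demonstration does not have to be optimal: it only has to lie in the right basin for the subsequent local search to finish the job.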

    Chaotic exploration and learning of locomotor behaviours

    Recent developments in the embodied approach to understanding the generation of adaptive behaviour suggest that the design of adaptive neural circuits for rhythmic motor patterns should not be done in isolation from an appreciation, and indeed exploitation, of neural-body-environment interactions. Utilising spontaneous mutual entrainment between neural systems and physical bodies provides a useful passage to the regions of phase space which are naturally structured by the neural-body-environment interactions. A growing body of work has provided evidence that chaotic dynamics can be useful in allowing embodied systems to spontaneously explore potentially useful motor patterns. However, until now there has been no general integrated neural system that allows goal-directed, online, real-time exploration and capture of motor patterns without recourse to external monitoring, evaluation, or training methods. For the first time, we introduce such a system in the form of a fully dynamic neural system, exploiting intrinsic chaotic dynamics, for the exploration and learning of the possible locomotion patterns of an articulated robot of arbitrary morphology in an unknown environment. The controller is modelled as a network of neural oscillators which are coupled only through physical embodiment, and goal-directed exploration of coordinated motor patterns is achieved by a chaotic search using adaptive bifurcation. The phase space of the indirectly coupled neural-body-environment system contains multiple transient or permanent self-organised dynamics, each of which is a candidate for a locomotion behaviour. The adaptive bifurcation enables the system orbit to wander through various phase-coordinated states using its intrinsic chaotic dynamics as a driving force, and stabilises the system onto one of the states matching the given goal criteria.
In order to improve the sustainability of useful transient patterns, sensory homeostasis has been introduced, which results in an increased diversity of motor outputs, thus achieving multi-scale exploration. A rhythmic pattern discovered by this process is memorised and sustained by changing the wiring between initially disconnected oscillators using an adaptive synchronisation method. The dynamical nature of the weak coupling through physical embodiment allows this adaptive weight learning to be easily integrated, thus forming a continuous exploration-learning system. Our results show that the novel neuro-robotic system is able to create and learn a number of emergent locomotion behaviours for a wide range of body configurations and physical environments, and can re-adapt after sustaining damage. The implications and analyses of these results for investigating the generality and limitations of the proposed system are discussed.