
    The Morphological Computation Principles as a New Paradigm for Robotic Design

    A theory, by definition, is a generalization of observations of some phenomenon, and a principle is a law or rule to be followed as a guideline. Their formalization is a creative process that follows specific, well-attested steps. The following sections reproduce this logical flow by presenting the principle of Morphological Computation as a timeline: first, observations of the phenomenon in Nature are reported in relation to some recent theories; these are then linked to current applications in artificial systems; and finally, further applications, challenges, and objectives project the principle into future scenarios.

    Opinions and Outlooks on Morphological Computation

    Morphological Computation is based on the observation that biological systems seem to carry out relevant computations with their morphology (physical body) in order to successfully interact with their environments. This can be observed in a whole range of systems and at many different scales. It has been studied in animals – e.g., while running, the functionality of coping with impact and slight unevenness in the ground is "delivered" by the shape of the legs and the damped elasticity of the muscle-tendon system – and in plants, but it has also been observed at the cellular and even at the molecular level – as seen, for example, in spontaneous self-assembly. The concept of morphological computation has served as an inspirational resource to build bio-inspired robots, to design novel approaches for support systems in health care, and to implement computation with natural systems, and it has also found its way into art and architecture. As a consequence, the field is highly interdisciplinary, which is also nicely reflected in the wide range of authors featured in this e-book. We have contributions from robotics, mechanical engineering, health, architecture, biology, philosophy, and other fields.

    Resonant hopping of a robot controlled by an artificial neural oscillator

    "The bouncing gaits of terrestrial animals (hopping, running, trotting) can be modeled as a hybrid dynamic system, with spring-mass dynamics during stance and ballistic motion during the aerial phase. We used a simple hopping robot controlled by an artificial neural oscillator to test the ability of the neural oscillator to adaptively drive this hybrid dynamic system. The robot had a single joint, actuated by an artificial pneumatic muscle in series with a tendon spring. We examined how the oscillator-robot system responded to variation in two neural control parameters: descending neural drive and neuromuscular gain. We also tested the ability of the oscillator-robot system to adapt to variations in mechanical properties by changing the series and parallel spring stiffnesses. Across a 100-fold variation in both supraspinal gain and muscle gain, hopping frequency changed by less than 10%. The neural oscillator consistently drove the system at the resonant half-period for the stance phase, and adapted to a new resonant half-period when the muscle series and parallel stiffnesses were altered. Passive cycling of elastic energy in the tendon accounted for 70-79% of the mechanical work done during each hop cycle. Our results demonstrate that hopping dynamics were largely determined by the intrinsic properties of the mechanical system, not the specific choice of neural oscillator parameters. The findings provide the first evidence that an artificial neural oscillator will drive a hybrid dynamic system at partial resonance."http://deepblue.lib.umich.edu/bitstream/2027.42/64204/1/bb8_2_026001.pd

    Fast biped walking with a neuronal controller and physical computation

    Biped walking remains a difficult problem and robot models can greatly facilitate our understanding of the underlying biomechanical principles as well as their neuronal control. The goal of this study is to specifically demonstrate that stable biped walking can be achieved by combining the physical properties of the walking robot with a small, reflex-based neuronal network, which is governed mainly by local sensor signals. This study shows that human-like gaits emerge without specific position or trajectory control and that the walker is able to compensate small disturbances through its own dynamical properties. The reflexive controller used here has the following characteristics, which are different from earlier approaches: (1) Control is mainly local. Hence, it uses only two signals (AEA = Anterior Extreme Angle and GC = Ground Contact) which operate at the inter-joint level. All other signals operate only at single joints. (2) Neither position control nor trajectory tracking control is used. Instead, the approximate nature of the local reflexes on each joint allows the robot mechanics itself (e.g., its passive dynamics) to contribute substantially to the overall gait trajectory computation. (3) The motor control scheme used in the local reflexes of our robot is more straightforward and has more biological plausibility than that of other robots, because the outputs of the motorneurons in our reflexive controller directly drive the motors of the joints, rather than working as references for position or velocity control. As a consequence, the neural controller and the robot mechanics are closely coupled as a neuro-mechanical system and this study emphasises that dynamically stable biped walking gaits emerge from the coupling between neural computation and physical computation. This is demonstrated by different walking experiments using two real robots as well as by a Poincaré map analysis applied to a model of the robot in order to assess its stability. In addition, this neuronal control structure allows the use of a policy gradient reinforcement learning algorithm to tune the parameters of the neurons in real time, during walking. This way the robot can reach a record-breaking walking speed of 3.5 leg-lengths per second after only a few minutes of online learning, which is even comparable to the fastest relative speed of human walking.
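    As a rough illustration of how a local reflex can drive a joint motor directly, without position or trajectory control, the sketch below implements a minimal stance/swing reflex rule; the signal names, threshold, and gain are hypothetical and do not reproduce the authors' controller.

```python
# Minimal sketch of a reflex-based leg controller of the kind described
# above: the reflex output drives the joint motor directly. The sensor
# names (ground_contact, hip_angle), threshold, and gain are illustrative.

def hip_motor_command(ground_contact: bool, hip_angle: float,
                      aea_threshold: float = 0.3, gain: float = 1.0) -> float:
    """Extensor-like reflex: retract the leg while the foot is on the
    ground, protract it once the anterior extreme angle is not yet reached,
    and otherwise let the passive dynamics finish the swing."""
    if ground_contact:
        return -gain          # stance: push the body forwards
    if hip_angle < aea_threshold:
        return +gain          # swing: move the leg towards its AEA
    return 0.0                # no drive; passive dynamics take over
```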

    Information driven self-organization of complex robotic behaviors

    Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predictive information (TiPI), which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which prevents learning systems from scaling well.
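    To make the driving quantity concrete, the sketch below estimates the simplest one-step predictive information of a discretized sensor stream as the mutual information between consecutive states; this is only an illustration of the measure, not the paper's time-local (TiPI) formulation for continuous, nonstationary dynamics.

```python
import numpy as np

def predictive_information(symbols, num_states):
    """Empirical one-step predictive information I(s_t; s_{t+1}) in bits
    for a discretized sensor sequence (window-1 estimator, illustrative)."""
    joint = np.zeros((num_states, num_states))
    for a, b in zip(symbols[:-1], symbols[1:]):
        joint[a, b] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mask = joint > 0
    return float((joint[mask] *
                  np.log2(joint[mask] / np.outer(px, py)[mask])).sum())

# Example: a "sticky" binary sensor that keeps its state with probability 0.9
rng = np.random.default_rng(0)
s = [0]
for _ in range(10_000):
    s.append(s[-1] if rng.random() < 0.9 else 1 - s[-1])
print(predictive_information(s, num_states=2))  # close to 1 - H(0.1) ≈ 0.53 bits
```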

    Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform

    Combined efforts in the fields of neuroscience, computer science, and biology have allowed researchers to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Because these brain models are, at the current stage, too complex to meet real-time constraints, it is not possible to embed them in a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that makes it easy to establish communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of programming skill required, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain–body connectors. In addition, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 "Neurorobotics" of the Human Brain Project (HBP). In its current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking task embedding a retina model on the iCub humanoid robot. These use cases allow us to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (Human Brain Project) and from the European Union's Horizon 2020 Research and Innovation Programme under Grant Agreement No. 720270 (HBP SGA1).
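    The Braitenberg task mentioned above reduces to a very small control rule; the sketch below shows a generic cross-coupled sensor-to-wheel mapping of that kind. It is written as plain Python for illustration and is not based on the Neurorobotics Platform's actual API.

```python
def braitenberg_wheel_speeds(left_sensor: float, right_sensor: float,
                             base_speed: float = 0.2, gain: float = 1.0):
    """Cross-coupled Braitenberg wiring: the left sensor excites the
    right wheel and vice versa, so the robot turns towards the stimulus."""
    left_wheel = base_speed + gain * right_sensor
    right_wheel = base_speed + gain * left_sensor
    return left_wheel, right_wheel

# Stimulus stronger on the right -> left wheel speeds up -> robot turns right.
print(braitenberg_wheel_speeds(left_sensor=0.1, right_sensor=0.8))
```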

    Probabilistic Models of Motor Production

    N. Bernstein defined the ability of the central nervous system (CNS) to control the many degrees of freedom of a physical body, with all its redundancy and flexibility, as the main problem of motor control. He pointed out that man-made mechanisms usually have one, sometimes two degrees of freedom (DOF); when the number of DOF increases further, it becomes prohibitively hard to control them. The brain, however, seems to perform such control effortlessly. He suggested how the brain might deal with this: when a motor skill is being acquired, the brain artificially limits the degrees of freedom, leaving only one or two. As the skill level increases, the brain gradually "frees" the previously fixed DOF, applying control when needed and in the directions which have to be corrected, eventually arriving at a control scheme in which all the DOF are "free". This approach of reducing the dimensionality of motor control remains relevant today. One possible solution to Bernstein's problem is the hypothesis of motor primitives (MPs) - small building blocks that constitute complex movements and facilitate motor learning and task completion. Just as in the visual system, having a homogeneous hierarchical architecture built of similar computational elements may be beneficial. When studying such a complicated object as the brain, it is important to define at which level of detail one works and which questions one aims to answer. David Marr suggested three levels of analysis: 1. computational, analysing which problem the system solves; 2. algorithmic, asking which representation the system uses and which computations it performs; 3. implementational, finding how such computations are performed by neurons in the brain. In this thesis we stay at the first two levels, seeking the basic representation of motor output. In this work we present a new model of motor primitives that comprises multiple interacting latent dynamical systems, and give it a full Bayesian treatment. Modelling within the Bayesian framework, in my opinion, must become the new standard in hypothesis testing in neuroscience. Only the Bayesian framework gives us guarantees when dealing with the inevitable plethora of hidden variables and uncertainty. The special type of coupling of dynamical systems we propose, based on the Product of Experts, has many natural interpretations in the Bayesian framework. If the dynamical systems run in parallel, it yields Bayesian cue integration. If they are organized hierarchically due to serial coupling, we get hierarchical priors over the dynamics. If one of the dynamical systems represents a sensory state, we arrive at sensory-motor primitives. The compact representation that follows from the variational treatment allows learning of a library of motor primitives. Once primitives are learned separately, a combined motion can be represented as a matrix of coupling values. We performed a set of experiments to compare different models of motor primitives. In a series of 2-alternative forced choice (2AFC) experiments, participants discriminated natural and synthesised movements, thus running a graphics Turing test. When available, the Bayesian model score predicted the naturalness of the perceived movements. For simple movements, like walking, Bayesian model comparison and psychophysics tests indicate that one dynamical system is sufficient to describe the data. For more complex movements, like walking and waving, motion can be better represented as a set of coupled dynamical systems.
We also experimentally confirmed that Bayesian treatment of model learning on motion data is superior to a simple point estimate of the latent parameters. Experiments with non-periodic movements show that they do not benefit from more complex latent dynamics, despite having high kinematic complexity. By having fully Bayesian models, we could quantitatively disentangle the influence of motion dynamics and pose on the perception of naturalness. We confirmed that rich and correct dynamics are more important than the kinematic representation. There are numerous further directions of research. In the models we devised, even though the latent dynamics was factorized into a set of interacting systems, the kinematic parts for the multiple body parts were completely independent. Thus, interaction between the kinematic parts could be mediated only by the latent dynamics interactions. A more flexible model would allow dense interaction on the kinematic level too. Another important problem relates to the representation of time in Markov chains. Discrete-time Markov chains form an approximation to continuous dynamics. As the time step is assumed to be fixed, we face the problem of time-step selection. Time is also not an explicit parameter in Markov chains, which prohibits explicit optimization of, and reasoning (inference) about, time. For example, in optimal control the boundary conditions are usually set at exact time points, which is not an ecological scenario, where time is usually a parameter of the optimization. Making time an explicit parameter of the dynamics may alleviate this.
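    As a toy illustration of the Product-of-Experts coupling mentioned above, the sketch below combines two one-dimensional Gaussian experts by multiplying their densities, which yields a precision-weighted prediction; this is only the basic PoE idea, not the thesis's full latent dynamical-system model.

```python
import numpy as np

def product_of_gaussian_experts(means, variances):
    """Combine 1-D Gaussian experts by multiplying their densities:
    the result is Gaussian with a precision-weighted mean, so the more
    confident (lower-variance) expert dominates the combined prediction."""
    precisions = 1.0 / np.asarray(variances, dtype=float)
    combined_var = 1.0 / precisions.sum()
    combined_mean = combined_var * (precisions * np.asarray(means, dtype=float)).sum()
    return combined_mean, combined_var

# Two experts predicting the next latent state; the first is 10x more precise.
print(product_of_gaussian_experts([0.0, 1.0], [0.1, 1.0]))  # mean ≈ 0.09
```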

    Kinematic Basis for Body Specific Locomotor Mechanics and Perturbation Responses

    Animals have evolved mechanical and neural strategies for locomotion in almost every environment, overcoming the complexities of their habitats using specializations in body structure and behavior. These specializations are created by neural networks responsible for generating and altering muscle activation. Species-specific musculoskeletal anatomy and physiology determine how locomotion is controlled through the transformation of motor patterns into body movements. Furthermore, when these species-specific locomotor systems encounter perturbations during running and walking, their behavioral and mechanical attributes determine how stability is established during and after the perturbation. It is still not understood how species-specific structural and behavioral variables contribute to locomotion in non-uniform environments. To understand how these locomotor properties produce unique gaits and stability strategies, we compared three species of brachyuran crabs during normal and perturbed running. Although all crabs ran sideways, morphological and kinematic differences explained how each species produced its unique gait and stability response. Despite the differences in running behavior and perturbation response, animals tended to use locomotor resources that were in abundance during stabilizing responses. Each crab regained stability during the perturbation response by altering leg joint movements or harnessing the body's momentum. These species' body designs and running behaviors show how slight changes in body structure and joint kinematics can produce locomotor systems with unique mechanical profiles and abilities. Understanding how evolutionary pressures have optimized animals' locomotor ability to successfully move in different environments will provide a deeper understanding of how to mimic these movements through mathematical models and robotics.