    Respiratory, postural and spatio-kinetic motor stabilization, internal models, top-down timed motor coordination and expanded cerebello-cerebral circuitry: a review

    Human dexterity, bipedality, and song/speech vocalization in Homo are reviewed within a motor evolution perspective in regard to 

(i) brain expansion in cerebello-cerebral circuitry, 
(ii) enhanced predictive internal modeling of body kinematics, body kinetics and action organization, 
(iii) motor mastery due to prolonged practice, 
(iv) task-determined, top-down, and accurately timed feedforward motor adjustment of multiple body/artifact elements, and 
(v) reduction in automatic preflex/spinal reflex mechanisms that would otherwise restrict such top-down processes. 

Dual-task interference and developmental neuroimaging research argues that such internal-model-based motor capabilities are concomitant with the evolution of 
(vi) enhanced attentional, executive function and other high-level cognitive processes, and that 
(vii) these provide dexterity, bipedality and vocalization with effector-nonspecific neural resources. 

The possibility is also raised that such neural resources could 
(viii) underlie human internal-model-based nonmotor cognitions. 

    Evolutionary robotics and neuroscience

    No description supplied.

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.
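
    The under-determined vs. over-actuated distinction raised in this abstract can be made concrete with a toy redundant-actuation example. The sketch below is not from the paper; the moment-arm matrix, torque target and all numbers are invented for illustration of why a tendon-driven system admits infinitely many muscle-tension solutions for the same joint torque.

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical planar system: 2 joints driven by 4 muscles (over-actuated).
# The moment-arm matrix R maps muscle tensions f to joint torques tau = R @ f.
# All values are illustrative only.
R = np.array([[0.02, -0.02,  0.01, -0.01],
              [0.00,  0.01, -0.02,  0.02]])    # moment arms (m)
tau_desired = np.array([0.5, -0.3])            # desired joint torques (Nm)

# Minimum-norm tension solution via the pseudoinverse ...
f_min = np.linalg.pinv(R) @ tau_desired

# ... plus any combination of null-space vectors leaves the torque unchanged:
# the redundancy behind "under-determined mechanics / over-actuated control".
# (Real muscles can only pull, so feasible solutions are further constrained
# to non-negative tensions, one of the feasibility issues the paper discusses.)
N = null_space(R)                              # 4x2 basis of torque-free tension changes
f_alt = f_min + N @ np.array([5.0, -2.0])      # a different, equally valid solution

print(np.allclose(R @ f_min, tau_desired), np.allclose(R @ f_alt, tau_desired))
```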

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that are used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control. Funding: National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499).
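
    As a rough illustration of the spatial-direction-to-joint-rotation mapping described above: the generic sketch below uses a Jacobian pseudoinverse on a redundant planar arm, not the DIRECT model's learned, self-organizing transform, and the link lengths, target and gain are made up. It shows how the same spatial target can be reached through many joint combinations by repeatedly converting a spatial direction vector into small joint rotations.

```python
import numpy as np

def fk(q, link_lengths):
    """Planar forward kinematics: joint angles -> end-effector position."""
    x = y = angle = 0.0
    for qi, li in zip(q, link_lengths):
        angle += qi
        x += li * np.cos(angle)
        y += li * np.sin(angle)
    return np.array([x, y])

def jacobian(q, link_lengths, eps=1e-6):
    """Numerical Jacobian of the planar arm (central differences)."""
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        dq = np.zeros(len(q)); dq[i] = eps
        J[:, i] = (fk(q + dq, link_lengths) - fk(q - dq, link_lengths)) / (2 * eps)
    return J

# Redundant 3-joint planar arm reaching a 2-D target (illustrative values).
q = np.array([0.3, -0.2, 0.5])
links = [0.3, 0.25, 0.2]
target = np.array([0.4, 0.35])

for _ in range(200):
    direction = target - fk(q, links)                    # spatial direction vector
    dq = np.linalg.pinv(jacobian(q, links)) @ (0.1 * direction)  # joint rotations
    q = q + dq

print(fk(q, links), target)   # end effector converges onto the target
```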

    Dynamic Analysis of Recurrent Neural Networks

    With the advancement of deep learning research, neural networks have become one of the most powerful tools for artificial intelligence tasks. More specifically, recurrent neural networks (RNNs) have achieved state-of-the-art results in tasks such as handwriting recognition and speech recognition. Despite the success of recurrent neural networks, how and why neural networks work is still not sufficiently understood. My work on the dynamical analysis of recurrent neural networks helps explain how input features are extracted in the recurrent layer, how RNNs make decisions, and how the chaotic dynamics of RNNs affect their behavior. Firstly, I investigated the dynamics of recurrent neural networks as autonomous dynamical systems in an experiment on a two-joint limb control task and compared the empirical results with the theoretical analysis. Secondly, I investigated the dynamics of non-autonomous recurrent neural networks on two benchmark tasks: the sequential MNIST recognition task and the DNA splice-junction classification task. Experiments demonstrate how the hidden states of long short-term memory (LSTM) and gated recurrent unit (GRU) cells learn new features and how information is extracted from the input sequence. Finally, based on this understanding of the external and internal dynamics of recurrent units, I proposed several algorithms for recurrent neural network compression. The algorithms achieve reasonable compression ratios and are able to sustain the performance of the original models.
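
    A generic sketch of the kind of fixed-point analysis used when treating an RNN as an autonomous dynamical system, in the spirit of the study above: the weights below are random placeholders rather than the author's trained models. Fixed points of h = tanh(W h + b) are found by minimising the update residual, and the eigenvalues of the linearisation indicate whether the local dynamics are stable or expanding (chaotic).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 8                                    # illustrative hidden size
W = rng.normal(scale=1.5 / np.sqrt(n), size=(n, n))
b = rng.normal(scale=0.1, size=n)

def step(h):
    """One autonomous RNN update (no external input)."""
    return np.tanh(W @ h + b)

def speed(h):
    """Squared update residual; zero exactly at a fixed point."""
    d = step(h) - h
    return 0.5 * d @ d

# Search for a fixed point from a random initial state.
res = minimize(speed, rng.normal(size=n), method="L-BFGS-B")
h_star = res.x

# Linearise around the candidate fixed point: J = diag(1 - tanh^2) @ W.
# Eigenvalues inside the unit circle mean local stability; large ones
# indicate the stretching associated with chaotic behavior.
J = (1 - np.tanh(W @ h_star + b) ** 2)[:, None] * W
print("residual speed:", speed(h_star))
print("spectral radius:", max(abs(np.linalg.eigvals(J))))
```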

    Exploiting Multimodal Information in Deep Learning

    Humans are good at using multimodal information to perceive and interact with the world. Such information includes visual, auditory, and kinesthetic signals. Despite the advances in single-modality deep learning over the past decade, relatively few works have focused on multimodal learning, and even among those, most consider only a small number of modalities. This dissertation investigates three distinct forms of multimodal learning: multiple visual modalities as input, audio-visual multimodal input, and visual and proprioceptive (kinesthetic) multimodal input. Specifically, in the first project we investigate synthesizing light fields from a single image and estimated depth. In the second project, we investigate face recognition for unconstrained videos with audio-visual multimodal inputs. Finally, we investigate learning to construct and use tools with visual, proprioceptive and kinesthetic multimodal inputs. In the first task, we investigate synthesizing light fields from a single RGB image and its estimated depth. Synthesizing novel views (light fields) from a single image is very challenging because depth information, which is crucial for view synthesis, is lost. We propose to use a pre-trained model to estimate the depth and then fuse the depth information with the RGB image to generate the light fields. Our experiments showed that multimodal input (RGB image and depth) significantly improved performance over single-image input. In the second task, we focus on face recognition for low-quality videos. For low-quality videos such as low-resolution online videos and surveillance videos, recognizing faces from video frames alone is very challenging. We propose to use the audio information in the video clip to aid the face recognition task. To achieve this goal, we propose the Audio-Visual Aggregation Network (AVAN), which aggregates audio features and visual features using an attention mechanism. Empirical results show that our approach using both visual and audio information significantly improves face recognition accuracy on unconstrained videos. Finally, in the third task, we propose to use visual, proprioceptive and kinesthetic inputs to learn to construct and use tools. Tool use in animals indicates a high level of cognitive capability; aside from humans, it is observed only in a small number of higher mammals and avian species, and constructing novel tools is an even more demanding task. Learning this task with only visual input is difficult; we therefore propose to use visual and proprioceptive (kinesthetic) inputs to accelerate learning. We build a physically simulated environment for the tool construction task and introduce a hierarchical reinforcement learning approach to learn to construct tools and reach the target without any prior knowledge. The main contribution of this dissertation is the investigation of multiple scenarios where multimodal processing leads to enhanced performance. We expect the specific methods developed in this work, such as the extraction of hidden modalities (depth), the use of attention, and hierarchical rewards, to help us better understand multimodal processing in deep learning.
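
    A minimal sketch of attention-based feature aggregation in the spirit of the audio-visual aggregation described above. AVAN itself is not reproduced here; the embedding dimensions, sequence lengths and scoring vector are placeholders. Per-frame visual and per-window audio embeddings are each scored, softmax-normalised, and collapsed into a single clip-level descriptor that fuses both modalities.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 128                                   # illustrative embedding dimension
T_v, T_a = 16, 40                         # video frames, audio windows

visual = rng.normal(size=(T_v, d))        # stand-ins for per-frame face embeddings
audio = rng.normal(size=(T_a, d))         # stand-ins for per-window voice embeddings
w = rng.normal(scale=0.1, size=d)         # illustrative learned scoring vector

def attend(features, w):
    """Softmax-weighted aggregation of a variable-length feature sequence."""
    scores = features @ w
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ features                # weighted average, shape (d,)

# Concatenate the attended visual and audio summaries into one clip descriptor.
clip_descriptor = np.concatenate([attend(visual, w), attend(audio, w)])
print(clip_descriptor.shape)               # (256,) fused audio-visual descriptor
```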

    On the intrinsic control properties of muscle and reflexes: exploring the interaction between neural and musculoskeletal dynamics in the framework of the equilibrium-point hypothesis

    The aim of this thesis is to examine the relationship between the intrinsic dynamics of the body and its neural control. Specifically, it investigates the influence of musculoskeletal properties on the control signals needed for simple goal-directed movements in the framework of the equilibrium-point (EP) hypothesis. To this end, muscle models of varying complexity are studied in isolation and when coupled to feedback laws derived from the EP hypothesis. It is demonstrated that the dynamical landscape formed by non-linear musculoskeletal models features a stable attractor in joint space whose properties, such as position, stiffness and viscosity, can be controlled through differential- and co-activation of antagonistic muscles. The emergence of this attractor creates a new level of control that reduces the system’s degrees of freedom and thus constitutes a low-level motor synergy. It is described how the properties of this stable equilibrium, as well as transient movement dynamics, depend on the various modelling assumptions underlying the muscle model. The EP hypothesis is then tested on a chosen musculoskeletal model by using an optimal feedback control approach: genetic algorithm optimisation is used to identify feedback gains that produce smooth single- and multijoint movements of varying amplitude and duration. The importance of different feedback components is studied for reproducing invariants observed in natural movement kinematics. The resulting controllers are demonstrated to cope with a plausible range of reflex delays, predict the use of velocity-error feedback for the fastest movements, and suggest that experimentally observed triphasic muscle bursts are an emergent feature rather than centrally planned. Also, control schemes which allow for simultaneous control of movement duration and distance are identified. Lastly, it is shown that the generic formulation of the EP hypothesis fails to account for the interaction torques arising in multijoint movements. Extensions are proposed which address this shortcoming while maintaining its two basic assumptions: control signals in positional rather than force-based frames of reference; and the primacy of control properties intrinsic to the body over internal models. It is concluded that the EP hypothesis cannot be rejected for single- or multijoint reaching movements based on claims that predicted movement kinematics are unrealistic.
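
    A hedged, single-joint sketch of the equilibrium-point idea discussed above. The thesis uses far richer muscle models and reflex delays; the inertia, stiffness, damping and commanded angle below are invented for illustration. The point is that the controller only specifies an equilibrium angle plus spring-like feedback, and the limb settles onto that attractor without an internal model of its own dynamics.

```python
import numpy as np

# Single-joint limb with EP-style positional feedback: the "central command"
# is an equilibrium angle lambda_, realised through stiffness k and damping b.
I, k, b = 0.05, 4.0, 0.6          # illustrative values (kg m^2, Nm/rad, Nm s/rad)
lambda_ = np.deg2rad(45.0)        # commanded equilibrium point

theta, omega = 0.0, 0.0           # initial joint angle and velocity
dt = 0.001
for _ in range(3000):             # simulate 3 s with semi-implicit Euler
    torque = -k * (theta - lambda_) - b * omega   # spring-like muscle feedback
    omega += dt * torque / I
    theta += dt * omega

print(np.rad2deg(theta))          # settles near 45 degrees, no trajectory planning
```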

    Neural Control and Biomechanics of the Octopus Arm Muscular Hydrostat

    Octopus vulgaris is a cephalopod mollusk with outstanding motor capabilities, built upon the action of eight soft and exceptionally flexible appendages. In the absence of any rigid skeletal-like support, the octopus arm works as a “muscular hydrostat” and movement is generated by the antagonistic action of two main muscle groups (longitudinal, L, and transverse, T, muscles) under an isovolumetric constraint. This peculiar anatomical organization evolved along with novel morphological arrangements, biomechanical properties, and motor control strategies aimed at reducing the computational burden of controlling unconstrained appendages endowed with virtually infinite degrees of freedom of motion. Hence, the octopus offers a unique opportunity to study a motor system different from those of skeletal animals, capable of controlling complex and precise motor tasks of eight arms with theoretically infinite degrees of freedom. Here, we investigated the octopus arm motor system using a bottom-up approach. We began by identifying the motor neuron population and characterizing its organization in the arm nervous system. We next performed an extensive biomechanical characterization of the arm muscles, focusing on the morphofunctional properties that are likely to facilitate the dynamic deformations occurring during arm movement. We show that motor neurons cluster in specific regions of the arm ganglia following a topographical organization. In addition, T muscles exhibit biomechanical properties resembling those of vertebrate slow muscles, whereas L muscles are closer to vertebrate fast muscles. This difference is enhanced by the hydrostatic pressure inherently present in the arm, which causes the two muscle groups to operate under different conditions. Interestingly, these features underlie the different use of the arm muscles during specific tasks. Thus, the octopus evolved several arm-embedded adaptations that reduce motor control complexity and increase the energetic efficiency of arm motion. This study is also relevant to the blooming field of soft robotics: an increasing number of researchers are currently aiming to design and construct bio-inspired soft-robotic manipulators, more flexible and versatile than their “hard” counterparts and better suited to performing gentle tasks and interacting with biological tissues. In this context, the octopus has emerged as a pivotal source of inspiration for the motor control principles underlying motion in soft-bodied limbs.
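
    A small illustration of the isovolumetric constraint mentioned above, using a constant-volume cylindrical approximation with made-up dimensions: because the segment's volume is conserved, shortening by the longitudinal muscles necessarily thickens the arm, which is what gives the antagonistic L/T arrangement its leverage in a muscular hydrostat.

```python
import numpy as np

# Treat one arm segment as a constant-volume cylinder (illustrative dimensions).
L0, r0 = 0.10, 0.010                     # initial length (m) and radius (m)
V = np.pi * r0**2 * L0                   # conserved volume

for shortening in (0.0, 0.1, 0.2, 0.3):  # fraction of length removed by L muscles
    L = L0 * (1 - shortening)
    r = np.sqrt(V / (np.pi * L))         # radius forced up by the constraint
    print(f"length {L*100:5.1f} cm -> radius {r*1000:5.2f} mm")
```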

    Genetically evolved dynamic control for quadruped walking

    The aim of this dissertation is to show that dynamic control of quadruped locomotion is achievable through the use of genetically evolved central pattern generators. This strategy is tested both in simulation and on a walking robot. The design of the walker has been chosen to be statically unstable, so that during motion fewer than three supporting feet may be in contact with the ground. The control strategy adopted is capable of propelling the artificial walker at a forward locomotion speed of ~1.5 km/h on rugged terrain and provides for stability of motion. The learning of walking, based on simulated genetic evolution, is carried out in simulation to speed up the process and reduce the amount of damage to the hardware of the walking robot. For this reason, a fast general-purpose dynamic simulator has been developed, able to efficiently compute the forward dynamics of tree-like robotic mechanisms. An optimization process to select stable walking patterns is implemented through a purpose-designed genetic algorithm with stochastic mutation and crossover operators. The algorithm has been tailored to address the high cost of evaluating the optimization function, as well as the characteristics of the parameter space chosen to represent controllers. Experiments carried out under different conditions give clear indications of the potential of the adopted approach. A proof of concept is achieved: stable dynamic walking can be obtained through a search process that identifies attractors in the dynamics of the motor-control system of an artificial walker.
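
    A minimal sketch of the two ingredients the abstract combines: a coupled sinusoidal central pattern generator and a genetic search over its parameters. The fitness function, genome layout, GA settings and all numbers below are placeholders standing in for the dissertation's simulator-based evaluation; they only show the shape of evolving CPG parameters with selection, crossover and mutation.

```python
import numpy as np

rng = np.random.default_rng(2)
N_LEGS, POP, GENS = 4, 30, 50

def cpg(genome, t):
    """Toy CPG: one sinusoidal joint command per leg from (freq, amp, 4 phases)."""
    freq, amp, phases = genome[0], genome[1], genome[2:]
    return amp * np.sin(2 * np.pi * freq * t[:, None] + phases)

def fitness(genome):
    """Placeholder fitness (not a dynamics simulator): reward smooth commands
    whose leg phases are spread evenly around the cycle."""
    t = np.linspace(0, 2, 200)
    joints = cpg(genome, t)
    phase_spread = np.var(np.sort(genome[2:]) - np.linspace(0, 2 * np.pi, N_LEGS))
    return -phase_spread - 0.1 * np.mean(np.abs(np.diff(joints, axis=0)))

# Genome: [frequency, amplitude, phase_1..phase_4], initialised uniformly.
pop = rng.uniform([0.5, 0.2, 0, 0, 0, 0],
                  [2.0, 1.0, 2*np.pi, 2*np.pi, 2*np.pi, 2*np.pi],
                  size=(POP, 2 + N_LEGS))

for _ in range(GENS):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]           # truncation selection
    kids = parents[rng.integers(0, len(parents), POP - len(parents))].copy()
    cut = rng.integers(1, pop.shape[1])                      # one-point crossover
    kids[:, :cut] = parents[rng.integers(0, len(parents), len(kids)), :cut]
    kids += rng.normal(scale=0.05, size=kids.shape)          # Gaussian mutation
    pop = np.vstack([parents, kids])

print("best fitness:", max(fitness(g) for g in pop))
```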