21 research outputs found

    Differences between kinematic synergies and muscle synergies during two-digit grasping

    The large number of mechanical degrees of freedom of the hand is not fully exploited during actual movements such as grasping. Usually, angular movements in various joints tend to be coupled, and EMG activities in different hand muscles tend to be correlated. The occurrence of such covariation has been termed kinematic synergies in the former case and muscle synergies in the latter. This study addresses two questions: (i) whether kinematic and muscle synergies can simultaneously accommodate kinematic and kinetic constraints, and (ii) if so, whether there is an interrelation between kinematic and muscle synergies. We used a reach-grasp-and-pull paradigm and recorded the hand kinematics as well as eight surface EMGs. Subjects had to perform either a precision grip or a side grip and had to modify their grip force in order to displace an object against a low or a high load. The analysis was subdivided into three epochs: reach, grasp-and-pull, and static hold. Principal component analysis (PCA, temporal or static) was performed separately for each of the three epochs, in the kinematic and in the EMG domain. PCA revealed that (i) kinematic and muscle synergies can simultaneously accommodate kinematic (grip type) and kinetic (load condition) task constraints, and (ii) the upcoming grip and load conditions of the grasp are represented in the kinematic and muscle synergies already during reach. Phase-plane plots of the principal muscle synergy against the principal kinematic synergy revealed (iii) that the muscle synergy is linked (correlated, and in phase advance) to the kinematic synergy during reach and during grasp-and-pull. Furthermore, (iv) pair-wise correlations of EMGs during hold suggest that muscle synergies are (in part) implemented by coactivation of muscles through common input. Together, these results suggest that kinematic synergies have (at least in part) their origin not just in muscular activation, but in synergistic muscle activation. In short, kinematic synergies may result from muscle synergies.
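    As a rough illustration of the analysis pipeline described above, the sketch below (not the authors' code) applies PCA separately to synthetic placeholder kinematic and EMG data and cross-correlates the leading components to look for a phase advance of the muscle synergy. Array shapes, component counts and variable names are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.signal import correlate

# Placeholder data standing in for one trial: 1000 samples of 20 joint angles
# and 8 surface EMG envelopes (the real study used recorded hand data).
rng = np.random.default_rng(0)
joint_angles = rng.standard_normal((1000, 20))
emg = np.abs(rng.standard_normal((1000, 8)))

# One PCA per domain; the leading components play the role of the synergies.
kin_pc1 = PCA(n_components=3).fit_transform(joint_angles)[:, 0]
emg_pc1 = PCA(n_components=3).fit_transform(emg)[:, 0]

# Cross-correlate the two time courses. With this argument order, a peak at a
# positive lag means the muscle synergy leads the kinematic synergy in time.
xcorr = correlate(kin_pc1 - kin_pc1.mean(), emg_pc1 - emg_pc1.mean(), mode="full")
lags = np.arange(-(len(emg_pc1) - 1), len(kin_pc1))
print("peak correlation at lag (samples):", lags[np.argmax(xcorr)])
```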

    Mutual influence of firing rates of corticomotoneuronal (CM) cells for learning a precision grip task

    As part of a brain-machine interface, we define a model for learning and forecasting muscular activity, given sparse cortical activity in the form of action potential signals (spike trains). Whereas very impressive results such as [1] exist, in which a reaching task is successfully performed from the sole interpretation of cortical signals, we focus our efforts on formalizing how neural impulses can be transcribed into a flexion of the index finger. We have a collection of experiments in which a trained monkey (Macaca nemestrina) performs a precision grip. Its neuronal activity is partially recorded as the monkey clasps two levers between its index finger and thumb. In these experiments, 33 corticomotoneuronal (CM) cells from the hand area of the motor cortex (area 4) were recorded with glass-insulated platinum-iridium microelectrodes; refer to [2] for more details about retrieving and filtering the data in our particular experiments. The main objective of this work is to treat the data in a way that allows us to provide an effective input/output functional, with the underlying model parameters interpreted with respect to physiological aspects, even though the model itself is not a biophysical one. The method used here is based on a system of first-degree linear equations involving the firing rates of the recorded neurons, two sets of thresholds associated with them, and the variation of the global neuronal activity. The learning formula is validated on a training set and tested on an estimation set.
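    The sketch below is a loose, hypothetical illustration of the kind of read-out described above: binned firing rates of the recorded cells, clipped between two per-cell thresholds, are mapped linearly onto a target signal learned on a training set and evaluated on a held-out set. It does not reproduce the authors' formulation; all data and thresholds are synthetic placeholders.

```python
import numpy as np

# Synthetic stand-in for the recordings: binned firing rates of 33 CM cells
# and a toy muscle/force signal to be forecast (not the real data).
rng = np.random.default_rng(1)
n_cells, n_samples = 33, 2000
rates = rng.poisson(5.0, size=(n_samples, n_cells)).astype(float)
target = rates @ rng.standard_normal(n_cells) + 0.5 * rng.standard_normal(n_samples)

# Two illustrative thresholds per cell: rates below the lower one are ignored,
# rates above the upper one saturate (a crude stand-in for the threshold sets).
lo = np.percentile(rates, 20, axis=0)
hi = np.percentile(rates, 80, axis=0)
x = np.clip(rates, lo, hi) - lo

# Fit the linear read-out on the first half (training set), evaluate on the
# second half (held-out estimation set).
split = n_samples // 2
w, *_ = np.linalg.lstsq(x[:split], target[:split], rcond=None)
pred = x[split:] @ w
print("held-out RMSE:", round(float(np.sqrt(np.mean((pred - target[split:]) ** 2))), 3))
```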

    Integration of Gravitational Torques in Cerebellar Pathways Allows for the Dynamic Inverse Computation of Vertical Pointing Movements of a Robot Arm

    Several authors have suggested that gravitational forces are centrally represented in the brain for planning, control and sensorimotor prediction of movements. Furthermore, some studies proposed that the cerebellum computes the inverse dynamics (internal inverse model), whereas others suggested that it computes sensorimotor predictions (internal forward model). This study proposes a model of cerebellar pathways deduced from both biological and physical constraints. The model learns the dynamic inverse computation of the effect of gravitational torques from its sensorimotor predictions, without calculating an explicit inverse computation. Using supervised learning, the model learns to control an anthropomorphic robot arm actuated by two antagonist McKibben artificial muscles. This was achieved by using internal parallel feedback loops containing neural networks which anticipate the sensorimotor consequences of the neural commands. The artificial neural network architecture was similar to the large-scale connectivity of the cerebellar cortex. Movements in the sagittal plane were performed during three sessions combining different initial positions, amplitudes and directions of movement to vary the effects of the gravitational torques applied to the robotic arm. The results show that this model acquired an internal representation of the gravitational effects during vertical arm pointing movements. This is consistent with the proposal that the cerebellar cortex contains an internal representation of gravitational torques which is encoded through a learning process. Furthermore, this model suggests that the cerebellum performs the inverse dynamics computation based on sensorimotor predictions. This highlights the importance of sensorimotor predictions of gravitational torques acting on upper limb movements performed in the gravitational field.
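    To make the idea concrete, the following sketch (not the authors' architecture) trains a small neural-network forward model to predict the angular acceleration of a toy single-link arm under gravity, then uses an internal loop on the prediction error to find the motor command, so that gravity is compensated without writing down an explicit inverse-dynamics equation. All parameters and the plant are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy single-link arm moving in the vertical plane (all parameters invented):
# angular acceleration = (torque - gravitational torque) / inertia.
g, m, l = 9.81, 1.0, 0.3           # gravity, mass, distance to the point mass
I = m * l ** 2                      # inertia of the toy arm

def plant(theta, torque):
    return (torque - m * g * l * np.sin(theta)) / I

# Supervised learning of a forward model from random "motor babbling":
# (joint angle, motor command) -> predicted angular acceleration.
rng = np.random.default_rng(2)
theta = rng.uniform(-np.pi / 2, np.pi / 2, 5000)
tau = rng.uniform(-5.0, 5.0, 5000)
forward = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
forward.fit(np.c_[theta, tau], plant(theta, tau))

def command_for(theta_now, acc_desired, steps=50, gain=0.05):
    """Internal loop: adjust the command until the *predicted* acceleration
    matches the desired one, i.e. an implicit inverse through the forward model."""
    tau_cmd = 0.0
    for _ in range(steps):
        acc_pred = forward.predict([[theta_now, tau_cmd]])[0]
        tau_cmd += gain * (acc_desired - acc_pred)
    return tau_cmd

# Command needed to hold the arm still at 45 degrees: it should approach the
# true gravitational torque even though no inverse model was ever written down.
tau_hold = command_for(np.pi / 4, 0.0)
print(f"learned command {tau_hold:.2f} vs gravity torque {m * g * l * np.sin(np.pi / 4):.2f}")
```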

    Brain Machine Interface (BMI) as a tool for understanding human-machine cooperation

    Ever since the appearance of Homo sapiens, machines have served humans as a "brain nature interface" (BNI) - a means of interacting with nature, including humans and other living beings. The ability to use and manufacture machines has long been taken as proof of human intelligence: 18th-century machines and automata and 20th-century robots have had a tremendous impact in this sense. An entirely new milestone was achieved with the emergence of artificial intelligence (AI) and its promise of revolutionary change: a machine with an artificial brain. Even though this last point has so far remained a dream, AI and its powerful applications have given machines certain learning capacities, provoking discussions in many scientific disciplines. The BMI (Brain Machine Interface), a machine interacting with a biological brain, is another turning point that raises new ethical, philosophical and medical questions and problems. How are the new device and its purpose represented in the brain? What is the limit of brain plasticity induced by a BMI? Which brain areas can and should be subserved by a BMI? Should BMI tools be considered therapeutic or enhancing devices, or both? Indeed, BMI-controlled prostheses are not like any other tool: when they affect bodily skills, they strongly interfere with human life and its salient points, such as the opposition/juxtaposition between the characteristics of living beings and technological devices. This chapter will present the state of the art of BMIs and their applications, and will then propose a multidisciplinary framework to discuss the related issues, questions and outcomes by gathering diverse philosophical and engineering points of view. Issues and paradigms to be discussed concern the training time needed to acquire the use of a BMI and its prosthesis, the modifications of the human body and its abilities, the impact of a new modular bodily scheme, the tension between autonomy due to new capacities and dependency on maintenance, the risks of non-egalitarian positions concerning accessibility and social competition, as well as the risks of considering humans as a technological product or as a device.

    Reach and grasp for an anthropomorphic robotic system based on sensorimotor learning

    In this article, we present a neurobiologically inspired multinetwork architecture based on knowledge of cortico-cortical connectivity and its application to an anthropomorphic head-arm-hand robotic system, providing reach-and-grasp kinematics based on multimodal sensorimotor learning. The system incorporates artificial neural network modules (matching units) trained by the locally weighted projection regression (LWPR) algorithm, which enables progressive learning from simple to more complex sensorimotor tasks. We report the actual performance of the system by comparing the simulation with the experimental results obtained by implementing it on the real-world artefact.
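    A heavily simplified stand-in for the learning scheme described above is sketched below: instead of the LWPR library used in the paper, it uses a plain batch locally weighted linear regression to learn a sensorimotor mapping (hand position to joint angles) from motor babbling on a toy two-link arm. Arm geometry, data and the kernel bandwidth are assumptions.

```python
import numpy as np

# Toy two-link planar arm (link lengths 0.3 m) used only to generate data;
# the real system maps multimodal sensory input to head-arm-hand kinematics.
def forward_kinematics(q):
    x = 0.3 * np.cos(q[:, 0]) + 0.3 * np.cos(q[:, 0] + q[:, 1])
    y = 0.3 * np.sin(q[:, 0]) + 0.3 * np.sin(q[:, 0] + q[:, 1])
    return np.c_[x, y]

# "Motor babbling": random joint configurations and the hand positions they produce.
rng = np.random.default_rng(3)
q_train = rng.uniform(0.0, np.pi / 2, size=(2000, 2))
x_train = forward_kinematics(q_train)

def predict_joints(x_query, bandwidth=0.03):
    """Locally weighted least squares around x_query: a crude, batch version of
    the incremental local models that LWPR maintains."""
    w = np.exp(-np.sum((x_train - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    X = np.c_[np.ones(len(x_train)), x_train]           # local affine model
    beta = np.linalg.pinv(X.T @ (w[:, None] * X)) @ X.T @ (w[:, None] * q_train)
    return np.array([1.0, *x_query]) @ beta

target = np.array([0.35, 0.30])                          # desired hand position
q_hat = predict_joints(target)
print("joints:", np.round(q_hat, 3), "reached:", np.round(forward_kinematics(q_hat[None, :])[0], 3))
```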

    Pointing errors for simulation 2.

    Average RMSE_D (D) and RMSE_S (S) for each mass condition in sessions I (SI), II (SII) and III (SIII). Training: training set. Iep (inter- and extrapolated positions): test set. M_i_T (0 ≤ i ≤ 5): masses used in the training set. M_i_Iep (0 ≤ i ≤ 5): masses used in the test set. Average Iep: RMSE values for the test set averaged across SI, SII and SIII. Aver M0–5_T and Aver M0–5_Iep: RMSE_D and RMSE_S values for the training (_T) and test (_Iep) sets, averaged across the different mass conditions.
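    For concreteness, a minimal sketch of the reported quantity: root-mean-square error computed per mass condition and then averaged, here on synthetic placeholder data rather than the robot sessions.

```python
import numpy as np

# Synthetic placeholder data: desired vs produced end-point trajectories for
# six mass conditions (M0..M5); the real values come from the robot sessions.
rng = np.random.default_rng(4)
rmse = {}
for mass in ["M0", "M1", "M2", "M3", "M4", "M5"]:
    desired = rng.standard_normal((50, 2))
    produced = desired + 0.01 * rng.standard_normal((50, 2))
    rmse[mass] = float(np.sqrt(np.mean((produced - desired) ** 2)))

print({k: round(v, 4) for k, v in rmse.items()})
print("average across mass conditions:", round(float(np.mean(list(rmse.values()))), 4))
```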