Mechanisms of motor learning: by humans, for robots
Whenever we perform a movement and interact with objects in our environment, our central nervous system (CNS) adapts and controls the redundant system of muscles actuating our limbs to produce suitable forces and impedance for the interaction. As modern robots are increasingly used to interact with objects, humans and other robots, they too must continuously adapt their interaction forces and impedance to the situation. This thesis investigated these motor mechanisms in humans through a series of technical developments and experiments, and utilized the results to implement biomimetic motor behaviours on a robot. Original tools were first developed, which enabled two novel motor imaging experiments using functional magnetic resonance imaging (fMRI). The first experiment investigated the neural correlates of force and impedance control to understand the control structure employed by the human brain. The second experiment developed a regressor-free technique to detect dynamic changes in brain activations during learning, and applied this technique to investigate changes in neural activity during adaptation to force fields and visuomotor rotations. In parallel, a psychophysical experiment investigated motor optimization in humans in a task characterized by multiple error-effort optima. Finally, a computational model derived from some of these results was implemented to exhibit human-like control and adaptation of force, impedance and movement trajectory in a robot.
Control of master-slave actuation systems for MRI/fMRI-compatible haptic interfaces
Master's thesis (Master of Engineering).
Zero-Shot Object Recognition Based on Haptic Attributes
Robots operating in household environments need to recognize a variety of objects. Several touch-based object recognition systems have been proposed in recent years [2]–[5]. They map haptic data to object classes using machine learning techniques, and then use the learned mapping to recognize previously encountered objects. The accuracy of these methods depends on the amount of training samples available for each object class. On the other hand, haptic data collection is often system (robot) specific and labour intensive. One way to cope with this problem is to use a knowledge-transfer-based system that exploits object relationships to share learned models between objects. However, while knowledge-based systems such as zero-shot learning [6] have regularly been proposed for visual object recognition, no similar system is available for haptic recognition. Here we developed [1] the first haptic zero-shot learning system that enables a robot to recognize, using haptic exploration alone, objects that it encounters for the first time. Our system uses the Direct Attribute Prediction (DAP) model [7] to train on a semantic representation of objects based on a list of haptic attributes, rather than on the objects themselves. The attributes (including physical properties such as shape, texture and material) constitute an intermediate layer relating objects, which is used for knowledge transfer. Using this layer, our system can predict the attribute-based representation of a new (previously untrained) object and use it to infer the object's identity.

A. System Overview

An overview of our system is given in Fig. 1. Given distinct training and test datasets Y and Z, described by an attribute basis a, we first associate a binary label a_m^o with each object o, where o ∈ Y ∪ Z and m = 1, ..., M. This results in a binary object-attribute matrix K. During training, haptic data collected from Y are used to train a binary classifier for each attribute a_m. To classify a test sample x as one of the Z objects, x is fed to each of the learned attribute classifiers, and the output attribute posteriors p(a_m | x) are used to predict the corresponding object, provided that the ground truth is available in K. This extended abstract is a summary of submission [1].

B. Experimental Setup

To collect haptic data, we use the Shadow anthropomorphic robotic hand equipped with a BioTac multimodal tactile sensor on each fingertip. We developed a force-based grasp controller that enables the hand to enclose an object. The joint encoder readings provide information on object shape, while the BioTac sensors provide information about object material, texture and compliance at each fingertip. To find an appropriate list of attributes describing our object set (illustrated in Fig. 2), we used online dictionaries to collect one or more textual definitions of each object. From these definitions we extracted 11 haptic adjectives, i.e. descriptions that could be "felt" using our robot hand. These adjectives served as our attributes: made of porcelain, made of plastic, made of glass, made of cardboard, made of stainless steel, cylindrical, round, rectangular, concave, has a handle, has a narrow part. We grouped the attributes into material attributes and shape attributes.

During the training phase, we use the Shadow hand joint readings x_sh to train an SVM classifier for each shape attribute, and the BioTac readings x_b to train an SVM classifier for each material attribute. SVM training returns a distance measure s_m(x) for each sample x that indicates how far x lies from the discriminant hyperplane. We transform this score into an attribute posterior p(a_m | x) using a sigmoid function.
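The submission does not spell out the sigmoid parameters or the object-inference rule beyond the description above; the following numpy sketch illustrates the standard DAP decision step under an attribute-independence assumption. The function names, the fixed sigmoid constants and the toy matrix K are ours for illustration, not taken from [1].

```python
import numpy as np

def attribute_posterior(score, A=-2.0, B=0.0):
    """Platt-style sigmoid mapping an SVM decision value to p(a_m = 1 | x).
    A and B would normally be fitted on held-out data; fixed here for illustration."""
    return 1.0 / (1.0 + np.exp(A * score + B))

def predict_object(scores, K):
    """DAP inference: pick the test object whose binary attribute signature
    in K best explains the per-attribute posteriors.

    scores : (M,) SVM decision values s_m(x) for one test sample x
    K      : (num_objects, M) binary object-attribute matrix
    """
    p = attribute_posterior(scores)  # p(a_m = 1 | x) for each attribute
    # Likelihood of each object's signature, assuming independent attributes:
    # product over m of p if K[o, m] == 1, else (1 - p)
    log_lik = (K * np.log(p + 1e-12) + (1 - K) * np.log(1 - p + 1e-12)).sum(axis=1)
    return np.argmax(log_lik)

# Toy usage: 3 test objects described by M = 4 attributes
K = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
svm_scores = np.array([1.3, -0.7, 0.9, -2.1])  # decision values from the M classifiers
print(predict_object(svm_scores, K))           # -> 0
```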
Artificial proprioceptive feedback for myoelectric control
The typical control of myoelectric interfaces, whether in laboratory settings or real-life prosthetic applications, relies largely on visual feedback, because proprioceptive signals from the controlling muscles are either not available or very noisy. We conducted a set of experiments to test whether artificial proprioceptive feedback, delivered non-invasively to another limb, can improve control of a two-dimensional myoelectrically controlled computer interface. In these experiments, participants were required to reach a target with a visual cursor that was controlled by electromyogram signals recorded from muscles of the left hand, while additional proprioceptive feedback was provided on their right arm by moving it with a robotic manipulandum. The additional artificial proprioceptive feedback improved the angular accuracy of their movements compared to visual feedback alone, but did not increase the overall accuracy quantified as the average distance between the cursor and the target. The advantages conferred by proprioception were present only when the proprioceptive feedback had a similar orientation to the visual feedback in the task space, and not when it was mirrored, demonstrating the importance of congruency between feedback modalities for multi-sensory integration. Our results reveal the ability of the human motor system to learn new inter-limb sensorimotor associations; the motor system can utilize task-related sensory feedback even when it is available on a limb distinct from the one being actuated. In addition, the proposed task structure provides a flexible paradigm with which the effectiveness of various sensory feedback and multi-sensory integration schemes for myoelectric prosthesis control can be evaluated.
Dissociable Learning Processes Underlie Human Pain Conditioning.
Pavlovian conditioning underlies many aspects of pain behavior, including fear and threat detection [1], escape and avoidance learning [2], and endogenous analgesia [3]. Although a central role for the amygdala is well established [4], both human and animal studies implicate other brain regions in learning, notably the ventral striatum and cerebellum [5]. It remains unclear whether these regions make different contributions to a single aversive learning process or represent independent learning mechanisms that interact to generate the expression of pain-related behavior. We designed a human parallel aversive conditioning paradigm in which different Pavlovian visual cues probabilistically predicted thermal pain primarily to either the left or right arm, and studied the acquisition of conditioned Pavlovian responses using combined physiological recordings and fMRI. Using computational modeling based on reinforcement learning theory, we found that conditioning involves two distinct types of learning process. First, a non-specific "preparatory" system learns aversive facial expressions and autonomic responses such as skin conductance. The associated learning signals (the learned associability and prediction error) were correlated with fMRI brain responses in amygdala-striatal regions, corresponding to the classic aversive (fear) learning circuit. Second, a specific lateralized system learns "consummatory" limb-withdrawal responses, detectable with electromyography of the arm to which pain is predicted. Its learned associability was correlated with responses in ipsilateral cerebellar cortex, suggesting a novel computational role for the cerebellum in pain. In conclusion, our results show that the overall phenotype of conditioned pain behavior depends on two dissociable reinforcement learning circuits.

Research was supported by the National Institute of Information and Communications Technology (Japan), the Japan Society for the Promotion of Science (JSPS) and the Wellcome Trust (UK). S.Z. was supported by the WD Armstrong Fund and the Cambridge Trust. G.G. was partially supported by Kakenhi Research Grant B #13380602 from the Japan Society for the Promotion of Science. We thank the imaging team at the Center for Information and Neural Networks for their help in performing the study. The authors declare that there are no conflicts of interest. This is the final version of the article, first available from Elsevier via http://dx.doi.org/10.1016/j.cub.2015.10.06
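The abstract identifies prediction error and learned associability as the model's learning signals but does not reproduce the update equations. A common choice in this literature, used here purely as an illustrative sketch, is a hybrid Rescorla-Wagner/Pearce-Hall learner in which the associability tracks recent unsigned prediction errors; all parameter values and names below are assumptions, not the paper's fitted model.

```python
import numpy as np

def pearce_hall(outcomes, kappa=0.3, eta=0.5, alpha0=1.0):
    """Hybrid Rescorla-Wagner / Pearce-Hall learner (illustrative only).
    Returns per-trial value, associability and prediction-error traces
    for one conditioned stimulus.

    outcomes : sequence of 0/1 (pain delivered or not on each trial)
    kappa    : fixed learning-rate scale
    eta      : how quickly associability tracks recent surprise
    alpha0   : initial associability
    """
    V, alpha = 0.0, alpha0
    values, assoc, pes = [], [], []
    for r in outcomes:
        delta = r - V                                  # prediction error
        V += kappa * alpha * delta                     # value update gated by associability
        alpha = (1 - eta) * alpha + eta * abs(delta)   # associability tracks |PE|
        values.append(V); assoc.append(alpha); pes.append(delta)
    return np.array(values), np.array(assoc), np.array(pes)

# Toy usage: a cue that predicts pain with 75% probability over 40 trials
rng = np.random.default_rng(0)
V, alpha, pe = pearce_hall(rng.random(40) < 0.75)
```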
A versatile biomimetic controller for contact tooling and haptic exploration
This article presents a versatile controller that enables various contact tooling tasks with minimal prior knowledge of the tooled surface. The controller is derived from results of neuroscience studies that investigated the neural mechanisms utilized by humans to control and learn complex interactions with the environment. We demonstrate the versatility of this controller in simulations of cutting, drilling and surface exploration tasks, which would normally require different control paradigms. We also present results on the exploration of an unknown surface with a 7-DOF manipulator, where the robot builds a 3D map of the surface profile and texture while applying a constant force during motion. Our controller provides a unified control framework encompassing the behaviors expected from specialized control paradigms such as position control, force control and impedance control.
Force, impedance and trajectory learning for contact tooling and haptic identification
Humans can skilfully use tools and interact with the environment by adapting their movement trajectory, contact force, and impedance. Motivated by this human versatility, we develop here a robot controller that concurrently adapts feedforward force, impedance, and reference trajectory when interacting with an unknown environment. In particular, the robot's reference trajectory is adapted to limit the interaction force and maintain it at a desired level, while adaptation of feedforward force and impedance compensates for the interaction with the environment. An analysis of the interaction dynamics using Lyapunov theory yields the conditions for convergence of the closed-loop interaction mediated by this controller. Simulations exhibit adaptive properties similar to human motor adaptation. The implementation of this controller for typical interaction tasks, including drilling, cutting, and haptic exploration, shows that it can outperform conventional controllers in contact tooling.
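The paper's controller is derived via a Lyapunov analysis that the abstract only summarizes; the single-DOF sketch below is a minimal illustration of the three concurrent adaptations it describes (feedforward force, stiffness, and a reference trajectory regulated toward a desired contact force). Every gain, the forgetting term, and the toy wall environment are our assumptions, not the published control law.

```python
# Single-DOF sketch of concurrent force / impedance / trajectory adaptation.
# All gains and update laws are illustrative assumptions, not the paper's
# Lyapunov-derived controller.

def simulate(steps=2000, dt=0.001, f_desired=5.0):
    x, v = 0.0, 0.0                 # robot position and velocity (unit mass)
    x_ref = 0.05                    # reference trajectory (constant set-point here)
    F_ff, K, D = 0.0, 100.0, 20.0   # feedforward force, stiffness, damping
    a_f, a_k, gamma, beta = 50.0, 2000.0, 0.1, 1e-4  # adaptation gains (assumed)
    wall, k_env = 0.02, 5000.0      # unknown environment: a stiff wall at x = 0.02
    for _ in range(steps):
        e = x_ref - x                          # tracking error
        f_env = k_env * max(0.0, x - wall)     # contact force from the environment
        u = F_ff + K * e - D * v               # impedance controller + feedforward
        # error-driven adaptation with forgetting (illustrative):
        F_ff += dt * (a_f * e - gamma * F_ff)
        K += dt * (a_k * e * e - gamma * K)
        # reference adaptation regulates contact force toward f_desired:
        x_ref += dt * beta * (f_desired - f_env)
        v += dt * (u - f_env)                  # unit-mass dynamics
        x += dt * v
    return x, f_env, F_ff, K

print(simulate())
```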