    Generating pointing motions for a humanoid robot by combining motor primitives

    The human motor system is robust, adaptive and very flexible. The underlying principles of human motion provide inspiration for robotics. Pointing at different targets is a common robotics task where insights about human motion can be applied. Traditionally in robotics, a generated motion has to be validated to ensure that the robot configurations involved are appropriate. The human brain, in contrast, uses the motor cortex to generate new motions by reusing and combining existing knowledge before executing the motion. We propose a method to generate and control pointing motions for a robot using a biologically inspired architecture implemented with spiking neural networks. We outline a simplified model of the human motor cortex that generates motions using motor primitives. The network learns a base motor primitive for pointing at a target in the center, and four correction primitives to point at targets up, down, left and right of the base primitive, respectively. The primitives are combined to reach different targets. We evaluate the performance of the network with a humanoid robot pointing at different targets marked on a plane. The network was able to combine one, two or three motor primitives at a time to control the robot in real time to reach a specific target. We are working on extending this approach from pointing at a given target to performing grasping or tool-manipulation tasks, which has many applications in engineering and industry involving real robots.
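    As a rough sketch of how a base primitive and directional correction primitives could be blended to reach off-center targets, the Python snippet below combines placeholder joint-space trajectories with weights derived from the target offset. It is an illustration only, not the authors' spiking-network implementation; the trajectory shapes, joint count and linear weighting rule are assumptions.

```python
import numpy as np

# Placeholder joint-space trajectories (time steps x joints) standing in for
# learned motor primitives; in the paper these are produced by a spiking network.
T, J = 100, 4                      # time steps and number of arm joints (assumed)
base = np.zeros((T, J))            # base primitive: point at the centre target
corrections = {
    "up":    np.random.randn(T, J) * 0.01,
    "down":  np.random.randn(T, J) * 0.01,
    "left":  np.random.randn(T, J) * 0.01,
    "right": np.random.randn(T, J) * 0.01,
}

def combine_primitives(target_xy):
    """Blend the base primitive with up to two correction primitives,
    weighted by how far the target lies from the centre (illustrative only)."""
    x, y = target_xy               # target offset from the centre, in [-1, 1]
    traj = base.copy()
    traj += corrections["right" if x >= 0 else "left"] * abs(x)
    traj += corrections["up" if y >= 0 else "down"] * abs(y)
    return traj                    # joint trajectory handed to the robot controller

trajectory = combine_primitives((0.4, -0.7))   # target right of and below centre
print(trajectory.shape)            # (100, 4)
```

    With the target on an axis one correction weight is zero, so one, two or three primitives end up active at a time, matching the combinations reported in the abstract.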

    A spiking network classifies human sEMG signals and triggers finger reflexes on a robotic hand

    The interaction between robots and humans is of great relevance for the field of neurorobotics, as it can provide insights into how humans perform motor control and sensory processing, and into how these insights can be applied to robotics. We propose a spiking neural network (SNN) to trigger finger motion reflexes on a robotic hand based on human surface electromyography (sEMG) data. The first part of the network takes sEMG signals to measure muscle activity and then classifies the data to detect which finger is being flexed in the human hand. The second part triggers single-finger reflexes on the robot using the classification output. The finger reflexes are modeled with motion primitives activated by an oscillator and mapped to the robot kinematics. We evaluated the SNN by having users wear a non-invasive sEMG sensor, record a training dataset, and then flex different fingers, one at a time. The muscle activity was recorded using a Myo sensor with eight channels. The sEMG signals were successfully encoded into spikes as input for the SNN. The classification could detect the active finger and trigger the motion generation of finger reflexes. The SNN was able to control a real Schunk SVH 5-finger robotic hand online. Being able to map myoelectric activity to motor-control functions for a task can provide an interesting interface for robotic applications and a platform to study brain function. SNNs provide a challenging but interesting framework for interacting with human data. In future work the approach will be extended to also control a robot arm at the same time.
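    One possible reading of the pipeline (sEMG window, spike encoding, finger classification, reflex trigger) is sketched below in plain Python. This is not the paper's spiking network: the Poisson rate coding, window length and channel-to-finger mapping are assumptions made for illustration.

```python
import numpy as np

N_CHANNELS, WINDOW = 8, 200          # Myo armband channels, samples per window
rng = np.random.default_rng(0)

def encode_to_spikes(emg_window, max_rate=100.0, dt=0.001):
    """Poisson rate coding: stronger muscle activity -> higher spike rate."""
    rates = np.abs(emg_window).mean(axis=0) * max_rate       # one rate per channel
    steps = emg_window.shape[0]
    return rng.random((steps, N_CHANNELS)) < rates * dt      # boolean spike raster

def classify_finger(spikes, channel_to_finger=(0, 0, 1, 1, 2, 2, 3, 4)):
    """Toy classifier: sum spikes per channel group and pick the busiest finger."""
    counts = spikes.sum(axis=0)
    finger_counts = np.zeros(5)
    for ch, finger in enumerate(channel_to_finger):
        finger_counts[finger] += counts[ch]
    return int(np.argmax(finger_counts))

emg = rng.normal(0, 0.3, size=(WINDOW, N_CHANNELS))          # stand-in sEMG window
emg[:, 2:4] += 0.8                                           # pretend finger 1 is flexed
finger = classify_finger(encode_to_spikes(emg))
print(f"trigger reflex primitive for finger {finger}")       # would drive the robot hand
```

    In the paper the classification output activates an oscillator-driven motion primitive mapped to the hand kinematics; here the final print stands in for that trigger.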

    Reinforcement learning control of a biomechanical model of the upper extremity

    Among the infinite number of possible movements that can be produced, humans are commonly assumed to choose those that optimize criteria such as minimizing movement time, subject to certain movement constraints like signal-dependent and constant motor noise. While so far these assumptions have only been evaluated for simplified point-mass or planar models, we address the question of whether they can predict reaching movements in a full skeletal model of the human upper extremity. We learn a control policy using a motor-babbling approach implemented with reinforcement learning, using aimed movements of the tip of the right index finger towards randomly placed 3D targets of varying size. We use a state-of-the-art biomechanical model, which includes seven actuated degrees of freedom. To deal with the curse of dimensionality, we use a simplified second-order muscle model acting at each degree of freedom instead of individual muscles. The results confirm that the assumptions of signal-dependent and constant motor noise, together with the objective of movement-time minimization, are sufficient for a state-of-the-art skeletal model of the human upper extremity to reproduce complex phenomena of human movement, in particular Fitts' Law and the 2/3 Power Law. This result supports the notion that control of the complex human biomechanical system can plausibly be determined by a set of simple assumptions and can easily be learned.
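    The two modelling ingredients highlighted in the abstract, signal-dependent plus constant motor noise and a movement-time-minimizing objective, can be written down compactly. The sketch below uses placeholder noise magnitudes, reward weights and Fitts' Law coefficients rather than values fitted in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_control(u, sigma_sdn=0.1, sigma_const=0.01):
    """Signal-dependent noise (scales with |u|) plus constant motor noise.
    The noise magnitudes are placeholders, not values from the paper."""
    return (u
            + rng.normal(0.0, sigma_sdn * np.abs(u))
            + rng.normal(0.0, sigma_const, size=np.shape(u)))

def reward(reached_target, movement_time, time_cost=1.0):
    """Movement-time minimization: success bonus minus a cost proportional to duration."""
    return (1.0 if reached_target else 0.0) - time_cost * movement_time

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law, which the learned policy reproduces: MT = a + b * log2(2D / W)."""
    return a + b * np.log2(2.0 * distance / width)

print(noisy_control(np.array([0.5, -0.2, 0.0])))        # perturbed control signal
print(reward(reached_target=True, movement_time=0.45))  # shorter movements score higher
print(fitts_movement_time(distance=0.3, width=0.02))    # harder target -> longer MT
```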