    Soft-Grasping With an Anthropomorphic Robotic Hand Using Spiking Neurons

    Evolution gave humans advanced grasping capabilities by combining an adaptive hand with efficient control. Grasping motions can be adapted quickly if the object moves or deforms. Soft-grasping with an anthropomorphic hand is a valuable capability for robots interacting with objects shaped for humans; nevertheless, most robotic applications rely on vacuum, two-finger, or custom-made grippers. We present a biologically inspired spiking neural network (SNN) for soft-grasping that controls a robotic hand. Two control loops are combined: one driven by motor primitives and one by a compliant controller activated by a reflex. Finger primitives represent synergies between joints, while hand primitives represent different affordances. Contact is detected with a mechanism based on interneuron circuits in the spinal cord, which triggers the reflexes. A Schunk SVH 5-finger hand was used to grasp objects of different shapes, stiffnesses, and sizes. The SNN adapted the grasping motions without knowing the exact properties of the objects, and the compliant controller with online learning proved sensitive enough to grasp even balloons. In contrast to deep learning approaches, our SNN requires only one example of each grasping motion to train the primitives, and neither inverse kinematics nor complex contact-point planning is needed. This approach simplifies control and can be applied to different robots, providing adaptive features similar to those of a human hand. A physical imitation of a biological system, implemented entirely with SNNs on a robotic hand, can provide new insights into grasping mechanisms.
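
    The two-loop architecture the abstract describes can be made concrete with a short sketch. The following is a minimal Python sketch, not the paper's SNN implementation: a feed-forward primitive closes a single finger until a contact reflex fires, after which a compliant loop regulates grip effort online. The effort model, thresholds, and all names here are illustrative assumptions.

    # Minimal sketch of the two control loops: a motor primitive drives the
    # finger until a contact "reflex" fires, then a compliant controller
    # holds a target effort. All values are illustrative assumptions.
    CONTACT_THRESHOLD = 0.4   # assumed normalized effort at which the reflex fires
    TARGET_EFFORT = 0.3       # assumed effort the compliant loop tries to hold
    GAIN = 0.05               # assumed compliant-controller gain

    def primitive(t, close_time=1.0):
        """Single-joint grasping primitive: ramp from open (0) to closed (1)."""
        return min(t / close_time, 1.0)

    def simulated_effort(position, object_surface=0.6, stiffness=2.0):
        """Toy contact model: effort rises once the finger presses past the object."""
        return max(0.0, position - object_surface) * stiffness

    def grasp(dt=0.01, steps=200):
        position, contact = 0.0, False
        for step in range(steps):
            effort = simulated_effort(position)
            if not contact and effort > CONTACT_THRESHOLD:
                contact = True  # reflex fires on first contact, switching loops
            if contact:
                # compliant loop: nudge the joint to hold TARGET_EFFORT
                position += GAIN * (TARGET_EFFORT - effort)
            else:
                # primitive loop: follow the feed-forward trajectory
                position = primitive(step * dt)
        return position, effort

    if __name__ == "__main__":
        pos, eff = grasp()
        print(f"final position {pos:.3f}, held effort {eff:.3f}")

    In this toy model, a softer object (lower stiffness) simply yields a larger closing position for the same held effort, which mirrors how a reflex-triggered compliant loop lets one primitive grasp objects of unknown stiffness.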

    Embodied Neuromorphic Vision with Continuous Random Backpropagation

    Spike-based communication between biological neurons is sparse and unreliable, which enables the brain to process visual information from the eyes efficiently. Taking inspiration from biology, artificial spiking neural networks coupled with silicon retinas attempt to model these computations. Recent findings in machine learning have allowed the derivation of a family of powerful synaptic plasticity rules that approximate backpropagation for spiking networks. Are these rules capable of processing real-world visual sensory data? In this paper, we evaluate the performance of Event-Driven Random Back-Propagation (eRBP) at learning representations from event streams provided by a Dynamic Vision Sensor (DVS). First, we show that eRBP matches state-of-the-art performance on the DvsGesture dataset with the addition of a simple covert attention mechanism. By remapping visual receptive fields relative to the center of motion, this attention mechanism provides translation invariance at low computational cost compared to convolutions. Second, we successfully integrate eRBP into a real robotic setup in which a robotic arm grasps objects according to detected visual affordances. In this setup, visual information is actively sensed by a DVS mounted on a robotic head performing microsaccadic eye movements. We show that our method classifies affordances within 100 ms of microsaccade onset, comparable to human performance reported in behavioral studies. Our results suggest that advances in neuromorphic technology and plasticity rules enable the development of autonomous robots operating at high speed and with low energy consumption.
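
    The covert attention mechanism is easy to illustrate. Below is a minimal Python/NumPy sketch under stated assumptions: it treats the centroid of recent events as a proxy for the center of motion and recenters the event cloud on a fixed attention window, which is one simple way to obtain the translation invariance the abstract describes. The event layout, window size, and function names are assumptions, not the paper's code.

    import numpy as np

    def recenter_events(events, out_size=32):
        """Remap (N, 2) event coordinates so the event cloud's centroid
        (a cheap proxy for the center of motion) sits at the middle of an
        out_size x out_size attention window; events outside are dropped."""
        center = events.mean(axis=0)               # centroid of recent activity
        shifted = events - center + out_size // 2  # translate to window center
        keep = np.all((shifted >= 0) & (shifted < out_size), axis=1)
        return shifted[keep].astype(int)

    # Usage: the same event blob is remapped to the same window coordinates
    # wherever it appears on the sensor, without the cost of convolutions.
    rng = np.random.default_rng(0)
    blob = rng.normal(scale=3.0, size=(200, 2))
    for offset in ((20.0, 20.0), (90.0, 60.0)):
        remapped = recenter_events(blob + np.array(offset))
        print(remapped.mean(axis=0))  # approximately (16, 16) in both cases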