
    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.

    Spatio-temporal learning with the online finite and infinite echo-state Gaussian processes

    Successful biological systems adapt to change. In this paper, we are principally concerned with adaptive systems that operate in environments where data arrives sequentially and is multivariate in nature, for example, sensory streams in robotic systems. We contribute two reservoir-inspired methods: 1) the online echo-state Gaussian process (OESGP) and 2) its infinite variant, the online infinite echo-state Gaussian process (OIESGP). Both algorithms are iterative fixed-budget methods that learn from noisy time series. In particular, the OESGP combines the echo-state network with Bayesian online learning for Gaussian processes. Extending this to infinite reservoirs yields the OIESGP, which uses a novel recursive kernel with automatic relevance determination that enables spatial and temporal feature weighting. When fused with stochastic natural gradient descent, the kernel hyperparameters are iteratively adapted to better model the target system. Furthermore, insights into the underlying system can be gleaned from inspection of the resulting hyperparameters. Experiments on noisy benchmark problems (one-step prediction and system identification) demonstrate that our methods yield high accuracies relative to state-of-the-art methods and standard kernels with sliding windows, particularly on problems with irrelevant dimensions. In addition, we describe two case studies in robotic learning-by-demonstration involving the Nao humanoid robot and the Assistive Robot Transport for Youngsters (ARTY) smart wheelchair.
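    The echo-state reservoir update that the OESGP builds on can be sketched as follows. This is a minimal illustration of the standard leaky-integrator reservoir dynamics, not the authors' implementation; all names, sizes, and scales are illustrative.

```python
import math
import random

random.seed(0)

def make_matrix(rows, cols, scale):
    """Random weight matrix with entries in [-scale, scale]."""
    return [[random.uniform(-scale, scale) for _ in range(cols)]
            for _ in range(rows)]

def esn_step(x, u, W, W_in, leak=0.3):
    """One leaky-integrator update: x' = (1 - a) x + a tanh(W x + W_in u)."""
    n = len(x)
    new_x = []
    for i in range(n):
        pre = sum(W[i][j] * x[j] for j in range(n))
        pre += sum(W_in[i][k] * u[k] for k in range(len(u)))
        new_x.append((1 - leak) * x[i] + leak * math.tanh(pre))
    return new_x

# Tiny reservoir driven by a scalar input stream; the reservoir state x
# is what a readout (here, a Gaussian process) would be trained on.
N, D = 20, 1
W = make_matrix(N, N, 0.1)     # small recurrent scale, to preserve the echo-state property
W_in = make_matrix(N, D, 0.5)
x = [0.0] * N
for t in range(50):
    x = esn_step(x, [math.sin(0.1 * t)], W, W_in)
```

    In the OESGP the readout from such a state is a Gaussian process updated online; the OIESGP replaces the finite reservoir with a recursive kernel playing the same role.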

    Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning

    There are physical Human–Robot Interaction (pHRI) applications where the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial to perform a safe manipulation of the human. Computer vision methods provide pre-grasp information, with strong constraints imposed by field environments. Force-based compliant control, after grasping, limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper, which can be used to better characterize the features of the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper with two underactuated fingers and a fixed finger with a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application for the use of the dataset is included. In particular, a fusion approach is used to estimate the actual grasped forearm section using both kinesthetic and tactile information with a regression deep-learning neural network. First, tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) neural networks, considering that the data are sequential. Then, the outputs are fed to a fusion neural network to enhance the estimation. The experiments conducted show good results in training both sources separately, with superior performance when the fusion approach is considered.
    This research was funded by the University of Málaga; the Ministerio de Ciencia, Innovación y Universidades, Gobierno de España, grant number RTI2018-093421-B-I00; and the European Commission, grant number BES-2016-078237. Partial funding for open access charge: Universidad de Málaga.
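    The fusion stage described above can be illustrated with a toy late-fusion regressor: each modality branch produces its own estimate of the grasped section, and a small learned layer combines them. This sketch replaces the paper's LSTM branches with synthetic per-branch predictions and a linear fusion layer trained by gradient descent; all numbers and noise levels are invented for illustration.

```python
import random

random.seed(1)

def fuse(pred_tactile, pred_kinesthetic, w):
    """Linear fusion of two per-modality regression outputs."""
    w_t, w_k, b = w
    return w_t * pred_tactile + w_k * pred_kinesthetic + b

# Synthetic data: true section position in [0, 1]; each branch sees
# it through different noise (tactile assumed less noisy here).
data = []
for _ in range(200):
    y = random.random()
    pt = y + random.gauss(0, 0.05)   # tactile branch estimate
    pk = y + random.gauss(0, 0.20)   # kinesthetic branch estimate
    data.append((pt, pk, y))

# Train the fusion weights with stochastic gradient descent on squared error.
w = [0.5, 0.5, 0.0]
lr = 0.05
for epoch in range(100):
    for pt, pk, y in data:
        err = fuse(pt, pk, w) - y
        w[0] -= lr * err * pt
        w[1] -= lr * err * pk
        w[2] -= lr * err
```

    After training, the fusion layer assigns more weight to the less noisy branch, which is the intuition behind the reported gain of fusing modalities over using either alone.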

    Fast and robust learning by reinforcement signals: explorations in the insect brain

    We propose a model for pattern recognition in the insect brain. Departing from a well-known body of knowledge about the insect brain, we investigate which of the potentially present features may be useful to learn input patterns rapidly and in a stable manner. The plasticity underlying pattern recognition is situated in the insect mushroom bodies and requires an error signal to associate the stimulus with a proper response. As a proof of concept, we used our model insect brain to classify the well-known MNIST database of handwritten digits, a popular benchmark for classifiers. We show that the structural organization of the insect brain appears to be suitable for both fast learning of new stimuli and reasonable performance in stationary conditions. Furthermore, it is extremely robust to damage to the brain structures involved in sensory processing. Finally, we suggest that spatiotemporal dynamics can improve the level of confidence in a classification decision. The proposed approach allows testing the effect of hypothesized mechanisms rather than speculating on their benefit for system performance or confidence in its responses.
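    The circuit motif described — a fixed expansion into sparse mushroom-body Kenyon cells, with output plasticity gated by an error signal — can be sketched in a few lines. This is a generic illustration of error-gated learning on a sparse expansion code, not the authors' model; the layer sizes, sparsity level, and two-pattern task are invented for the example.

```python
import random

random.seed(2)

N_IN, N_KC = 16, 200

# Fixed random projection from sensory input to "Kenyon cells".
proj = [[random.choice([0, 1]) for _ in range(N_IN)] for _ in range(N_KC)]

def kenyon(x, sparsity=0.1):
    """Sparse code: keep only the most strongly driven cells."""
    act = [sum(p * v for p, v in zip(row, x)) for row in proj]
    thresh = sorted(act, reverse=True)[int(N_KC * sparsity)]
    return [1 if a > thresh else 0 for a in act]

w = [[0.0] * N_KC for _ in range(2)]   # output weights, two classes

def predict(kc):
    scores = [sum(wi * k for wi, k in zip(w[c], kc)) for c in range(2)]
    return 0 if scores[0] >= scores[1] else 1

def learn(x, label, lr=0.1):
    kc = kenyon(x)
    pred = predict(kc)
    if pred != label:                  # reinforcement signal gates plasticity
        for j, k in enumerate(kc):
            w[label][j] += lr * k
            w[pred][j] -= lr * k

# Two prototype patterns, one per class.
protos = [[1.0 if i < 8 else 0.0 for i in range(16)],
          [0.0 if i < 8 else 1.0 for i in range(16)]]
for _ in range(20):
    for label, p in enumerate(protos):
        learn(p, label)
```

    Because only the sparse output weights are plastic and updates occur only on error, learning is fast and leaves the fixed sensory expansion untouched, which is one reason such architectures tolerate damage to upstream processing.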

    Information theoretic approach to tactile encoding and discrimination

    The human sense of touch integrates feedback from a multitude of touch receptors, but how this information is represented in the neural responses such that it can be extracted quickly and reliably is still largely an open question. At the same time, dexterous robots equipped with touch sensors are becoming more common, necessitating better methods for representing sequentially updated information and new control strategies that aid in extracting relevant features for object manipulation from the data. This thesis uses information theoretic methods for two main aims: First, the neural code for tactile processing in humans is analyzed with respect to how much information is transmitted about tactile features. Second, machine learning approaches are used in order to influence both what data is gathered by a robot and how it is represented by maximizing information theoretic quantities. The first part of this thesis contains an information theoretic analysis of data recorded from primary tactile neurons in the human peripheral somatosensory system. We examine the differences in information content of two coding schemes, namely spike timing and spike counts, along with their spatial and temporal characteristics. It is found that estimates of the neurons’ information content based on the precise timing of spikes are considerably larger than for spike counts. Moreover, the information estimated based on the timing of the very first elicited spike is at least as high as that provided by spike counts, but in many cases considerably higher. This suggests that first spike latencies can serve as a powerful mechanism to transmit information quickly. However, in natural object manipulation tasks, different tactile impressions follow each other quickly, so we asked whether the hysteretic properties of the human fingertip affect neural responses and information transmission.
We find that past stimuli affect both the precise timing of spikes and spike counts of peripheral tactile neurons, resulting in increased neural noise and decreased information about ongoing stimuli. Interestingly, the first spike latencies of a subset of afferents convey information primarily about past stimulation, hinting at a mechanism to resolve ambiguity resulting from mechanical skin properties. The second part of this thesis focuses on using machine learning approaches in a robotics context in order to influence both what data is gathered and how it is represented by maximizing information theoretic quantities. During robotic object manipulation, often not all relevant object features are known, but have to be acquired from sensor data. Touch is an inherently active process and the question arises of how to best control the robot’s movements so as to maximize incoming information about the features of interest. To this end, we develop a framework that uses active learning to help with the sequential gathering of data samples by finding highly informative actions. The viability of this approach is demonstrated on a robotic hand-arm setup, where the task involves shaking bottles of different liquids in order to determine the liquid’s viscosity from tactile feedback only. The shaking frequency and the rotation angle of shaking are optimized online. Additionally, we consider the problem of how to better represent complex probability distributions that are sequentially updated, as approaches for minimizing uncertainty depend on an accurate representation of that uncertainty. A mixture of Gaussians representation is proposed and optimized using a deterministic sampling approach. We show how our method improves on similar approaches and demonstrate its usefulness in active learning scenarios. 
    The results presented in this thesis highlight how information theory can provide a principled approach for both investigating how much information is contained in sensory data and suggesting ways for optimization, either by using better representations or actively influencing the environment.
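    The central quantity compared across coding schemes here is the mutual information between stimulus and response. A plug-in estimate from discretized (stimulus, response) samples can be sketched as below; the toy "timing" and "count" responses are invented to show the contrast the thesis measures on recorded data, not the data itself.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(S;R) in bits from (stimulus, response) samples."""
    n = len(pairs)
    joint = Counter(pairs)                  # counts of (s, r)
    s_cnt = Counter(s for s, _ in pairs)    # marginal counts of s
    r_cnt = Counter(r for _, r in pairs)    # marginal counts of r
    mi = 0.0
    for (s, r), c in joint.items():
        # p(s,r) log2[ p(s,r) / (p(s) p(r)) ], with probabilities as count ratios
        mi += (c / n) * math.log2(c * n / (s_cnt[s] * r_cnt[r]))
    return mi

# A "timing" code that perfectly distinguishes two stimuli carries 1 bit;
# a "count" code that responds identically to both carries 0 bits.
timing = [(0, 'early'), (1, 'late')] * 50
count = [(0, 3), (1, 3)] * 50
```

    In practice such plug-in estimates are biased upward for small samples, which is why analyses like the one described rely on careful discretization and bias correction; the sketch above shows only the basic quantity.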

    A Neuromorphic Approach to Tactile Perception

    Ph.D. (Doctor of Philosophy)