
    SpaRCe : sparse reservoir computing

    "Sparse" neural networks, in which relatively few neurons or connections are active, are common in both machine learning and neuroscience. Whereas in machine learning "sparseness" typically arises from a penalty term that drives some connection weights to become small or zero, in biological brains sparseness is often created when high spiking thresholds prevent neuronal activity. Inspired by neuroscience, here we introduce sparseness into a reservoir computing network via neuron-specific learnable thresholds of activity, allowing neurons with low thresholds to give output but silencing outputs from neurons with high thresholds. This approach, which we term "SpaRCe", optimises the sparseness level of the reservoir and applies the threshold mechanism to the information received by the read-out weights. Both the read-out weights and the thresholds are learned by a standard online gradient rule that minimises an error function on the outputs of the network. Threshold learning occurs through the balance of two opposing forces: reducing inter-neuronal correlations in the reservoir by deactivating redundant neurons, while increasing the activity of neurons participating in correct decisions. We test SpaRCe on a set of classification problems and find that introducing threshold learning improves performance compared with standard reservoir computing networks.
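    The per-neuron threshold mechanism described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the dimensions, learning rate, and the particular thresholding form sign(x) * ReLU(|x| - theta) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N reservoir neurons, K output classes.
N, K = 100, 3

# Stand-in reservoir state (in a real ESN this comes from the
# recurrent dynamics driven by the input).
x = rng.normal(size=N)

# Learnable per-neuron thresholds; neurons whose activity magnitude
# falls below their threshold are silenced (contribute 0 to the read-out).
theta = np.full(N, 0.5)

def sparse_readout_state(x, theta):
    """One plausible thresholding: sign(x) * relu(|x| - theta)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

x_sparse = sparse_readout_state(x, theta)

# Read-out weights and thresholds both trained by an online gradient
# rule on an error function; one illustrative step on squared error.
W = rng.normal(size=(K, N)) * 0.01
target = np.array([1.0, 0.0, 0.0])
y = W @ x_sparse
err = y - target
lr = 0.1
W -= lr * np.outer(err, x_sparse)              # read-out update
# d y_k / d theta_i = -W[k, i] * sign(x_i) where the neuron is active.
active = (np.abs(x) > theta).astype(float)
theta -= lr * (err @ W) * (-np.sign(x)) * active
```

    Raising a threshold silences a redundant neuron entirely, while the error gradient lowers thresholds of neurons that contribute to correct outputs, mirroring the two opposing forces described in the abstract.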

    Signal neutrality, scalar property, and collapsing boundaries as consequences of a learned multi-timescale strategy

    We postulate that three fundamental elements underlie a decision-making process: perception of time passing, information processing on multiple timescales, and reward maximisation. We build a simple reinforcement learning agent upon these principles and train it on a random-dot-like task. Our results demonstrate three emergent signatures, similar to the experimental data: (1) signal neutrality: insensitivity to the signal coherence in the interval preceding the decision; (2) scalar property: the mean of the response times varies widely for different signal coherences, yet the shape of the distributions stays almost unchanged; (3) collapsing boundaries: the "effective" decision-making boundary changes over time in a manner reminiscent of the theoretical optimum. Removing the perception of time or the multiple timescales from the model does not preserve these distinguishing signatures. Our results suggest an alternative explanation for signal neutrality: we propose that it is not part of motor planning but of the decision-making process itself, emerging from information processing on multiple timescales.
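    The multiple-timescale ingredient can be illustrated with a bank of leaky integrators that filter the same input stream at different rates. This is a toy sketch, not the agent from the paper; the timescale values are arbitrary assumptions.

```python
import numpy as np

# Toy bank of leaky integrators: one filter per timescale
# (the tau values are arbitrary assumptions for illustration).
taus = np.array([2.0, 10.0, 50.0])
alphas = 1.0 / taus                     # per-filter leak rates

def multiscale_trace(signal, alphas):
    """Run each leaky integrator over `signal`; return final states."""
    state = np.zeros_like(alphas)
    for s in signal:
        state += alphas * (s - state)   # x <- x + (1/tau) * (s - x)
    return state

# For a constant input, fast integrators converge sooner than slow
# ones, so the bank jointly encodes the signal and elapsed time.
state = multiscale_trace(np.ones(100), alphas)
```

    The spread between fast and slow traces is what gives such an agent an implicit sense of time passing alongside its estimate of the signal.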

    A two-step immunomagnetic microbead-based method for the isolation of human primary skin telocytes/CD34+ stromal cells

    Telocytes (TCs), commonly referred to as TCs/CD34+ stromal cells, are a peculiar type of interstitial cells with distinctive morphologic traits that are supposed to exert several biological functions, including tissue homeostasis regulation, cell-to-cell signaling, immune surveillance, and reparative/regenerative effects. At present, the majority of studies investigating these cells are mainly descriptive and focus only on their morphology, with a consequent paucity of functional data. To gain relevant insight into the possible functions of TCs, in vitro analyses are clearly required, but currently the protocols for TC isolation are only at the early stages and not fully standardized. In the present in vitro study, we describe a novel methodology for the purification of human primary skin TCs through a two-step immunomagnetic microbead-based cell separation (i.e., negative selection for CD31 followed by positive selection for CD34) capable of discriminating these cells from other connective tissue-resident cells on the basis of their different immunophenotypic features. Our experiments clearly demonstrated that the proposed method allows a selective purification of cells exhibiting the peculiar TC morphology. Isolated TCs displayed very long cytoplasmic extensions with a moniliform silhouette (telopodes) and presented an immunophenotypic profile (CD31−/CD34+/PDGFRα+/vimentin+) that unequivocally differentiates them from endothelial cells (CD31+/CD34+/PDGFRα−/vimentin+) and fibroblasts (CD31−/CD34−/PDGFRα+/vimentin+). This novel methodology for the isolation of TCs lays the groundwork for further research aimed at elucidating their functional properties and possible translational applications, especially in the field of regenerative medicine.

    Exploiting multiple timescales in hierarchical echo state networks

    Echo state networks (ESNs) are a powerful form of reservoir computing that only require training of linear output weights, while the internal reservoir is formed of fixed randomly connected neurons. With a correctly scaled connectivity matrix, the neurons’ activity exhibits the echo-state property and responds to the input dynamics with certain timescales. Tuning the timescales of the network can be necessary for treating certain tasks, and some environments require multiple timescales for an efficient representation. Here we explore the timescales in hierarchical ESNs, where the reservoir is partitioned into two smaller linked reservoirs with distinct properties. Over three different tasks (NARMA10, a reconstruction task in a volatile environment, and psMNIST), we show that by selecting the hyper-parameters of each partition such that they focus on different timescales, we achieve a significant performance improvement over a single ESN. Through a linear analysis, and under the assumption that the timescales of the first partition are much shorter than the second’s (typically corresponding to optimal operating conditions), we interpret the feedforward coupling of the partitions in terms of an effective representation of the input signal, provided by the first partition to the second, whereby the instantaneous input signal is expanded into a weighted combination of its time derivatives. Furthermore, we propose a data-driven approach to optimise the hyper-parameters through a gradient descent optimisation method that is an online approximation of backpropagation through time. We demonstrate the application of the online learning rule across all the tasks considered.
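    A two-partition hierarchy of this kind can be sketched as two leaky reservoirs in series with different leak rates. This is a hedged toy example: the sizes, leak rates, spectral radius, and coupling scale are assumptions, and only the linear output weights over the collected states would be trained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two linked reservoirs: a fast first partition feeding a slow second
# one (sizes and leak rates are illustrative assumptions).
N1, N2 = 50, 50
leak1, leak2 = 0.9, 0.1          # fast vs slow integration

def scaled(W, rho=0.9):
    """Rescale to spectral radius rho, for the echo-state property."""
    return W * (rho / max(abs(np.linalg.eigvals(W))))

W1 = scaled(rng.normal(size=(N1, N1)))
W2 = scaled(rng.normal(size=(N2, N2)))
Win = rng.uniform(-1, 1, size=N1)        # input -> first partition
W12 = rng.normal(size=(N2, N1)) * 0.1    # feedforward coupling

def run(signal):
    """Drive the hierarchy and collect the concatenated states."""
    x1, x2 = np.zeros(N1), np.zeros(N2)
    states = []
    for u in signal:
        x1 = (1 - leak1) * x1 + leak1 * np.tanh(W1 @ x1 + Win * u)
        x2 = (1 - leak2) * x2 + leak2 * np.tanh(W2 @ x2 + W12 @ x1)
        states.append(np.concatenate([x1, x2]))
    return np.array(states)

states = run(np.sin(np.linspace(0, 8 * np.pi, 200)))
# A linear read-out over `states` is the only part that gets trained.
```

    The fast partition tracks the instantaneous input while the slow partition integrates its output over many steps, which is the separation of timescales the linear analysis in the abstract exploits.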

    EchoVPR: Echo state networks for visual place recognition

    Recognising previously visited locations is an important, but unsolved, task in autonomous navigation. Current visual place recognition (VPR) benchmarks typically challenge models to recover the position of a query image (or images) from sequential datasets that include both spatial and temporal components. Recently, Echo State Network (ESN) varieties have proven particularly powerful at solving machine learning tasks that require spatio-temporal modelling. These networks are simple yet powerful neural architectures that, by exhibiting memory over multiple timescales and non-linear high-dimensional representations, can discover temporal relations in the data while the learning itself remains linear. In this letter, we present a series of ESNs and analyse their applicability to the VPR problem. We report that the addition of ESNs to pre-processed convolutional neural networks led to a dramatic boost in performance in comparison to non-recurrent networks in five out of six standard benchmarks (GardensPoint, SPEDTest, ESSEX3IN1, Oxford RobotCar, and Nordland), demonstrating that ESNs are able to capture the temporal structure inherent in VPR problems. Moreover, we show that models that include ESNs can outperform class-leading VPR models which also exploit the sequential dynamics of the data. Finally, our results demonstrate that ESNs improve generalisation abilities, robustness, and accuracy, further supporting their suitability to VPR applications.
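    The pipeline described, CNN features in, an ESN for temporal context, and a linear readout over places, can be sketched as follows. This is a minimal stand-in, not the EchoVPR implementation: the random "features", all dimensions, and the ridge readout are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: each frame is a pre-computed CNN feature vector
# (random placeholders here); the ESN adds temporal context, and a
# linear readout maps reservoir states to place labels.
D, N, P, T = 32, 200, 5, 100     # feature dim, reservoir, places, frames

Win = rng.uniform(-0.5, 0.5, size=(N, D))
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # echo-state scaling

features = rng.normal(size=(T, D))           # stand-in CNN features
labels = rng.integers(0, P, size=T)          # stand-in place labels

# Collect reservoir states over the image sequence.
x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):
    x = np.tanh(Win @ features[t] + W @ x)
    X[t] = x

# Only the linear readout is trained (ridge regression), which is what
# keeps the learning linear despite the non-linear representation.
Y = np.eye(P)[labels]                        # one-hot place targets
Wout = np.linalg.solve(X.T @ X + 1e-2 * np.eye(N), X.T @ Y).T
pred = (X @ Wout.T).argmax(axis=1)
```

    In an actual VPR setting the placeholder features would come from a pre-trained CNN applied to consecutive camera frames, and evaluation would be on held-out traverses rather than the training sequence.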

    Reconfigurable reservoir computing in a magnetic metamaterial

    In-materia reservoir computing (RC) leverages the intrinsic physical responses of functional materials to perform complex computational tasks. Magnetic metamaterials are exciting candidates for RC due to their huge state space, nonlinear emergent dynamics, and non-volatile memory. However, to be suitable for a broad range of tasks, the material system is required to exhibit a broad range of properties, and isolating these behaviours experimentally can often prove difficult. Using an electrically accessible device consisting of an array of interconnected magnetic nanorings, a system shown to exhibit complex emergent dynamics, here we show how reconfiguring the reservoir architecture allows exploitation of different aspects of the system's dynamical behaviours. This is evidenced through state-of-the-art performance in diverse benchmark tasks with very different computational requirements, highlighting the additional computational configurability that can be obtained by altering the input/output architecture around the material system.

    A perspective on physical reservoir computing with nanomagnetic devices

    Neural networks have revolutionized the area of artificial intelligence and introduced transformative applications to almost every scientific field and industry. However, this success comes at a great price; the energy requirements for training advanced models are unsustainable. One promising way to address this pressing issue is by developing low-energy neuromorphic hardware that directly supports the algorithm's requirements. The intrinsic non-volatility, non-linearity, and memory of spintronic devices make them appealing candidates for neuromorphic hardware. Here, we focus on the reservoir computing paradigm, a recurrent network with a simple training algorithm that is well suited to computation with spintronic devices, since they can provide the required non-linearity and memory. We review technologies and methods for developing neuromorphic spintronic devices and conclude with critical open issues to address before such devices become widely used.