428 research outputs found

    Conversion of Artificial Recurrent Neural Networks to Spiking Neural Networks for Low-power Neuromorphic Hardware

    In recent years, the field of neuromorphic systems, which consume orders of magnitude less power than conventional hardware, has gained significant momentum. However, their wider use is still hindered by the lack of algorithms that can harness the strengths of such architectures. While neuromorphic adaptations of representation learning algorithms are now emerging, efficient processing of temporal sequences or variable-length inputs remains difficult. Recurrent neural networks (RNNs) are widely used in machine learning to solve a variety of sequence learning tasks. In this work we present a "train-and-constrain" methodology that enables the mapping of machine-learned (Elman) RNNs onto a substrate of spiking neurons, while remaining compatible with the capabilities of current and near-future neuromorphic systems. The method consists of first training RNNs using backpropagation through time, then discretizing the weights, and finally converting them to spiking RNNs by matching the responses of the artificial neurons with those of the spiking neurons. We demonstrate the approach on a natural language processing task (question classification), mapping the recurrent layer of the network onto IBM's Neurosynaptic System "TrueNorth", a spike-based digital neuromorphic hardware architecture. TrueNorth imposes specific constraints on connectivity and on neural and synaptic parameters; to satisfy them, it was necessary to discretize the synaptic weights and neural activities to 16 levels and to limit fan-in to 64 inputs. We find that short synaptic delays are sufficient to implement the dynamical (temporal) aspect of the RNN in the question classification task. The hardware-constrained model achieved 74% accuracy in question classification while using less than 0.025% of the cores on one TrueNorth chip, corresponding to an estimated power consumption of ~17 µW.
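    The weight-discretization step of the train-and-constrain flow can be sketched as follows. This is a hypothetical illustration of quantizing a trained weight matrix to 16 uniformly spaced levels (the TrueNorth constraint mentioned above), not the authors' exact mapping code:

```python
import numpy as np

def discretize_weights(w, levels=16):
    """Quantize a weight matrix to `levels` uniformly spaced values.
    Illustrative sketch of the 16-level constraint, not the authors'
    exact train-and-constrain procedure."""
    w_max = np.abs(w).max()
    if w_max == 0:
        return w.copy()
    step = 2.0 * w_max / levels
    # Map each weight to its nearest integer level, clipped so at most
    # `levels` distinct values remain (here: integers -8..7 for 16 levels)
    q = np.clip(np.round(w / step), -(levels // 2), levels // 2 - 1)
    return q * step
```

    A fan-in limit (64 inputs per neuron on TrueNorth) would similarly have to be enforced, e.g. by pruning or splitting connections, before the network is mapped onto cores.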

    Stochastic Synapses Enable Efficient Brain-Inspired Learning Machines

    Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that uses synaptic stochasticity as a means of Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but with stochasticity induced by a random mask over the connections. Synaptic stochasticity plays the dual role of an efficient mechanism for sampling and of a regularizer during learning, akin to DropConnect. A local synaptic plasticity rule implementing an event-driven form of contrastive divergence enables the learning of generative models in an online fashion. Synaptic Sampling Machines perform equally well using discrete-time artificial units (as in Hopfield networks) or continuous-time leaky integrate-and-fire neurons. The learned representations are remarkably sparse and robust to reductions in bit precision and to synapse pruning: removal of more than 75% of the weakest connections, followed by cursory re-learning, causes negligible performance loss on benchmark classification tasks. The spiking-neuron-based Synaptic Sampling Machines outperform existing spike-based unsupervised learners while potentially offering substantial advantages in power and complexity, and are thus promising models for online learning in brain-inspired hardware.
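    The "random mask over the connections" can be sketched as a Bernoulli blank-out of each synapse on every presentation. The function below is a minimal illustration of the idea, assuming an independent transmission probability p per synapse; it is not the event-driven implementation used in the paper:

```python
import numpy as np

def stochastic_synapse_pass(x, W, p=0.5, rng=None):
    """One forward pass in which each synapse transmits independently
    with probability p (a DropConnect-like random mask). Rescaling by
    1/p keeps the expected drive to each unit unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(W.shape) < p   # independent Bernoulli mask per synapse
    return (W * mask) @ x / p
```

    Averaged over many presentations, the masked drive converges to the deterministic product W @ x, which is what lets the same mechanism serve both as a sampling source and as a regularizer.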

    Forward Table-Based Presynaptic Event-Triggered Spike-Timing-Dependent Plasticity

    Spike-timing-dependent plasticity (STDP) incurs both causal and acausal synaptic weight updates, for negative and positive time differences between pre-synaptic and post-synaptic spike events. Realizing such updates in neuromorphic hardware currently requires either forward and reverse lookup access to the synaptic connectivity table, or memory-intensive architectures such as crossbar arrays. We present a novel method for realizing both causal and acausal weight updates using only forward lookup access to the synaptic connectivity table, permitting a memory-efficient implementation. A simplified FPGA implementation, using a single timer variable for each neuron, closely approximates exact cumulative STDP weight updates for neuron refractory periods greater than 10 ms, and reduces to exact STDP for refractory periods greater than the STDP time window. Compared to a conventional crossbar implementation, the forward table-based implementation yields substantial memory savings for sparsely connected networks, supporting scalable neuromorphic systems with fully reconfigurable synaptic connectivity and plasticity.
    Comment: Submitted to BioCAS 201
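    A minimal sketch of presynaptic event-triggered STDP with a single last-spike timer per neuron might look as follows. Names, kernel shape, and parameters are illustrative assumptions, not taken from the paper; the deferred causal update is exact only when at most one postsynaptic spike falls in the STDP window, i.e. when the refractory period exceeds the window, matching the condition stated above:

```python
import math

A_PLUS, A_MINUS, TAU = 0.10, 0.12, 20.0   # illustrative STDP parameters (ms)

def stdp_dw(dt):
    """Exponential STDP kernel; dt = t_post - t_pre."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)    # causal pair: potentiation
    if dt < 0:
        return -A_MINUS * math.exp(dt / TAU)   # acausal pair: depression
    return 0.0

def on_pre_spike(t, pre, last_pre, last_post, targets, weights):
    """On a presynaptic spike, use only forward lookup (pre -> targets):
    first apply the causal update deferred since the previous pre spike,
    then the acausal update for the current pre spike."""
    for post in targets[pre]:
        tp = last_post[post]              # the single timer of the post neuron
        if tp is not None:
            if last_pre[pre] is not None and last_pre[pre] < tp <= t:
                weights[(pre, post)] += stdp_dw(tp - last_pre[pre])  # deferred causal
            if tp < t:
                weights[(pre, post)] += stdp_dw(tp - t)              # acausal now
    last_pre[pre] = t
```

    No reverse (post-to-pre) table is touched: the causal half of the update is simply postponed until the next spike of the same presynaptic neuron, when forward lookup happens anyway.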

    Statistically segregated k-space sampling for accelerating multiple-acquisition MRI

    A central limitation of multiple-acquisition magnetic resonance imaging (MRI) is the degradation in scan efficiency as the number of distinct datasets grows. Sparse recovery techniques can alleviate this limitation via randomly undersampled acquisitions. A frequent sampling strategy is to prescribe for each acquisition a different random pattern drawn from a common sampling density. However, naive random patterns often contain gaps or clusters across the acquisition dimension that can degrade reconstruction quality or reduce scan efficiency. To address this problem, a statistically segregated sampling method is proposed for multiple-acquisition MRI. The method generates multiple patterns sequentially, while adaptively modifying the sampling density to minimize k-space overlap across patterns. As a result, it improves incoherence across acquisitions while maintaining a similar sampling density across the radial dimension of k-space. Comprehensive simulations and in vivo results are presented for phase-cycled balanced steady-state free precession and multi-echo T2-weighted imaging. Segregated sampling achieves significantly improved quality in both Fourier and compressed-sensing reconstructions of multiple-acquisition datasets.
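    The sequential, density-adapting idea can be sketched in one dimension as follows. The variable-density profile and the suppression rule here are assumptions chosen for illustration, not the published algorithm:

```python
import numpy as np

def segregated_masks(n_acq, n_k, n_samples, rng=None):
    """Draw one undersampling mask per acquisition, sequentially
    down-weighting k-space locations already taken by earlier masks so
    that overlap across acquisitions is reduced. Simplified 1-D sketch."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Assumed variable-density prior: denser near the center of k-space
    base = np.exp(-np.abs(np.arange(n_k) - n_k / 2) / (n_k / 4))
    taken = np.zeros(n_k)
    masks = []
    for _ in range(n_acq):
        density = base / (1.0 + taken)   # suppress already-sampled locations
        density /= density.sum()
        idx = rng.choice(n_k, size=n_samples, replace=False, p=density)
        m = np.zeros(n_k, dtype=bool)
        m[idx] = True
        taken += m
        masks.append(m)
    return np.array(masks)
```

    In practice the density would follow the imaging prior (e.g. a polynomial-decay profile with a fully sampled center) and the masks would be 2-D in the phase-encode plane, but the sequential suppression step is the core of the segregation idea.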

    Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning

    Embedded, continual learning for autonomous and adaptive behavior is a key application of neuromorphic hardware. However, neuromorphic implementations of embedded learning at large scales that are both flexible and efficient have been hindered by the lack of a suitable algorithmic framework. As a result, most neuromorphic hardware is trained off-line on large clusters of dedicated processors or GPUs, and the result is transferred post hoc to the device. We address this by introducing the Neural and Synaptic Array Transceiver (NSAT), a neuromorphic computational framework that facilitates flexible and efficient embedded learning by matching algorithmic requirements to neural and synaptic dynamics. NSAT supports event-driven supervised, unsupervised and reinforcement learning algorithms, including deep learning. We demonstrate NSAT on a wide range of tasks, including simulation of the Mihalas-Niebur neuron, dynamic neural fields, event-driven random back-propagation for event-based deep learning, event-based contrastive divergence for unsupervised learning, and voltage-based learning rules for sequence learning. We anticipate that this contribution will establish the foundation for a new generation of devices enabling adaptive mobile systems, wearable devices, and robots with data-driven autonomy.

    Compatibility Between Physical Stimulus Size – Spatial Position and False Recognitions

    Magnitude processing is of great interest to researchers because it requires the integration of quantity-related information in memory, regardless of whether the focus is on numerical or non-numerical magnitudes. Previous work has suggested an interplay between pre-existing semantic information about the number–space relationship and the processes of encoding and recall. Investigating the compatibility between physical stimulus size, spatial position, and false recognition may provide valuable information about the cognitive representation of non-numerical magnitudes. Therefore, we applied a false memory procedure to a series of non-numerical stimulus pairs. Three versions of the pairs were used: big-right (a big character on the right, a small character on the left), big-left (a big character on the left, a small character on the right), and equal-sized (an equal-sized character on each side). In the first phase, participants (N = 100) received 27 pairs, nine from each experimental condition. In the second phase, nine pairs from each of three stimulus categories were presented: (1) original pairs that had been presented in the first phase, (2) mirrored pairs that were horizontally flipped versions of the pairs presented in the first phase, and (3) novel pairs that had not been presented before. Participants were instructed to press "YES" for pairs they remembered seeing before and "NO" for pairs they did not remember from the first phase. The results indicated that participants made more false-alarm responses, responding "yes" to pairs with the bigger character on the right. Moreover, they responded faster to previously seen figures with the big character on the right than to their distracting counterparts. By employing a false memory procedure, the study provided evidence for a relationship between the physical size of stimuli and how they are processed spatially. We offered a size–space compatibility account based on the congruency between short- and long-term associations, which produces local compatibilities. Accordingly, the compatible stimuli in the learning phase might be responsible for the interference, reflecting a possible short-term interference effect on the congruency between short- and long-term associations. Clearly, future research is required to test this speculative position.

    Memory-Efficient Synaptic Connectivity for Spike-Timing-Dependent Plasticity

    Spike-Timing-Dependent Plasticity (STDP) is a bio-inspired, local, incremental weight update rule commonly used for online learning in spike-based neuromorphic systems. In STDP, the magnitude of long-term potentiation or depression of synaptic efficacy (weight) between two neurons is a function of the relative timing between pre- and post-synaptic action potentials (spikes), while the polarity of the change depends on the order (causality) of the spikes. Online STDP weight updates for causal and acausal relative spike times are triggered by post- and pre-synaptic spike events, respectively, implying access to synaptic connectivity in both the forward (pre-to-post) and reverse (post-to-pre) directions. Here we study the impact of different arrangements of the synaptic connectivity table on weight storage and STDP updates for large-scale neuromorphic systems. We analyze memory efficiency for varying degrees of connection density, ranging from crossbar arrays for full connectivity to pointer-based lookup for sparse connectivity. The study compares storage and access costs and efficiencies for each memory arrangement, along with a trade-off analysis of the benefits of each data structure depending on application requirements and budget. Finally, we present an alternative formulation of STDP via a delayed causal update mechanism that permits efficient weight access, requiring no more than forward connectivity lookup. We show functional equivalence of the delayed causal updates to the original STDP formulation, with substantial savings in storage and access costs for networks with the sparse synaptic connectivity typical of large-scale models in computational neuroscience.
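    The storage trade-off between a dense crossbar and a forward pointer-based table can be illustrated with simple bit accounting. The field widths below are assumptions for illustration, not figures from the paper:

```python
import math

def synapse_memory_bits(n_neurons, fan_out, w_bits=8, ptr_bits=None):
    """Bits of synaptic storage for a dense crossbar vs a forward
    pointer-based table with average fan-out `fan_out`.
    Illustrative accounting only; field widths are assumptions."""
    if ptr_bits is None:
        ptr_bits = math.ceil(math.log2(n_neurons))       # target-neuron index width
    crossbar = n_neurons * n_neurons * w_bits            # a weight for every pair
    table = n_neurons * fan_out * (ptr_bits + w_bits)    # (index, weight) per synapse
    return crossbar, table
```

    For example, with 4096 neurons, an average fan-out of 100, and 8-bit weights, the crossbar needs 4096² × 8 ≈ 134 Mbit while the table needs 4096 × 100 × (12 + 8) ≈ 8.2 Mbit; the pointer-based layout wins whenever connectivity is sparse, and the crossbar becomes preferable only as fan-out approaches the neuron count.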

    Early glycoprotein IIb–IIIa inhibitors in primary angioplasty (EGYPT) cooperation: an individual patient data meta-analysis

    Background: Even though time-to-treatment has been shown to be a determinant of mortality in primary angioplasty, the potential benefits of early pharmacological reperfusion with glycoprotein (Gp) IIb-IIIa inhibitors are still unclear. The aim of this meta-analysis was to combine individual data from all randomised trials conducted on facilitated primary angioplasty using early Gp IIb-IIIa inhibitors. Methods and results: The literature was scanned by formal searches of electronic databases (MEDLINE, EMBASE) from January 1990 to October 2007. All randomised trials on facilitation by early administration of Gp IIb-IIIa inhibitors in ST-segment elevation myocardial infarction (STEMI) were examined; no language restrictions were enforced. Individual patient data were obtained from 11 of 13 trials, totalling 1662 patients: 840 (50.5%) randomly assigned to early and 822 (49.5%) to late Gp IIb-IIIa inhibitor administration. Preprocedural Thrombolysis in Myocardial Infarction (TIMI) grade 3 flow was more frequent with early Gp IIb-IIIa inhibitors. Postprocedural TIMI grade 3 flow and myocardial blush grade 3 were higher with early Gp IIb-IIIa inhibitors but did not reach statistical significance, except for abciximab, whereas the rate of complete ST-segment resolution was significantly higher with early Gp IIb-IIIa inhibitors. Mortality was not significantly different between groups, although early abciximab demonstrated improved survival compared with late administration, even after adjustment for clinical and angiographic confounders. Conclusions: This meta-analysis shows that pharmacological facilitation with early administration of Gp IIb-IIIa inhibitors in patients undergoing primary angioplasty for STEMI is associated with significant benefits in preprocedural epicardial recanalisation and ST-segment resolution, which translated into non-significant mortality benefits except for abciximab.