
    Dynamic Slicing for Deep Neural Networks

    Program slicing has been widely applied in a variety of software engineering tasks. However, existing program slicing techniques only deal with traditional programs constructed from instructions and variables, not neural networks composed of neurons and synapses. In this paper, we propose NNSlicer, the first approach for slicing deep neural networks based on data flow analysis. Our method characterizes the reaction of each neuron to an input as the difference between its behavior when activated by that input and its average behavior over the whole dataset. We then quantify each neuron's contribution to the slicing criterion by recursively backtracking from the output neurons, and compute the slice as the neurons and synapses with the largest contributions. We demonstrate the usefulness and effectiveness of NNSlicer with three applications: adversarial input detection, model pruning, and selective model protection. In all three applications, NNSlicer significantly outperforms baselines that do not rely on data flow analysis.
    Comment: 11 pages, ESEC/FSE '2
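
    The backtracking step is the heart of the method, so a toy sketch may help. The NumPy fragment below is our illustration of the idea on a small ReLU network, not the authors' implementation: the exact contribution formula, the helper names, and the quantile threshold are all simplifying assumptions.

        import numpy as np

        def relu(x):
            return np.maximum(x, 0.0)

        def forward(weights, x):
            """Per-layer activations of a small ReLU network."""
            acts = [x]
            for W in weights:
                acts.append(relu(W @ acts[-1]))
            return acts

        def slice_network(weights, dataset, x, out_neuron, keep_frac=0.2):
            """Toy NNSlicer-style slice: react -> backtrack -> threshold."""
            # Average behavior of every neuron over the whole dataset.
            all_acts = [forward(weights, d) for d in dataset]
            avg = [np.mean([a[l] for a in all_acts], axis=0)
                   for l in range(len(weights) + 1)]
            # Reaction: behavior on this input minus the average behavior.
            reactions = [a - m for a, m in zip(forward(weights, x), avg)]
            # Recursively backtrack contributions from the output neuron.
            contrib = [np.zeros_like(r) for r in reactions]
            contrib[-1][out_neuron] = 1.0
            for l in range(len(weights) - 1, -1, -1):
                # A synapse matters if its downstream neuron matters and it
                # carried a strongly weighted reaction on this input.
                syn = np.abs(np.outer(contrib[l + 1], reactions[l]) * weights[l])
                contrib[l] = syn.sum(axis=0)
            # The slice keeps the neurons with the largest contributions.
            thresh = np.quantile(np.concatenate(contrib), 1.0 - keep_frac)
            return [c >= thresh for c in contrib]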

    A Scalable Flash-Based Hardware Architecture for the Hierarchical Temporal Memory Spatial Pooler

    Hierarchical temporal memory (HTM) is a biomimetic machine learning algorithm focused on modeling the structural and algorithmic properties of the neocortex. It comprises two components, realizing pattern recognition of spatial and temporal data, respectively. HTM research has gained momentum in recent years, leading to both hardware and software exploration of its algorithmic formulation. Previous work on HTM has centered on addressing performance concerns; however, the memory-bound operation of HTM presents significant challenges to scalability. In this work, a scalable flash-based storage processor unit, Flash-HTM (FHTM), is presented along with a detailed analysis of its potential scalability. FHTM leverages SSD flash technology to implement the HTM cortical learning algorithm's spatial pooler. The ability of FHTM to scale with increasing model complexity is addressed with respect to design footprint, memory organization, and power efficiency. Additionally, a mathematical model of the hardware is evaluated against the MNIST dataset, yielding 91.98% classification accuracy. A fully custom layout is developed to validate the design in a TSMC 180 nm process. The area and power footprints of the spatial pooler are 30.538 mm² and 5.171 mW, respectively. Storage processor units have the potential to be viable platforms to support implementations of HTM at scale.
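
    For readers unfamiliar with the algorithm FHTM accelerates, here is a minimal software sketch of one spatial pooler step. The constants, the global top-k inhibition, and the uniform permanence initialization are our simplifications of the HTM cortical learning algorithm, not details of the FHTM hardware.

        import numpy as np

        rng = np.random.default_rng(0)

        N_INPUT, N_COLS, ACTIVE_COLS = 256, 128, 16
        CONNECT_THRESH, PERM_INC, PERM_DEC = 0.5, 0.05, 0.02

        # Each column holds a permanence value per input bit; synapses whose
        # permanence crosses the threshold count as "connected".
        permanence = rng.uniform(0.3, 0.7, size=(N_COLS, N_INPUT))

        def spatial_pooler_step(x):
            """One spatial pooler step: overlap, inhibition, Hebbian update."""
            connected = permanence >= CONNECT_THRESH
            # Overlap: number of connected synapses on active input bits.
            overlap = connected @ x
            # Global inhibition: only the top-k columns become active.
            active = np.zeros(N_COLS, dtype=bool)
            active[np.argsort(overlap)[-ACTIVE_COLS:]] = True
            # Learning: strengthen synapses on active bits, weaken the rest.
            permanence[active] += np.where(x > 0, PERM_INC, -PERM_DEC)
            np.clip(permanence, 0.0, 1.0, out=permanence)
            return active

        x = (rng.random(N_INPUT) < 0.1).astype(int)   # sparse binary input
        sdr = spatial_pooler_step(x)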

    Simulation of networks of spiking neurons: A review of tools and strategies

    We review different aspects of the simulation of spiking neural networks. We start by reviewing the different types of simulation strategies and algorithms that are currently implemented. We next review the precision of those simulation strategies, in particular in cases where plasticity depends on the exact timing of the spikes. We then survey the simulators and simulation environments presently available (restricted to those that are freely available, open source, and documented). For each simulation tool, its advantages and pitfalls are reviewed, with the aim of allowing the reader to identify which simulator is appropriate for a given task. Finally, we provide a series of benchmark simulations of different types of networks of spiking neurons, including Hodgkin-Huxley type and integrate-and-fire models, interacting through current-based or conductance-based synapses, using clock-driven or event-driven integration strategies. The same set of models is implemented on the different simulators, and the code is made available. The ultimate goal of this review is to provide a resource to facilitate identifying the appropriate integration strategy and simulation tool for a given modeling problem related to spiking neural networks.
    Comment: 49 pages, 24 figures, 1 table; review article, Journal of Computational Neuroscience, in press (2007)
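
    As a concrete reference for the strategies being benchmarked, here is a minimal clock-driven (fixed-step) integration of a current-based leaky integrate-and-fire neuron; the parameter values are arbitrary. Note that spike times come out quantized to the time grid, which is exactly the precision issue the review raises for timing-dependent plasticity; event-driven schemes avoid it by solving for threshold crossings between events.

        import numpy as np

        def lif_clock_driven(I, dt=0.1, tau=20.0, v_rest=-65.0,
                             v_reset=-65.0, v_thresh=-50.0, R=10.0):
            """Clock-driven integration of one current-based LIF neuron.

            I: input current per step (nA); returns spike times in ms.
            """
            v, spikes = v_rest, []
            for step, i_t in enumerate(I):
                # Forward-Euler update of the membrane equation
                # tau * dv/dt = -(v - v_rest) + R * I(t)
                v += dt / tau * (-(v - v_rest) + R * i_t)
                if v >= v_thresh:
                    spikes.append(step * dt)   # spike aligned to the grid
                    v = v_reset
            return spikes

        # One second of constant suprathreshold drive.
        spikes = lif_clock_driven(np.full(10_000, 1.6))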

    Neural networks-on-chip for hybrid bio-electronic systems

    PhD thesis. By modelling the brain's computation we can further our understanding of its function and develop novel treatments for neurological disorders. The brain is incredibly powerful and energy efficient, but its computation does not fit well with the traditional computer architecture developed over the previous 70 years. Therefore, there is growing research focus on developing alternative computing technologies to enhance our neural modelling capability, with the expectation that the technology itself will also benefit from increased awareness of neural computational paradigms. This thesis focuses upon developing a methodology to study the design of neural computing systems, with an emphasis on systems suitable for biomedical experiments. The methodology allows the design to be optimized for the application: for example, different case studies highlight how to reduce energy consumption, reduce silicon area, or increase network throughput. High-performance processing cores are presented for both Hodgkin-Huxley and Izhikevich neurons, incorporating novel design features. Further, a complete energy/area model for a neural-network-on-chip is derived, which is used in two exemplar case studies: a cortical neural circuit to benchmark typical system performance, illustrating how a 65,000-neuron network could be processed in real time within a 100 mW power budget; and a scalable high-performance processing platform for a cerebellar neural prosthesis. From these case studies, the contribution of network granularity towards optimal neural-network-on-chip performance is explored.
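
    Of the two neuron models the processing cores implement, the Izhikevich model is compact enough to state in a few lines. The sketch below is the standard published update with regular-spiking constants, intended only as a floating-point reference; the thesis's hardware cores use fixed-point arithmetic and pipelining that this omits.

        def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
            """One fixed-step update of the Izhikevich neuron model
            (regular-spiking parameters); returns (v, u, spiked)."""
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:                # spike cutoff
                return c, u + d, True    # reset v, bump recovery variable
            return v, u, False

        # Drive one neuron for a second of simulated time (dt = 1 ms).
        v, u, spikes = -65.0, -13.0, []
        for t in range(1000):
            v, u, fired = izhikevich_step(v, u, I=10.0)
            if fired:
                spikes.append(t)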

    The neural marketplace

    The 'retroaxonal hypothesis' (Harris, 2008) posits a role for slow retrograde signalling in learning. It is based on the intuition that cells with strong output synapses tend to be those that encode useful information, and that cells which encode useful information should not modify their input synapses too readily. The hypothesis has two parts: first, that the stronger a cell's output synapses, the less likely it is to change its input synapses; and second, that a cell is more likely to revert changes to its input synapses when the changes are followed by weakening of its output synapses. It is motivated in part by an analogy between a neural network and a market economy, viewing neurons as 'entrepreneurs' who 'sell' spike trains to each other. In this view, the slow retrograde signals which tell a neuron that it has strong output synapses are 'money' and imply that what it produces is useful. This thesis constructs a mathematical model of learning which validates the intuition of the retroaxonal hypothesis. In this model, we show that neurons can estimate their usefulness, or 'worth', from the magnitude of their output weights. We also show that by making each cell's input synapses more or less plastic according to its worth, the performance of a network can be improved.
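
    One way to picture the second result is a learning rule whose per-neuron rate is gated by worth. The fragment below is our hypothetical rendering of that intuition, not the thesis's model: the worth estimate, the 1/(1 + worth) gating, and all names are illustrative assumptions.

        import numpy as np

        def worth_modulated_update(W_in, W_out, x, error, lr=0.1):
            """Hypothetical sketch: gate each hidden unit's input plasticity
            by a 'worth' read off its output-weight magnitude."""
            # Worth of unit j: the size of its outgoing weight vector.
            worth = np.linalg.norm(W_out, axis=0)       # one value per unit
            # High-worth units change their input synapses less readily.
            plasticity = lr / (1.0 + worth)
            # Ordinary error-driven update, scaled per hidden unit.
            W_in += plasticity[:, None] * np.outer(error, x)
            return W_in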

    Bio-inspired Neuromorphic Computing Using Memristor Crossbar Networks

    Bio-inspired neuromorphic computing systems built with emerging devices such as memristors have become an active research field. Experimental demonstrations at the network level have suggested memristor-based neuromorphic systems as a promising candidate to overcome the von Neumann bottleneck in future computing applications. As a hardware system that offers co-location of memory and data processing, memristor-based networks represent an efficient computing platform with minimal data transfer and high parallelism. Furthermore, active utilization of the dynamic processes during resistive switching in memristors can help realize more faithful emulation of biological device and network behaviors, with the potential to process dynamic temporal inputs efficiently. In this thesis, I present experimental demonstrations of neuromorphic systems using fabricated memristor arrays as well as network-level simulation results. Models of resistive switching behavior in two types of memristor devices, conventional first-order and recently proposed second-order memristor devices, will first be introduced. Secondly, an experimental demonstration of K-means clustering through unsupervised learning in a memristor network will be presented. The memristor-based hardware system achieved high classification accuracy (93.3%) on the standard IRIS data set, suggesting practical networks can be built with optimized memristor devices. Thirdly, the implementation of a partial differential equation (PDE) solver in memristor arrays will be discussed. This work expands the capability of memristor-based computing hardware from 'soft' to 'hard' computing tasks, which require very high precision and accurate solutions. In general, first-order memristors are suitable for tasks based on vector-matrix multiplications, ranging from K-means clustering to PDE solvers. On the other hand, utilizing internal device dynamics in second-order memristors allows natural emulation of biological behaviors and enables network functions such as temporal data processing. An effort to explore second-order memristor devices and their network behaviors will be discussed. Finally, we propose ideas for building large passive memristor crossbar arrays, including fabrication approaches, guidelines for device structure, and analysis of the parasitic effects in larger arrays.
    PhD, Electrical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/147610/1/yjjeong_1.pd
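
    The vector-matrix multiplication underlying both the K-means and PDE-solver demonstrations follows directly from Ohm's and Kirchhoff's laws: each column current is the dot product of the input voltages with that column's conductances. A numerical sketch follows; the differential-pair encoding of signed weights is a common convention we assume here, and the values are made up.

        import numpy as np

        def crossbar_vmm(G, v_in):
            """Analog vector-matrix multiply on a memristor crossbar.

            G[r, c] is the conductance at row r, column c; the current
            collected on each column is I = G^T @ v."""
            return G.T @ v_in

        # Map a signed weight matrix onto two non-negative conductance
        # arrays (differential-pair encoding).
        W = np.array([[0.5, -0.3], [-0.2, 0.8]])
        G_pos, G_neg = np.clip(W, 0, None), np.clip(-W, 0, None)
        v = np.array([1.0, 0.5])
        out = crossbar_vmm(G_pos, v) - crossbar_vmm(G_neg, v)  # == W.T @ v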

    Stochastic resonance and finite resolution in a network of leaky integrate-and-fire neurons.

    This thesis is a study of stochastic resonance (SR) in a discrete implementation of a leaky integrate-and-fire (LIF) neuron network. The aim was to determine whether SR can be realised in limited-precision discrete systems implemented on digital hardware. How neuronal modelling connects with SR is discussed. Analysis techniques for noisy spike trains are described, ranging from rate coding and statistical measures to signal processing measures like the power spectrum and signal-to-noise ratio (SNR). The main problem in computing spike train power spectra is how to get equi-spaced sample amplitudes, given the short duration of spikes relative to their frequency. Three different methods of computing the SNR of a spike train from its power spectrum are described. The main problem here is how to separate the power at the frequencies of interest from the noise power, since the spike train encodes both noise and the signal of interest. Two models of the LIF neuron were developed, one continuous and one discrete, and the results compared. The discrete model allowed variation of the precision of the simulation values, enabling investigation of the effect of limited precision on SR. The main difference between the two models lies in the evolution of the membrane potential: when both models are allowed to decay from a high start value in the absence of input, the discrete model does not completely discharge, while the continuous model discharges to almost zero. The results of simulating the discrete model on an FPGA and the continuous model on a PC showed that SR can be realised in discrete low-resolution digital systems. SR was found to be sensitive to the precision of the values in the simulations. For a single neuron, we find that SR increases between 10 and 12 bits of resolution, after which it saturates. For a feed-forward network with multiple input neurons and one output neuron, SR is stronger with more than 6 input neurons, and it saturates at a higher resolution. We conclude that stochastic resonance can manifest in discrete systems, though to a lesser extent than in continuous systems.
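
    A minimal simulation conveys the effect being studied: a subthreshold periodic signal becomes visible in the spike train only when noise is added, and too much noise destroys the correlation again. The sketch below uses a crude phase-locking fraction rather than the thesis's power-spectrum SNR, and every constant is arbitrary.

        import numpy as np

        def lif_sr_trial(noise_std, f_signal=5.0, amp=0.9, v_th=1.0,
                         dt=1e-4, t_max=4.0, tau=0.02, seed=0):
            """LIF neuron driven by a subthreshold sinusoid plus white noise;
            returns (spike count, fraction of spikes in the depolarizing
            half-cycle)."""
            rng = np.random.default_rng(seed)
            t = np.arange(int(t_max / dt)) * dt
            drive = amp * np.sin(2 * np.pi * f_signal * t)  # subthreshold alone
            v, total, locked = 0.0, 0, 0
            for k in range(t.size):
                v += (dt / tau * (-v + drive[k])
                      + noise_std * np.sqrt(dt) * rng.standard_normal())
                if v >= v_th:
                    v = 0.0                    # reset after a spike
                    total += 1
                    locked += drive[k] > 0     # spike in the preferred half-cycle
            return total, locked / max(total, 1)

        # Too little noise: almost no spikes. Too much: spikes ignore the
        # signal. SR is the sweet spot where the output best encodes the input.
        for sigma in (0.2, 1.0, 5.0):
            print(sigma, lif_sr_trial(sigma))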