15 research outputs found

    A Reconfigurable Mixed-signal Implementation of a Neuromorphic ADC

    We present a neuromorphic Analogue-to-Digital Converter (ADC) that uses integrate-and-fire (I&F) neurons as encoders of the analogue signal, with modulated inhibition to decohere the neuronal spike trains. The architecture consists of an analogue chip and a control module. The analogue chip comprises two scan chains and a two-dimensional integrate-and-fire neuronal array; individual neurons are accessed via the chains one by one, without any encoder, decoder, or arbiter. The control module is implemented on an FPGA (Field Programmable Gate Array), which sends scan-enable signals to the scan chains and controls the inhibition for individual neurons; because it resides on an FPGA, the control module can be easily reconfigured. Additionally, we propose a pulse-width-modulation methodology for the lateral inhibition, in which different pulse widths indicate different inhibition strengths for each individual neuron, decohering the neuronal spikes. Software simulations test the robustness of the proposed ADC architecture to fixed random noise, and a circuit simulation with ten neurons demonstrates the performance and feasibility of the architecture.
    Comment: BioCAS-201
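
    The pulse-width-modulated inhibition idea can be illustrated with a minimal behavioural sketch (below). This is not the authors' circuit: the neuron model, time step, input, and inhibition pulse widths are assumptions chosen only to show how per-neuron inhibition pulse widths spread out otherwise coherent spike times.

    import numpy as np

    # Behavioural sketch (assumptions, not the paper's circuit): an array of
    # integrate-and-fire neurons encodes the same analogue input, and each
    # firing neuron inhibits the others for a per-neuron pulse width, the
    # assumed stand-in for pulse-width-modulated lateral inhibition.
    def ifn_array(signal, inhib_widths, dt=1e-4, threshold=1.0):
        n = len(inhib_widths)
        v = np.zeros(n)                    # membrane state of each neuron
        inhibited_until = np.zeros(n)      # time until which a neuron is inhibited
        spikes = [[] for _ in range(n)]
        for k, x in enumerate(signal):
            t = k * dt
            active = t >= inhibited_until  # inhibited neurons do not integrate
            v[active] += x * dt
            for i in np.where(v >= threshold)[0]:
                spikes[i].append(t)
                v[i] = 0.0                 # fire and reset
                others = np.arange(n) != i # laterally inhibit all other neurons
                inhibited_until[others] = np.maximum(inhibited_until[others],
                                                     t + inhib_widths[others])
        return spikes

    t = np.arange(0, 0.1, 1e-4)
    x = 50.0 * (1.2 + np.sin(2 * np.pi * 30 * t))   # assumed analogue input
    widths = np.linspace(0.5e-3, 2e-3, 10)          # assumed per-neuron pulse widths
    spike_trains = ifn_array(x, widths)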

    Real-Time Decoding of an Integrate and Fire Encoder

    Neuronal encoding models range from the detailed, biophysically based Hodgkin-Huxley model to the statistical linear time-invariant model that specifies firing rates in terms of the extrinsic signal. Decoding the former is intractable, while the latter does not adequately capture the nonlinearities present in the neuronal encoding system. For practical applications, we wish to record the output of neurons, namely spikes, and decode this signal quickly in order to act on it, for example to drive a prosthetic device. Here, we introduce a causal, real-time decoder of the biophysically based integrate-and-fire encoding neuron model. We show that the upper bound on the real-time reconstruction error decreases polynomially in time, and that the L2 norm of the error is bounded by a constant that depends on the density of the spikes as well as the bandwidth and the decay of the input signal. We numerically validate the effect of these parameters on the reconstruction error.
    Funding: National Science Foundation (U.S.) (Emerging Frontiers in Research and Innovation Grant 1137237)
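
    The constraint such decoders exploit is that an ideal integrate-and-fire encoder makes the integral of the input between consecutive spikes equal to the firing threshold. The sketch below encodes a test signal and fits a bandlimited model to those constraints by offline least squares; it only illustrates the constraint, not the causal real-time decoder of the paper, and the threshold, bandwidth, kernel, and test signal are assumptions.

    import numpy as np

    def if_encode(x, t, theta=0.05):
        """Spike times of an ideal integrate-and-fire encoder (no leak, no refractory period)."""
        v, spikes = 0.0, []
        dt = t[1] - t[0]
        for ti, xi in zip(t, x):
            v += xi * dt
            if v >= theta:
                spikes.append(ti)
                v -= theta
        return np.array(spikes)

    def decode_lsq(spikes, t, bandwidth, theta=0.05):
        """Offline least-squares fit of sinc kernels to the inter-spike integral constraints."""
        def kernel_integral(a, b, c):
            tt = np.linspace(a, b, 200)                 # numeric integral of one kernel
            vals = np.sinc(2 * bandwidth * (tt - c))
            return np.sum(0.5 * (vals[:-1] + vals[1:])) * (tt[1] - tt[0])
        A = np.array([[kernel_integral(spikes[k], spikes[k + 1], c) for c in spikes]
                      for k in range(len(spikes) - 1)])
        q = np.full(len(spikes) - 1, theta)             # each inter-spike integral equals theta
        coeffs, *_ = np.linalg.lstsq(A, q, rcond=None)
        return sum(ck * np.sinc(2 * bandwidth * (t - sk)) for ck, sk in zip(coeffs, spikes))

    t = np.linspace(0, 1, 2000)
    x = 0.4 + 0.3 * np.sin(2 * np.pi * 3 * t)           # positive, slowly varying test input
    x_hat = decode_lsq(if_encode(x, t), t, bandwidth=5.0)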

    Neuromorphic Sampling of Signals in Shift-Invariant Spaces

    Neuromorphic sampling is a paradigm shift in analog-to-digital conversion in which the acquisition strategy is opportunistic and measurements are recorded only when there is a significant change in the signal. Neuromorphic sampling has given rise to a new class of event-based sensors called dynamic vision sensors or neuromorphic cameras. The neuromorphic sampling mechanism operates at low power and provides high-dynamic-range sensing with low latency and high temporal resolution. The measurements are sparse and have low redundancy, which makes them convenient for downstream tasks. In this paper, we present a sampling-theoretic perspective on neuromorphic sensing of continuous-time signals. We establish a close connection between neuromorphic sampling and time-based sampling, in which signals are encoded temporally. We analyse neuromorphic sampling of signals in shift-invariant spaces, in particular, bandlimited signals and polynomial splines. We present an iterative technique for perfect reconstruction subject to the events satisfying a density criterion, and we provide necessary and sufficient conditions for perfect reconstruction. Owing to practical limitations in meeting the sufficient conditions for perfect reconstruction, we extend the analysis to approximate reconstruction from sparse events. In the latter setting, we pose signal reconstruction as a continuous-domain linear inverse problem whose solution can be obtained by solving an equivalent finite-dimensional convex optimization program using a variable-splitting approach. We demonstrate the performance of the proposed algorithm and validate our claims via experiments on synthetic signals.
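
    The event-generation mechanism described above, where a signal change exceeding a contrast threshold triggers a timestamped event with a polarity, can be sketched in a few lines. The threshold and test signal below are assumptions, and the paper's iterative and convex-optimization reconstruction machinery is not shown.

    import numpy as np

    # Sketch of change-triggered (send-on-delta style) event generation: an
    # event (time, polarity) is emitted whenever the signal drifts from the
    # last recorded reference by more than a contrast threshold (assumed value).
    def neuromorphic_sample(t, x, threshold=0.1):
        events, ref = [], x[0]
        for ti, xi in zip(t, x):
            if xi - ref >= threshold:
                events.append((ti, +1))    # positive event
                ref += threshold
            elif ref - xi >= threshold:
                events.append((ti, -1))    # negative event
                ref -= threshold
        return events

    t = np.linspace(0, 1, 5000)
    x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 7 * t)
    events = neuromorphic_sample(t, x)
    print(f"{len(events)} events from {len(t)} uniform samples")

    The resulting event stream is sparse where the signal varies slowly and dense where it changes rapidly, which is the property the reconstruction guarantees above depend on.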

    Neuromorphic Sampling of Sparse Signals

    Neuromorphic sampling is a bioinspired and opportunistic analog-to-digital conversion technique in which measurements are recorded only when there is a significant change in the signal amplitude. Neuromorphic sampling has paved the way for a new class of vision sensors called event cameras or dynamic vision sensors (DVS), which consume low power, accommodate a high dynamic range, and provide sparse measurements with high temporal resolution, making them convenient for downstream inference tasks. In this paper, we consider neuromorphic sensing of signals with a finite rate of innovation (FRI), including streams of Dirac impulses, sums of weighted and time-shifted pulses, and piecewise-polynomial functions. We take a sampling-theoretic approach and leverage the close connection between neuromorphic sensing and time-based sampling, where the measurements are encoded temporally. Using Fourier-domain analysis, we show that perfect signal reconstruction is possible via parameter estimation using high-resolution spectral estimation methods. We develop a kernel-based sampling approach that allows for perfect reconstruction with a sample complexity equal to the rate of innovation of the signal. We provide sufficient conditions on the parameters of the neuromorphic encoder for perfect reconstruction. Furthermore, we extend the analysis to multichannel neuromorphic sampling of FRI signals in the single-input multi-output (SIMO) and multi-input multi-output (MIMO) configurations, and show that the signal parameters can be jointly estimated using multichannel measurements. Experimental results are provided to substantiate the theoretical claims.
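
    The Fourier-domain parameter-estimation step that FRI methods of this kind rely on can be sketched with a standard annihilating-filter (Prony-type) recovery of a Dirac stream from a few Fourier coefficients. The sketch below does not model the neuromorphic encoder or the kernel of the paper; the number of Diracs, the period, and the coefficient count are assumptions.

    import numpy as np

    # Recover K Dirac locations/amplitudes from 2K+1 Fourier coefficients via an
    # annihilating filter (a standard high-resolution spectral-estimation step).
    rng = np.random.default_rng(0)
    K, T = 3, 1.0                                   # assumed model order and period
    tk = np.sort(rng.uniform(0, T, K))              # true locations
    ak = rng.uniform(0.5, 1.5, K)                   # true amplitudes

    M = 2 * K + 1
    m = np.arange(M)
    X = (np.exp(-2j * np.pi * np.outer(m, tk) / T) @ ak) / T   # Fourier coefficients

    # Annihilating filter: null-space vector of a Toeplitz system built from X.
    A = np.array([[X[i + K - l] for l in range(K + 1)] for i in range(M - K)])
    h = np.linalg.svd(A)[2][-1].conj()              # right singular vector of smallest singular value
    uk = np.roots(h)                                # roots encode the locations
    tk_hat = np.sort(np.mod(-np.angle(uk) * T / (2 * np.pi), T))

    # Amplitudes from a Vandermonde least-squares fit.
    V = np.exp(-2j * np.pi * np.outer(m, tk_hat) / T) / T
    ak_hat = np.real(np.linalg.lstsq(V, X, rcond=None)[0])

    print("locations:", tk, "estimated:", tk_hat)
    print("amplitudes:", ak, "estimated:", ak_hat)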

    Operation and reconstruction of signals based on integrate-and-fire conversion using FPGA

    Neurons are diverse in functionality and behaviour; however, a family of them can be modeled as an electrical circuit that aggregates input currents from neighboring neurons. The Integrate-and-Fire Neuron (IFN) is a simplified view of neuron operation that reduces the model to a first-order fire-and-reset system. IFN encoding can be used as a modulation technique to transmit data very efficiently. It is also possible to perform some signal operations directly in the IFN space, working only with the time intervals between IFN spikes, and there are several approaches, in either the time domain or the frequency domain, for reconstructing the analog signal at the receiver side. After analysing the different methods for signal reconstruction based on the IFN modulation technique, and after surveying different methods for performing mathematical operations in the IFN space, the main goal of the thesis is to synthesize, on an FPGA, the fundamental mathematical operations on spike-based signals, together with a reconstruction method that converts the IFN signal back to a magnitude/time signal with a reasonable bit resolution.
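
    Two of those ideas can be sketched in software (the thesis targets an FPGA; this is only a behavioural model with an assumed threshold and assumed test signals): every spike of an ideal IFN encoder stands for a fixed integral quantum, so adding two signals reduces, up to quantisation error, to merging their spike trains, and a simple time-domain reconstruction maps each inter-spike interval back to an amplitude.

    import numpy as np

    THETA = 0.02                         # assumed integral quantum carried by one spike

    def ifn_encode(x, t, theta=THETA):
        """Spike times of an ideal (leak-free) integrate-and-fire encoder."""
        v, spikes = 0.0, []
        dt = t[1] - t[0]
        for ti, xi in zip(t, x):
            v += xi * dt
            if v >= theta:
                spikes.append(ti)
                v -= theta
        return np.array(spikes)

    def decode_intervals(spikes, theta=THETA):
        """Amplitude estimate at interval midpoints: theta / inter-spike interval."""
        mid = 0.5 * (spikes[1:] + spikes[:-1])
        return mid, theta / np.diff(spikes)

    t = np.linspace(0, 1, 10000)
    x1 = 0.6 + 0.3 * np.sin(2 * np.pi * 2 * t)
    x2 = 0.5 + 0.2 * np.sin(2 * np.pi * 5 * t)

    # Addition in the spike domain: merge the two spike trains and sort; the
    # merged train approximates the IFN encoding of x1 + x2.
    s_sum = np.sort(np.concatenate([ifn_encode(x1, t), ifn_encode(x2, t)]))
    mid, amp = decode_intervals(s_sum)   # approximates x1 + x2 at the interval midpoints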

    Leveraging spiking deep neural networks to understand the neural mechanisms underlying selective attention

    Spatial attention enhances sensory processing of goal-relevant information and improves perceptual sensitivity. Yet the specific neural mechanisms underlying the effects of spatial attention on performance are still contested. Here, we examine different attention mechanisms in spiking deep convolutional neural networks. We directly contrast the effects of precision (internal noise suppression) and of two different gain-modulation mechanisms on performance in a visual search task with complex real-world images. Unlike standard artificial neurons, biological neurons have saturating activation functions, which permits implementing attentional gain either as gain on a neuron’s input or as gain on its outgoing connection. We show that modulating the connection is most effective in selectively enhancing information processing, both by redistributing spiking activity and by introducing additional task-relevant information, as shown by representational similarity analyses. Precision produced only minor attentional effects on performance. Our results, which mirror empirical findings, show that it is possible to adjudicate between attention mechanisms using more biologically realistic models and natural stimuli.
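
    The distinction between the two gain placements can be illustrated with a toy calculation (below). The saturating activation, gain value, and weight are assumptions, not the spiking network used in the study: with a saturating nonlinearity, scaling a neuron's input compresses near saturation, whereas scaling its outgoing connection rescales the transmitted activity itself.

    import numpy as np

    # Toy contrast between input gain and output (connection) gain for a
    # neuron with a saturating activation; all values are assumptions.
    def saturating_rate(drive):
        return 1.0 / (1.0 + np.exp(-drive))        # stand-in saturating nonlinearity

    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 2.0, 1000)                 # samples of presynaptic drive
    w_out, gain = 0.8, 2.0                         # assumed weight and attentional gain

    baseline    = w_out * saturating_rate(x)
    input_gain  = w_out * saturating_rate(gain * x)    # gain applied to the input
    output_gain = gain * w_out * saturating_rate(x)    # gain applied to the connection

    print("mean downstream drive:",
          baseline.mean(), input_gain.mean(), output_gain.mean())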