
    An investigation into adaptive power reduction techniques for neural hardware

    In light of the growing applicability of Artificial Neural Networks (ANNs) in the signal processing field [1] and the present thrust of the semiconductor industry towards low-power SoCs for mobile devices [2], the power consumption of ANN hardware has become a very important implementation issue. Adaptability is a powerful and useful feature of neural networks, yet all current approaches to low-power ANN hardware are ‘non-adaptive’ with respect to the power consumption of the network (i.e. power reduction is not an objective of the adaptation/learning process). The research presented in this thesis investigates adaptive power reduction techniques that attempt to exploit the adaptability of neural networks in order to reduce power consumption. Three separate approaches are proposed: adaptation of network size, adaptation of network weights, and adaptation of calculation precision. Initial case studies exhibit promising results with significant power reduction.
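    The third of the proposed approaches, adaptation of calculation precision, can be illustrated with a minimal sketch. The function names, the error tolerance, and the toy linear layer below are hypothetical, not taken from the thesis; the sketch simply lowers the bit width of a layer's weights until the output error exceeds a tolerance, which is the general idea behind precision adaptation.

```python
import numpy as np

def quantize_weights(w, bits):
    """Uniform symmetric quantization of a weight matrix to `bits` bits."""
    scale = np.max(np.abs(w))
    levels = 2 ** (bits - 1) - 1
    return np.round(w / scale * levels) / levels * scale

def adapt_precision(w, x, y_ref, max_error=0.05, min_bits=2):
    """Reduce weight precision until the layer's output error exceeds max_error."""
    for bits in range(16, min_bits - 1, -1):
        err = np.max(np.abs(x @ quantize_weights(w, bits) - y_ref))
        if err > max_error:
            return bits + 1  # the last bit width that stayed within tolerance
    return min_bits

# Toy linear layer: find how few bits suffice for its weights.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 4))
x = rng.normal(size=(16, 8))
bits_needed = adapt_precision(w, x, x @ w)
```

    A hardware implementation would fold such a search into the learning process itself, so that precision (and hence arithmetic power) shrinks wherever the network can tolerate it.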

    Characterization of Retinal Ganglion Cell Responses to Electrical Stimulation Using White Noise

    Retinitis pigmentosa and age-related macular degeneration are two leading causes of degenerative blindness. While there is still no definitive course of treatment for either of these diseases, many different treatment strategies are currently being explored around the world. Of these various strategies, one of the most successful has been retinal implants: microelectrode or photodiode arrays implanted in the eye of a patient to electrically stimulate the degenerating retina. Clinical trials have shown that many patients implanted with such a device are able to regain a certain degree of functional vision. However, while the results of these ongoing clinical trials have been promising, many technical challenges still need to be overcome. One of the biggest challenges facing present implants is the inability to preferentially stimulate different retinal pathways. Because retinal implants use large-amplitude current or voltage pulses, they indiscriminately activate multiple classes of retinal ganglion cells (RGCs), which reduces the restored visual acuity. To tackle this issue, we explored a novel stimulus paradigm in which we present to the retina a stream of smaller-amplitude subthreshold voltage pulses. By then correlating the retinal spikes to the stimuli preceding them, we calculate temporal input filters for various classes of RGCs, using a technique called spike-triggered averaging (STA). In doing this, we found that ON and OFF RGCs have electrical filters which are very distinct from each other. This finding creates the possibility of selectively activating the retina through the use of STA-based waveforms. Finally, using statistical models, we verify how well these temporal filters can predict RGC responses to novel electrical stimuli.
In a broad sense, our work represents the successful application of systems engineering tools to retinal prosthetics, in an attempt to answer one of the field’s most difficult questions, namely selective stimulation of the retina.
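    The spike-triggered averaging described above can be sketched in a few lines. This is a generic discrete-time illustration, not the authors' code: the stimulus is assumed to be binned into a 1-D array, spikes into a same-length binary array, and the toy "cell" that generates spikes here is hypothetical.

```python
import numpy as np

def spike_triggered_average(stimulus, spikes, window=30):
    """Average the stimulus segments immediately preceding each spike.

    stimulus : 1-D array of stimulus values per time bin
    spikes   : 1-D binary array (1 = spike in that bin), same length
    window   : number of bins preceding the spike to average over
    """
    spike_bins = np.flatnonzero(spikes)
    spike_bins = spike_bins[spike_bins >= window]  # keep spikes with a full window
    if spike_bins.size == 0:
        return np.zeros(window)
    segments = np.stack([stimulus[t - window:t] for t in spike_bins])
    return segments.mean(axis=0)

# Toy demonstration with a hypothetical cell that spikes when the
# recent stimulus average is high.
rng = np.random.default_rng(1)
stim = rng.normal(size=5000)
drive = np.convolve(stim, np.ones(5) / 5, mode="same")
spikes = (drive > 1.0).astype(int)
sta = spike_triggered_average(stim, spikes, window=30)
```

    The recovered filter is elevated in the bins just before the spike, mirroring how the thesis extracts distinct temporal input filters for ON and OFF RGCs from subthreshold pulse streams.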

    Neuromorphic Engineering Editors' Pick 2021

    This collection showcases well-received spontaneous articles from the past couple of years, which have been specially handpicked by our Chief Editors, Profs. André van Schaik and Bernabé Linares-Barranco. The work presented here highlights the broad diversity of research performed across the section and aims to put a spotlight on the main areas of interest. All research presented here displays strong advances in theory, experiment, and methodology with applications to compelling problems. This collection aims to further support Frontiers’ strong community by recognizing highly deserving authors.

    Information processing in dissociated neuronal cultures of rat hippocampal neurons

    One of the major aims of Systems Neuroscience is to understand how the nervous system transforms sensory inputs into appropriate motor reactions. In very simple cases sensory neurons are immediately coupled to motoneurons and the entire transformation becomes a simple reflex, in which a noxious signal is immediately transformed into an escape reaction. However, in the most complex behaviours, the nervous system seems to analyse the sensory inputs in detail and to perform some kind of information processing (IP). IP takes place at many different levels of the nervous system: from the peripheral nervous system, where sensory stimuli are detected and converted into electrical pulses, to the central nervous system, where features of sensory stimuli are extracted, perception takes place and actions and motions are coordinated. Moreover, understanding the basic computational properties of the nervous system, besides being at the core of Neuroscience, also arouses great interest in the fields of Neuroengineering and Computer Science. In fact, being able to decode neural activity can lead to the development of a new generation of neuroprosthetic devices aimed, for example, at restoring motor functions in severely paralysed patients (Chapin, 2004). On the other side, the development of Artificial Neural Networks (ANNs) (Marr, 1982; Rumelhart & McClelland, 1988; Herz et al., 1981; Hopfield, 1982; Minsky & Papert, 1988) has already proved that the study of biological neural networks may lead to the development and design of new computing algorithms and devices. All nervous systems are based on the same elements, the neurons: computing devices that, compared to silicon components, are much slower and much less reliable. How are nervous systems of all living species able to survive being based on slow and poorly reliable components? This obvious and naïve question is equivalent to characterizing IP in a more quantitative way.
In order to study IP and to capture the basic computational properties of the nervous system, two major questions seem to arise. Firstly, what is the fundamental unit of information processing: single neurons or neuronal ensembles? Secondly, how is information encoded in the neuronal firing? These questions - in my view - summarize the problem of the neural code. The subject of my PhD research was to study information processing in dissociated neuronal cultures of rat hippocampal neurons. These cultures, with random connections, provide a more general view of neuronal networks and assemblies, not depending on the circuitry of a neuronal network in vivo, and allow a more detailed and careful experimental investigation. In order to record the activity of a large ensemble of neurons, these neurons were cultured on multielectrode arrays (MEAs) and multi-site stimulation was used to activate different neurons and pathways of the network. In this way, it was possible to vary the properties of the stimulus applied under a controlled extracellular environment. Given this experimental system, my investigation had two major approaches. On the one hand, I focused my studies on the problem of the neural code, where I studied in particular information processing at the single-neuron level and at an ensemble level, investigating also putative neural coding mechanisms. On the other hand, I tried to explore the possibility of using biological neurons as computing elements in a task commonly solved by conventional silicon devices: image processing and pattern recognition. The results reported in the first two chapters of my thesis have been published in two separate articles. The third chapter of my thesis represents an article in preparation.

    Neuronal cell signal analysis: spike detection algorithm development for microelectrode array recordings

    Neural signal acquisition and processing techniques are of growing interest across a wide range of scientific and commercial areas. Microelectrode array (MEA) technology makes it possible to access and record the electrical activity of neural cells. In this work, human pluripotent stem cell (hPSC) -derived neuronal populations were grown on MEA plates. The activity of the cells was recorded, and modern signal processing methods for neural spike detection were investigated. A list of approaches was selected for detailed investigation and the most efficient one was chosen as the new technique for permanent use in the research group. The laboratory work involved cell culture plating, regular medium changes, spontaneous activity recordings and pharmacological manipulations. The data acquired from the pharmacological experiments were used to compare the old and new spike detection algorithms in terms of the number of detected events. The Stationary Wavelet Transform-based Teager Energy Operator (SWTTEO) shows prominent performance in tests with synthetic data. Using the proposed algorithm in conjunction with common amplitude-based thresholding makes it possible to lower the threshold and to detect more spikes without an excessive number of false positives. This mode is applicable to real cell data. The detection method was considered superior and was adopted for the processing of all neural data in the research group, which includes signals acquired from neuronal populations derived from human embryonic and induced pluripotent stem cells (hESCs and iPSCs) as well as rat cells.
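    As a simplified illustration of the energy-operator idea behind SWTTEO (the Teager stage only, without the stationary-wavelet front end), the sketch below thresholds the Teager energy of a synthetic trace using a robust median-based noise estimate. The constants, the refractory handling, and the toy spike waveform are hypothetical, not taken from the thesis.

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def detect_spikes(signal, c=20.0, refractory=10):
    """Threshold the Teager energy at c times a robust noise-energy estimate."""
    psi = teager_energy(signal)
    sigma = np.median(np.abs(signal)) / 0.6745  # median-based noise std estimate
    above = np.flatnonzero(psi > c * sigma ** 2)
    events = []
    for idx in above:
        if not events or idx - events[-1] > refractory:  # enforce a refractory gap
            events.append(idx)
    return np.array(events, dtype=int)

# Synthetic trace: Gaussian noise plus three brief high-amplitude transients.
rng = np.random.default_rng(2)
sig = 0.1 * rng.normal(size=2000)
for t in (400, 900, 1500):
    sig[t:t + 5] += np.array([0.4, 1.0, -0.8, 0.3, -0.1])
events = detect_spikes(sig)
```

    The energy operator emphasizes brief high-frequency transients relative to background noise, which is what allows the threshold to be lowered without an excessive number of false positives; the full SWTTEO applies the same operator to stationary wavelet subbands first.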

    Dynamical principles in neuroscience

    Dynamical modeling of neural systems and brain functions has a history of success over the last half century. This includes, for example, the explanation and prediction of some features of neural rhythmic behaviors. Many interesting dynamical models of learning and memory based on physiological experiments have been suggested over the last two decades. Dynamical models even of consciousness now exist. Usually these models and results are based on traditional approaches and paradigms of nonlinear dynamics including dynamical chaos. Neural systems are, however, an unusual subject for nonlinear dynamics for several reasons: (i) Even the simplest neural network, with only a few neurons and synaptic connections, has an enormous number of variables and control parameters. These make neural systems adaptive and flexible, and are critical to their biological function. (ii) In contrast to traditional physical systems described by well-known basic principles, first principles governing the dynamics of neural systems are unknown. (iii) Many different neural systems exhibit similar dynamics despite having different architectures and different levels of complexity. (iv) The network architecture and connection strengths are usually not known in detail and therefore the dynamical analysis must, in some sense, be probabilistic. (v) Since nervous systems are able to organize behavior based on sensory inputs, the dynamical modeling of these systems has to explain the transformation of temporal information into combinatorial or combinatorial-temporal codes, and vice versa, for memory and recognition. In this review these problems are discussed in the context of addressing the stimulating questions: What can neuroscience learn from nonlinear dynamics, and what can nonlinear dynamics learn from neuroscience? This work was supported by NSF Grant No. NSF/EIA-0130708, and Grant No. PHY 0414174; NIH Grant No. 1 R01 NS50945 and Grant No. NS40110; MEC BFI2003-07276, and Fundación BBVA.
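    A concrete instance of the kind of dynamical model the review surveys is the FitzHugh-Nagumo system, a standard two-variable caricature of neural excitability (a generic textbook model, not one proposed in this review). With a constant drive inside the oscillatory regime, the voltage-like variable settles onto a limit cycle and spikes rhythmically:

```python
import numpy as np

def fitzhugh_nagumo(I=0.5, a=0.7, b=0.8, tau=12.5, dt=0.01, steps=100_000):
    """Forward-Euler integration of the FitzHugh-Nagumo equations:

       dv/dt = v - v^3/3 - w + I
       dw/dt = (v + a - b*w) / tau
    """
    v, w = -1.0, 1.0
    vs = np.empty(steps)
    for i in range(steps):
        v, w = (v + dt * (v - v ** 3 / 3 - w + I),
                w + dt * (v + a - b * w) / tau)
        vs[i] = v
    return vs

v_trace = fitzhugh_nagumo()  # sustained relaxation oscillations at I = 0.5
```

    Even this minimal model illustrates points (i) and (iii) above: a handful of parameters (I, a, b, tau) controls qualitatively different regimes, and the same rhythmic dynamics appears in far more detailed conductance-based models.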