    On-Sensor Data Filtering using Neuromorphic Computing for High Energy Physics Experiments

    This work describes the investigation of neuromorphic computing-based spiking neural network (SNN) models used to filter data from sensor electronics in high energy physics experiments conducted at the High Luminosity Large Hadron Collider. We present our approach for developing a compact neuromorphic model that filters out the sensor data based on the particle's transverse momentum with the goal of reducing the amount of data being sent to the downstream electronics. The incoming charge waveforms are converted to streams of binary-valued events, which are then processed by the SNN. We present our insights on the various system design choices - from data encoding to optimal hyperparameters of the training algorithm - for an accurate and compact SNN optimized for hardware deployment. Our results show that an SNN trained with an evolutionary algorithm and an optimized set of hyperparameters obtains a signal efficiency of about 91% with nearly half as many parameters as a deep neural network. Comment: Manuscript accepted at ICONS'2
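
    To make the data-encoding step concrete, the sketch below shows one simple way a sampled charge waveform could be turned into a binary-valued event stream: a level-crossing scheme that emits an event whenever the signal rises by a fixed threshold. This is an illustrative assumption, not the encoder used in the paper; the function name, threshold, and waveform values are hypothetical.

    // Minimal sketch (not the paper's encoder): convert a sampled charge
    // waveform into a stream of binary-valued events using a hypothetical
    // level-crossing scheme. Threshold and waveform values are illustrative.
    #include <cstdio>
    #include <vector>

    // Emit a 1 whenever the waveform rises past `threshold` above the last
    // level at which an event was emitted; emit 0 otherwise.
    std::vector<int> encode_events(const std::vector<double>& waveform, double threshold) {
        std::vector<int> events;
        events.reserve(waveform.size());
        double last_level = 0.0;
        for (double sample : waveform) {
            if (sample - last_level >= threshold) {
                events.push_back(1);
                last_level = sample;   // latch the new reference level
            } else {
                events.push_back(0);
            }
        }
        return events;
    }

    int main() {
        // Toy charge pulse: fast rise followed by a slow decay (arbitrary units).
        std::vector<double> pulse = {0.0, 0.2, 0.9, 1.8, 2.4, 2.1, 1.6, 1.0, 0.5, 0.1};
        for (int e : encode_events(pulse, 0.5)) std::printf("%d ", e);
        std::printf("\n");   // prints: 0 0 1 1 1 0 0 0 0 0
        return 0;
    }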

    Concepts and Paradigms for Neuromorphic Programming

    The value of neuromorphic computers depends crucially on our ability to program them for relevant tasks. Currently, neuromorphic computers are mostly limited to machine learning methods adapted from deep learning. However, neuromorphic computers have potential far beyond deep learning if we can make full use of their computational properties to harness their full power. Neuromorphic programming will necessarily be different from conventional programming, requiring a paradigm shift in how we think about programming in general. The contributions of this paper are 1) a conceptual analysis of what "programming" means in the context of neuromorphic computers and 2) an exploration of existing programming paradigms that are promising yet overlooked in neuromorphic computing. The goal is to expand the horizon of neuromorphic programming methods, thereby allowing researchers to move beyond the shackles of current methods and explore novel directions.

    The Synthesis of Memristive Neuromorphic Circuits

    As Moore's Law has come to a halt, it has become necessary to explore alternative forms of computation that are not limited in the same ways as traditional CMOS technologies and the von Neumann architecture. Neuromorphic computing, computing inspired by the human brain with neurons and synapses, has been proposed as one of these alternatives. Memristors, non-volatile devices with adjustable resistances, have emerged as candidates for implementing neuromorphic computing systems because of their low power and low area overhead. This work presents a C++ simulator for an implementation of a memristive neuromorphic circuit. The simulator is used within a software framework to design and evaluate these circuits. The first chapter provides a background on neuromorphic computing and memristors, explores other neuromorphic circuits and their programming models, and finally presents the software framework for which the simulator was developed. The second chapter presents the C++ simulator and the genetic operators used in the generation of the memristive neuromorphic networks. Next, the third chapter presents a verification of the accuracy of the simulator and provides some analysis of designs. These analyses focus on variation, the Axon-Hillock neuron model, limited programming resolutions, and online learning mechanisms. Finally, the fourth chapter discusses future considerations. Thus, this thesis presents the C++ simulator as a tool to generate memristive neuromorphic networks. Additionally, it shows how the simulator can be used to understand how the underlying hardware impacts the application-level performance of the network.
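
    One of the analyses mentioned above concerns limited programming resolutions. A minimal sketch of that idea, assuming illustrative conductance bounds and bit width (this is not the thesis simulator), is to quantize a continuous synaptic weight onto the nearest of the memristor's programmable conductance levels:

    // Minimal sketch (not the thesis simulator): map a continuous synaptic
    // weight onto a memristor whose conductance can only be programmed at a
    // limited resolution. The conductance bounds and bit width are assumed,
    // illustrative values.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    struct Memristor {
        double g_min = 1e-6;   // minimum programmable conductance (S), assumed
        double g_max = 1e-4;   // maximum programmable conductance (S), assumed
        int    bits  = 4;      // programming resolution in bits, assumed

        // Quantize a normalized weight in [0, 1] to the nearest programmable level.
        double program(double weight) const {
            double clamped = std::clamp(weight, 0.0, 1.0);
            int levels = (1 << bits) - 1;                   // 15 steps for 4 bits
            double level = std::round(clamped * levels) / levels;
            return g_min + level * (g_max - g_min);
        }
    };

    int main() {
        Memristor m;
        for (double w : {0.0, 0.1, 0.37, 0.5, 0.93, 1.0})
            std::printf("weight %.2f -> conductance %.3e S\n", w, m.program(w));
        return 0;
    }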

    An efficient automated parameter tuning framework for spiking neural networks

    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
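
    As an illustration of the tuning-loop structure (not the framework's actual code), the sketch below evolves a population of candidate parameter vectors with truncation selection and Gaussian mutation. The fitness function is a stand-in; in the framework described above, each candidate parameter set would configure and run a GPU-accelerated SNN simulation and score its responses.

    // Minimal sketch (not the framework itself): an evolutionary parameter-tuning
    // loop with truncation selection and Gaussian mutation. The fitness function
    // is a placeholder for running and scoring an SNN simulation per candidate.
    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    struct Candidate {
        std::vector<double> params;   // the open SNN parameters being tuned
        double fitness = 0.0;
    };

    // Placeholder fitness with a toy optimum at 0.5 for every parameter.
    double evaluate(const std::vector<double>& p) {
        double s = 0.0;
        for (double x : p) s -= (x - 0.5) * (x - 0.5);
        return s;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        std::normal_distribution<double> noise(0.0, 0.05);

        const int pop_size = 20, n_params = 8, generations = 50;
        std::vector<Candidate> pop(pop_size);
        for (auto& c : pop) {
            c.params.resize(n_params);
            for (double& x : c.params) x = uni(rng);
        }

        for (int gen = 0; gen < generations; ++gen) {
            // In the real framework this evaluation step is the expensive,
            // GPU-parallelizable part: one SNN simulation per candidate.
            for (auto& c : pop) c.fitness = evaluate(c.params);
            std::sort(pop.begin(), pop.end(),
                      [](const Candidate& a, const Candidate& b) { return a.fitness > b.fitness; });
            // Keep the top half; refill the bottom half with mutated survivors.
            for (int i = pop_size / 2; i < pop_size; ++i) {
                pop[i].params = pop[i - pop_size / 2].params;
                for (double& x : pop[i].params) x = std::clamp(x + noise(rng), 0.0, 1.0);
            }
        }
        std::printf("best fitness after %d generations: %f\n", generations, pop[0].fitness);
        return 0;
    }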

    The Islands Project for Managing Populations in Genetic Training of Spiking Neural Networks

    The TENNLab software framework enables researchers to explore spiking neuroprocessors, neuromorphic applications, and how they are trained. The centerpiece of training in TENNLab has been a genetic algorithm called Evolutionary Optimization for Neuromorphic Systems (EONS). EONS optimizes a single population of spiking neural networks, and heretofore, methods to train with multiple populations have been ad hoc, typically consisting of shell scripts that execute multiple independent EONS jobs, whose results are combined and analyzed in another ad hoc fashion. The Islands project seeks to manage and manipulate multiple EONS populations in a controlled way. With Islands, one may spawn off independent EONS populations, each of which is an “island.” One may define characteristics of a “stagnated” island, where further optimization is unlikely to improve the fitness of the population on the island. The Islands software then allows one to create new islands by combining stagnated islands, or to migrate populations from one island to others, all in an attempt to increase diversity among the populations and thereby improve their fitness. This thesis describes the software structure of Islands, its interface, and the functionalities that it implements. We then perform a case study with three neuromorphic control applications that demonstrates the wide variety of features of Islands.
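
    The sketch below illustrates the island-model bookkeeping described above: detecting a stagnated island and migrating individuals between islands to inject diversity. It is an illustrative sketch, not the Islands software; the Individual payload, fitness values, and patience threshold stand in for whatever networks and criteria the underlying optimizer (e.g., EONS) would actually use.

    // Minimal sketch (not the Islands software): the core bookkeeping needed to
    // detect a stagnated island and migrate individuals between islands. The
    // Individual payload and fitness values stand in for the spiking neural
    // networks an optimizer such as EONS would actually evolve.
    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    struct Individual { double fitness = 0.0; };

    struct Island {
        std::vector<Individual> population;
        double best_seen = -1e300;
        int generations_without_improvement = 0;

        // Record a generation's best fitness and track how long it has stalled.
        void record_generation(double best_fitness) {
            if (best_fitness > best_seen) {
                best_seen = best_fitness;
                generations_without_improvement = 0;
            } else {
                ++generations_without_improvement;
            }
        }

        bool stagnated(int patience) const {
            return generations_without_improvement >= patience;
        }
    };

    // Move up to `count` individuals from `src` to `dst` to inject diversity.
    void migrate(Island& src, Island& dst, std::size_t count) {
        count = std::min(count, src.population.size());
        dst.population.insert(dst.population.end(),
                              src.population.begin(), src.population.begin() + count);
        src.population.erase(src.population.begin(), src.population.begin() + count);
    }

    int main() {
        Island a, b;
        a.population.assign(10, Individual{0.8});
        b.population.assign(10, Individual{0.3});

        // First generation establishes the baseline; the next five show no progress.
        for (int g = 0; g < 6; ++g) b.record_generation(0.3);

        if (b.stagnated(/*patience=*/5)) migrate(a, b, 3);
        std::printf("island a: %zu individuals, island b: %zu individuals\n",
                    a.population.size(), b.population.size());
        return 0;
    }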