    Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training

    Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs) due to their potential for energy efficiency and their ability to model spiking behavior in biological systems. However, training SNNs remains challenging, and new techniques are needed to improve their performance. In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANNs to SNNs. We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs, enabling faster convergence and increasing information flow through the network. Our results show an average accuracy increase of +8% on the CIFAR-10-DVS and DVS128 Gesture datasets when adapting multiple state-of-the-art models. Comment: IPDPSW Scalable Deep Learning 202
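    To make the idea concrete: a skip connection in an SNN routes spikes (or currents) around one or more spiking layers, much as a residual connection does in an ANN. Below is a minimal NumPy sketch of two leaky integrate-and-fire (LIF) layers with one additive skip path; all sizes, weights, and constants are illustrative and are not taken from the paper, whose contribution is the optimization of the position, type, and number of such connections.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_step(v, i_in, beta=0.9, v_th=1.0):
    """One discrete-time leaky integrate-and-fire update.
    v: membrane potentials, i_in: input currents, beta: leak factor."""
    v = beta * v + i_in
    s = (v >= v_th).astype(v.dtype)  # binary spike outputs
    v = v * (1.0 - s)                # hard reset where a spike occurred
    return v, s

# Illustrative layer sizes and random weights (not from the paper)
n_in, n_hid, n_out, T = 16, 32, 10, 100
w1 = rng.normal(0.0, 0.3, (n_in, n_hid))
w2 = rng.normal(0.0, 0.3, (n_hid, n_out))
w_skip = rng.normal(0.0, 0.3, (n_in, n_out))  # skip path: input -> output

v1, v2 = np.zeros(n_hid), np.zeros(n_out)
for t in range(T):
    x = (rng.random(n_in) < 0.2).astype(float)  # Bernoulli input spikes
    v1, s1 = lif_step(v1, x @ w1)
    # Additive skip connection: the output layer integrates both the
    # hidden-layer spikes and a direct projection of the input spikes.
    v2, s2 = lif_step(v2, s1 @ w2 + x @ w_skip)
```

    In the paper's setting, where such a skip path attaches, whether it adds or concatenates, and how many such paths exist would be treated as hyperparameters to optimize.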

    NeuroBench: Advancing Neuromorphic Computing through Collaborative, Fair and Representative Benchmarking

    The field of neuromorphic computing holds great promise in terms of advancing computing efficiency and capabilities by following brain-inspired principles. However, the rich diversity of techniques employed in neuromorphic research has resulted in a lack of clear standards for benchmarking, hindering effective evaluation of the advantages and strengths of neuromorphic methods compared to traditional deep-learning-based methods. This paper presents a collaborative effort, bringing together members from academia and industry, to define benchmarks for neuromorphic computing: NeuroBench. The goal of NeuroBench is to be a collaborative, fair, and representative benchmark suite developed by the community, for the community. In this paper, we discuss the challenges associated with benchmarking neuromorphic solutions and outline the key features of NeuroBench. We believe that NeuroBench will be a significant step towards defining standards that can unify the goals of neuromorphic computing and drive its technological progress. Please visit neurobench.ai for the latest updates on the benchmark tasks and metrics.

    NeuroBench: A Framework for Benchmarking Neuromorphic Computing Algorithms and Systems

    Neuromorphic computing shows promise for advancing the computing efficiency and capabilities of AI applications using brain-inspired principles. However, the neuromorphic research field currently lacks standardized benchmarks, making it difficult to accurately measure technological advancements, compare performance with conventional methods, and identify promising future research directions. Prior neuromorphic computing benchmark efforts have not seen widespread adoption due to a lack of inclusive, actionable, and iterative benchmark design and guidelines. To address these shortcomings, we present NeuroBench: a benchmark framework for neuromorphic computing algorithms and systems. NeuroBench is a collaboratively designed effort from an open community of nearly 100 co-authors across over 50 institutions in industry and academia, aiming to provide a representative structure for standardizing the evaluation of neuromorphic approaches. The NeuroBench framework introduces a common set of tools and a systematic methodology for inclusive benchmark measurement, delivering an objective reference framework for quantifying neuromorphic approaches in both hardware-independent (algorithm track) and hardware-dependent (system track) settings. In this article, we present initial performance baselines across various model architectures on the algorithm track and outline the system track benchmark tasks and guidelines. NeuroBench is intended to continually expand its benchmarks and features to foster and track the progress made by the research community.
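    The authoritative tools and metric definitions live in the NeuroBench harness (see neurobench.ai). Purely as an illustration of the kind of hardware-independent quantities an algorithm-track benchmark can measure for spiking models, here is a sketch of two common complexity metrics; the function names and the operation-counting convention are our assumptions, not the framework's API.

```python
import numpy as np

def activation_sparsity(spikes):
    """Fraction of zero entries in a recorded spike/activation tensor.
    spikes: array of shape (timesteps, neurons)."""
    return 1.0 - np.count_nonzero(spikes) / spikes.size

def synaptic_ops(spikes, fan_out):
    """Rough synaptic-operation count: every emitted spike triggers one
    operation per outgoing connection of the firing neuron (a uniform
    fan-out is assumed here for simplicity)."""
    return int(spikes.sum() * fan_out)

# 100 timesteps, 256 neurons, ~5% firing probability per step
spikes = (np.random.default_rng(0).random((100, 256)) < 0.05).astype(float)
print(activation_sparsity(spikes))       # close to 0.95
print(synaptic_ops(spikes, fan_out=128))
```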

    Best Poster Content Prize

    Prize for the best poster content, valued at €150. The poster was presented at the MatNeC 2022 conference.

    A brain inspired sequence learning algorithm and foundations of a memristive hardware implementation

    The brain uses intricate biological mechanisms and principles to solve a variety of tasks. These principles endow systems with self-learning capabilities, efficient energy usage, and high storage capacity. A core concept at the heart of brain computation is sequence prediction and replay. This form of computation is essential for almost all our daily tasks, such as movement generation, perception, and language. Understanding how the brain performs such a computation advances neuroscience and paves the way for new brain-inspired technological applications.

    In the first part of this thesis, we propose a sequence learning model that explains how biological networks learn to predict upcoming elements, signal non-anticipated events, and recall sequences in response to a cue signal. The model accounts for anatomical and electrophysiological properties of cortical neuronal circuits, and learns complex sequences in an unsupervised manner by means of known biological plasticity and homeostatic control mechanisms. After learning, it self-organizes into a configuration characterized by a high degree of sparsity in connectivity and activity, allowing for both high storage capacity and efficient energy usage.

    In the second part, we extend the sequence learning model such that it permits probabilistic sequential memory recall in response to ambiguous cues. In the absence of noise, the model deterministically recalls the sequence shown most frequently during training. We investigate how different forms of noise give rise to more exploratory behavior. We show that uncorrelated noise averages out in population-based encoding, leading to non-exploratory dynamics. Locally coherent noise, in the form of random stimulus locking to spatiotemporal oscillations, addresses this issue. Our results show that, depending on the amplitude and frequency of the oscillation, the network can recall learned sequences according to different strategies: either always replay the most frequent sequence, or replay sequences according to their occurrence probability during training. The study contributes to an understanding of the neuronal mechanisms underlying different decision strategies in the face of ambiguity, and highlights the role of coherent network activity during sequential memory recall.

    Finally, we investigate the feasibility of implementing the sequence learning model on dedicated hardware mimicking brain properties. Here, we focus on a type of hardware where synapses are emulated by memristive devices. As a first step in this direction, we replace the synapse dynamics of the original model with dynamics describing the phenomenological behavior of memristive elements, and demonstrate resilience with respect to different device characteristics. We further describe how the sequence learning model can be adapted at the algorithmic level to foster an implementation in a full electronic circuit centered around a memristive crossbar array.

    Overall, this thesis sheds light on the key mechanisms underlying sequence learning, prediction, and replay in biological networks, and demonstrates the feasibility of implementing this type of computation on neuromorphic hardware.
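    The abstract does not reproduce the device equations. As a generic illustration of the kind of phenomenological memristor dynamics that can replace an abstract synapse model, here is a minimal soft-bounds conductance update in Python; the class name, parameters, and update rule are assumptions for illustration, not the model used in the thesis.

```python
class MemristiveSynapse:
    """Generic phenomenological memristor: programming pulses change the
    conductance g, with updates that saturate near g_min and g_max
    (soft bounds). All parameter values are illustrative."""

    def __init__(self, g_min=0.1, g_max=1.0, alpha=0.05, g0=0.5):
        self.g_min, self.g_max, self.alpha = g_min, g_max, alpha
        self.g = g0

    def pulse(self, polarity):
        """Apply one programming pulse: +1 potentiates, -1 depresses."""
        if polarity > 0:
            self.g += self.alpha * (self.g_max - self.g)
        else:
            self.g -= self.alpha * (self.g - self.g_min)
        return self.g

syn = MemristiveSynapse()
trace = [syn.pulse(+1) for _ in range(20)]  # saturates toward g_max
```

    A resilience study of the kind described above would then vary alpha and the bounds, or add cycle-to-cycle noise to the update, and check whether sequence learning still succeeds.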

    Learning and replaying spatiotemporal sequences: A replication study

    Learning and replaying spatiotemporal sequences are fundamental computations performed by the brain, and specifically the neocortex. These features are critical for a wide variety of cognitive functions, including sensory perception and the execution of motor and language skills. Although several computational models demonstrate this capability, many are either hard to reconcile with biological findings or have limited functionality. To address this gap, a recent study proposed a biologically plausible model based on a spiking recurrent neural network supplemented with read-out neurons. After learning, the recurrent network develops precise switching dynamics by successively activating and deactivating small groups of neurons. The read-out neurons are trained to respond to particular groups and can thereby reproduce the learned sequence. For the model to serve as the basis for further research, it is important to determine its replicability. In this Brief Report, we give a detailed description of the model and identify missing details, inconsistencies, or errors in or between the original paper and its reference implementation. We re-implement the full model in the neural simulator NEST, in conjunction with the NESTML modeling language, and confirm the main findings of the original work.
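    For readers unfamiliar with the tooling: NEST is scripted from Python, and NESTML compiles custom neuron and synapse models into loadable NEST modules. The sketch below shows only the generic structure of a recurrent network with read-out neurons in the NEST 3 Python API; it uses the built-in iaf_psc_exp neuron and arbitrary parameters rather than the replicated model's NESTML definitions, so it is a structural illustration, not the replicated model.

```python
import nest  # NEST simulator, https://www.nest-simulator.org

nest.ResetKernel()

# Recurrent population and read-out population (sizes illustrative)
recurrent = nest.Create("iaf_psc_exp", 200)
readout = nest.Create("iaf_psc_exp", 10)
noise = nest.Create("poisson_generator", params={"rate": 8000.0})

# External drive, sparse recurrence, and recurrent-to-readout projections
nest.Connect(noise, recurrent, syn_spec={"weight": 20.0, "delay": 1.0})
nest.Connect(recurrent, recurrent,
             conn_spec={"rule": "fixed_indegree", "indegree": 20},
             syn_spec={"weight": 5.0, "delay": 1.5})
nest.Connect(recurrent, readout,
             conn_spec={"rule": "fixed_indegree", "indegree": 50},
             syn_spec={"weight": 10.0, "delay": 1.0})

rec = nest.Create("spike_recorder")
nest.Connect(readout, rec)

nest.Simulate(500.0)  # simulate 500 ms
print(rec.get("events")["times"])  # spike times of the read-out neurons
```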