
    Unveiling the intrinsic dynamics of biological and artificial neural networks: from criticality to optimal representations

    Deciphering the underpinnings of the dynamical processes leading to information transmission, processing, and storage in the brain is a crucial challenge in neuroscience. An inspiring but speculative theoretical idea is that such dynamics should operate at the brink of a phase transition, i.e., at the edge between different collective phases, to entail a rich dynamical repertoire and optimize functional capabilities. In recent years, research guided by the advent of high-throughput data and new theoretical developments has contributed to a quantitative validation of this hypothesis. Here we review recent advances in this field, stressing our contributions. In particular, we use data from thousands of individually recorded neurons in the mouse brain together with tools such as phenomenological renormalization-group analysis, the theory of disordered systems, and random matrix theory. These combined approaches provide novel evidence of quasi-universal scaling and near-critical behavior emerging in different brain regions. Moreover, we design artificial neural networks under the reservoir-computing paradigm and show that their internal dynamical states become near-critical when we tune the networks for optimal performance. These results not only open new perspectives for understanding the ultimate principles guiding brain function but also pave the way towards the development of brain-inspired, neuromorphic computation.
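    The reservoir-computing result lends itself to a small illustration. Below is a minimal sketch (my own, not the paper's code) of the standard knob used to place an echo state network near criticality: rescaling the recurrent weight matrix so its spectral radius approaches 1, the conventional edge-of-chaos proxy.

        import numpy as np

        rng = np.random.default_rng(0)

        def make_reservoir(n_neurons, spectral_radius):
            # Random recurrent weights, rescaled so the largest |eigenvalue|
            # equals the requested spectral radius.
            W = rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)
            return W * spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))

        def run_reservoir(W, inputs, w_in=0.5):
            # Drive the reservoir and collect its state trajectory.
            x = np.zeros(W.shape[0])
            states = []
            for u in inputs:
                x = np.tanh(W @ x + w_in * u)
                states.append(x.copy())
            return np.array(states)

        # Sweep the spectral radius; task performance in reservoir computing
        # typically peaks as it approaches 1, the near-critical regime.
        for rho in (0.5, 0.9, 1.0, 1.1):
            states = run_reservoir(make_reservoir(200, rho), rng.standard_normal(500))
            print(f"rho={rho}: mean |activation| = {np.abs(states[100:]).mean():.3f}")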

    Multiplex visibility graphs to investigate recurrent neural network dynamics

    Source at https://doi.org/10.1038/srep44037. A recurrent neural network (RNN) is a universal approximator of dynamical systems, whose performance often depends on sensitive hyperparameters. Tuning them properly may be difficult and is typically based on a trial-and-error approach. In this work, we adopt a graph-based framework to interpret and characterize the internal dynamics of a class of RNNs called echo state networks (ESNs). We design principled unsupervised methods to derive hyperparameter configurations yielding maximal ESN performance, expressed in terms of prediction error and memory capacity. In particular, we propose to model the time series generated by each neuron's activations with a horizontal visibility graph, whose topological properties have been shown to be related to the underlying system dynamics. Subsequently, the horizontal visibility graphs associated with all neurons become layers of a larger structure called a multiplex. We show that topological properties of such a multiplex reflect important features of ESN dynamics that can be used to guide the tuning of its hyperparameters. Results obtained on several benchmarks and a real-world dataset of telephone call data records show the effectiveness of the proposed methods.
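    To make the construction concrete, here is a minimal sketch (my own, not the authors' implementation) of the horizontal visibility graph mapping for a single neuron's activation series; one such graph per neuron would then form the layers of the multiplex.

        import numpy as np

        def horizontal_visibility_graph(x):
            # Nodes are time indices; i and j are linked iff every sample
            # strictly between them lies below min(x[i], x[j]).
            edges = []
            for i in range(len(x) - 1):
                for j in range(i + 1, len(x)):
                    if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                        edges.append((i, j))
                    if x[j] >= x[i]:
                        break  # a sample at least as tall as x[i] blocks all later nodes
            return edges

        # One multiplex layer from a noisy periodic activation trace.
        rng = np.random.default_rng(1)
        trace = np.sin(np.linspace(0, 8 * np.pi, 200)) + 0.1 * rng.standard_normal(200)
        print(len(horizontal_visibility_graph(trace)), "edges")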

    Neuromorphic computing using wavelength-division multiplexing

    Optical neural networks (ONNs), or optical neuromorphic hardware accelerators, have the potential to dramatically enhance the computing power and energy efficiency of mainstream electronic processors, owing to their ultralarge bandwidths of up to tens of terahertz together with their analog architecture, which avoids the need for reading and writing data back and forth. Different multiplexing techniques have been employed to demonstrate ONNs, among which wavelength-division multiplexing (WDM) makes the fullest use of the unique advantages of optics in terms of broad bandwidth. Here, we review recent advances in WDM-based ONNs, focusing on methods that use integrated microcombs to implement ONNs. We present results for human image processing using an optical convolution accelerator operating at 11 tera-operations per second. The open challenges and limitations of ONNs that need to be addressed for future applications are also discussed. Comment: 13 pages, 8 figures, 160 references.
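    As a rough illustration of the WDM principle behind such convolution accelerators, the toy emulation below (a numerical sketch of the scheme, not the hardware) carries each kernel tap on its own wavelength channel, offsets each channel by one symbol, as dispersion does in the real device, and sums the channels, as a photodetector would, recovering an ordinary convolution.

        import numpy as np

        def wdm_convolve(signal, kernel):
            # One wavelength channel per kernel tap; the tap value plays the
            # role of the comb line's power.
            out = np.zeros(len(signal) + len(kernel) - 1)
            for ch, weight in enumerate(kernel):
                delayed = np.zeros_like(out)
                delayed[ch:ch + len(signal)] = weight * signal  # dispersive delay of ch symbols
                out += delayed  # incoherent summation at the photodetector
            return out

        sig = np.array([1.0, 2.0, 3.0, 4.0])
        ker = np.array([0.25, 0.5, 0.25])
        assert np.allclose(wdm_convolve(sig, ker), np.convolve(sig, ker))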

    A Physics-Informed, Deep Double Reservoir Network for Forecasting Boundary Layer Velocity

    When a fluid flows over a solid surface, it creates a thin boundary layer in which the flow velocity is influenced by the surface through viscosity and can transition from laminar to turbulent at sufficiently high speeds. Understanding and forecasting wind dynamics under these conditions is one of the most challenging scientific problems in fluid dynamics. It is therefore of high interest to formulate models able to capture the nonlinear spatio-temporal velocity structure and to produce forecasts in a computationally efficient manner. Traditional statistical approaches are limited in their ability to produce timely forecasts of complex, nonlinear spatio-temporal structures while incorporating the underlying flow physics. In this work, we propose a model that accurately forecasts boundary layer velocities with a deep double reservoir computing network, which captures the complex, nonlinear dynamics of the boundary layer while incorporating physical constraints via a penalty derived from a partial differential equation (PDE). Simulation studies on a one-dimensional viscous fluid demonstrate how the proposed model produces accurate forecasts while simultaneously accounting for energy loss. The application focuses on boundary layer data from a wind tunnel, with a PDE penalty derived from an appropriate simplification of the Navier-Stokes equations, showing forecasts more compliant with mass conservation.
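    A minimal sketch of the kind of penalty described, under my own assumptions rather than the paper's exact formulation: the residual of a 1D viscous Burgers equation, u_t + u u_x = nu u_xx (a stand-in for the paper's Navier-Stokes simplification), is evaluated by finite differences on the forecast and added to the data misfit.

        import numpy as np

        def burgers_residual(u, dt, dx, nu):
            # Finite-difference residual of u_t + u*u_x - nu*u_xx on a
            # (time, space) grid of forecast values.
            u_t  = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt
            u_x  = (u[:-1, 2:] - u[:-1, :-2]) / (2 * dx)
            u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx ** 2
            return u_t + u[:-1, 1:-1] * u_x - nu * u_xx

        def physics_informed_loss(u_pred, u_obs, dt, dx, nu, lam=0.1):
            # Data misfit plus a weighted mean-squared PDE residual; lam trades
            # off fidelity to observations against compliance with the physics.
            data_term = np.mean((u_pred - u_obs) ** 2)
            phys_term = np.mean(burgers_residual(u_pred, dt, dx, nu) ** 2)
            return data_term + lam * phys_term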

    Dynamical Systems in Spiking Neuromorphic Hardware

    Dynamical systems are universal computers. They can perceive stimuli, remember, learn from feedback, plan sequences of actions, and coordinate complex behavioural responses. The Neural Engineering Framework (NEF) provides a general recipe for formulating models of such systems as coupled sets of nonlinear differential equations and compiling them onto recurrently connected spiking neural networks, akin to a programming language for spiking models of computation. The Nengo software ecosystem supports the NEF and compiles such models onto neuromorphic hardware. In this thesis, we analyze the theory driving the success of the NEF and expose several core principles underpinning its correctness, scalability, completeness, robustness, and extensibility. We also derive novel theoretical extensions to the framework that enable it to far more effectively leverage a wide variety of dynamics in digital hardware, and to exploit the device-level physics in analog hardware. At the same time, we propose a novel set of spiking algorithms that recruit an optimal nonlinear encoding of time, which we call the Delay Network (DN). Backpropagation across stacked layers of DNs dramatically outperforms stacked Long Short-Term Memory (LSTM) networks, a state-of-the-art deep recurrent architecture, in both accuracy and training time on a continuous-time memory task and a chaotic time-series prediction benchmark. The basic component of this network is shown to function on state-of-the-art spiking neuromorphic hardware, including Braindrop and Loihi. This implementation approaches the energy efficiency of the human brain in the former case, and the precision of conventional computation in the latter.
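    The Delay Network's core is a small linear system whose state optimally encodes a sliding theta-second window of its input. The sketch below follows the published Legendre/Pade construction; the zero-order-hold discretization and the parameter values are my illustrative choices, not details from the thesis.

        import numpy as np
        from scipy.linalg import expm

        def delay_network_matrices(order, theta):
            # Continuous-time (A, B) from the Pade/Legendre construction:
            # theta * m'(t) = A m(t) + B u(t) encodes the last theta seconds of u.
            i = np.arange(order)[:, None]
            j = np.arange(order)[None, :]
            A = (2 * i + 1) * np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) / theta
            B = ((2 * np.arange(order) + 1) * (-1.0) ** np.arange(order) / theta)[:, None]
            return A, B

        def discretize(A, B, dt):
            # Zero-order-hold discretization via the standard augmented matrix
            # exponential: x[t+1] = Ad @ x[t] + Bd * u[t].
            n = A.shape[0]
            M = expm(np.block([[A * dt, B * dt], [np.zeros((1, n + 1))]]))
            return M[:n, :n], M[:n, n:]

        A, B = delay_network_matrices(order=6, theta=1.0)
        Ad, Bd = discretize(A, B, dt=0.01)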