    Connectivity Influences on Nonlinear Dynamics in Weakly-Synchronized Networks: Insights from Rössler Systems, Electronic Chaotic Oscillators, Model and Biological Neurons

    Natural and engineered networks, such as interconnected neurons, ecological and social networks, coupled oscillators, wireless terminals and power loads, are characterized by an appreciable heterogeneity in the local connectivity around each node. For instance, in both elementary structures such as stars and complex graphs having scale-free topology, a minority of elements are linked to the rest of the network disproportionately strongly. While the effect of the arrangement of structural connections on the emergent synchronization pattern has been studied extensively, considerably less is known about its influence on the temporal dynamics unfolding within each node. Here, we present a comprehensive investigation across diverse simulated and experimental systems, encompassing star and complex networks of Rössler systems, coupled hysteresis-based electronic oscillators, microcircuits of leaky integrate-and-fire model neurons, and recordings from in-vitro cultures of spontaneously growing neuronal networks. We systematically consider a range of dynamical measures, including the correlation dimension, nonlinear prediction error, permutation entropy, and other information-theoretical indices. The empirical evidence gathered reveals that in situations of weak synchronization, wherein one observes significantly differentiated dynamics rather than a collective behavior, denser connectivity tends to locally promote the emergence of stronger signatures of nonlinear dynamics. In deterministic systems, a transition to chaos and the generation of higher-dimensional signals were observed; however, when the coupling is stronger, this relationship may be lost or even inverted. In systems with a strong stochastic component, the generation of more temporally organized activity could be induced. These observations have many potential implications across diverse fields of basic and applied science, for example in the design of distributed sensing systems based on wireless coupled oscillators, in network identification and control, and in the interpretation of neuroscientific and other dynamical data.
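    The kind of comparison described above can be illustrated on a toy system. The following is a minimal sketch, not the authors' code: a star of diffusively coupled Rössler oscillators in a weak-coupling regime, with normalised permutation entropy computed at the hub and at one leaf. The node count, coupling strength, integration times and entropy order are all illustrative assumptions.

        # Minimal sketch (not the authors' code): star of diffusively coupled
        # Rossler oscillators; compare permutation entropy at the hub vs. a leaf.
        # All parameter values are illustrative assumptions.
        import math
        from itertools import permutations
        import numpy as np
        from scipy.integrate import solve_ivp

        N, eps = 6, 0.02                                   # 1 hub + 5 leaves, weak coupling
        A = np.zeros((N, N)); A[0, 1:] = A[1:, 0] = 1.0    # star adjacency matrix

        def rossler_net(t, y, a=0.2, b=0.2, c=5.7):
            x, yv, z = y[0::3], y[1::3], y[2::3]
            coupling = eps * (A @ x - A.sum(1) * x)        # diffusive coupling on x
            dx = -yv - z + coupling
            dy = x + a * yv
            dz = b + z * (x - c)
            return np.ravel(np.column_stack([dx, dy, dz]))

        sol = solve_ivp(rossler_net, (0, 500), np.random.rand(3 * N),
                        t_eval=np.arange(100, 500, 0.1))   # discard the transient

        def permutation_entropy(x, m=4, tau=1):
            """Normalised permutation entropy of order m."""
            patterns = list(permutations(range(m)))
            counts = np.zeros(len(patterns))
            for i in range(len(x) - (m - 1) * tau):
                idx = tuple(np.argsort(x[i:i + m * tau:tau]))
                counts[patterns.index(idx)] += 1
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log(p)).sum() / np.log(math.factorial(m)))

        for node in (0, 1):                                # hub first, then one leaf
            print(node, permutation_entropy(sol.y[3 * node]))

    In practice one would repeat the run over several coupling strengths and initial conditions before drawing any conclusion; this single run only shows how the quantities named in the abstract can be computed.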

    MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning

    Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient spiking neural networks still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86 mm² in 65-nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously-proposed SNNs, while having no penalty in the energy-accuracy tradeoff.
    Comment: This document is the paper as accepted for publication in the IEEE Transactions on Biomedical Circuits and Systems journal (2019); the fully-edited paper is available at https://ieeexplore.ieee.org/document/876400
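    The abstract describes stochastic spike-driven synaptic plasticity (S-SDSP) acting on binary weights. The sketch below is a software caricature of that idea, not the MorphIC hardware or its exact rule: a single leaky integrate-and-fire neuron whose binary input weights are potentiated or depressed on presynaptic spikes depending on the instantaneous membrane potential, with each candidate update committed only with some probability. Thresholds, probabilities and time constants are assumed values.

        # Software caricature (not the MorphIC hardware or its exact S-SDSP rule):
        # LIF neuron with binary weights and a stochastic spike-driven update.
        import numpy as np

        rng = np.random.default_rng(0)
        n_pre, T, dt = 64, 200, 1.0
        tau_m, v_th, theta_mem = 20.0, 1.0, 0.5   # membrane tau, spike and plasticity thresholds
        p_update = 0.05                           # probability of committing a candidate update

        w = rng.integers(0, 2, n_pre).astype(float)   # binary weights in {0, 1}
        v = 0.0
        pre_spikes = rng.random((T, n_pre)) < 0.05    # Poisson-like input spike trains

        for t in range(T):
            v += dt / tau_m * (-v) + 0.05 * (w @ pre_spikes[t])   # leaky integration
            if v >= v_th:                                         # output spike and reset
                v = 0.0
            # On each presynaptic spike, propose potentiation if the membrane
            # potential is above theta_mem, depression otherwise; commit the
            # proposal only with probability p_update (the stochastic part).
            pre = pre_spikes[t]
            commit = rng.random(n_pre) < p_update
            w = np.where(pre & commit & (v >= theta_mem), 1.0, w)
            w = np.where(pre & commit & (v < theta_mem), 0.0, w)

        print("fraction of potentiated synapses:", w.mean())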

    Development of a generic activities model of command and control

    This paper reports on five different models of command and control. Four existing models are reviewed: a process model, a contextual control model, a decision ladder model and a functional model. Further to this, command and control activities are analysed in three distinct domains: armed forces, emergency services and civilian services. From this analysis, taxonomies of command and control activities are developed that give rise to an activities model of command and control. This model will be used to guide further research into technological support of command and control activities.

    Computational study of resting state network dynamics

    The aim of this thesis is to show, through a simulation with The Virtual Brain software, the most important properties of brain dynamics during the resting state, that is, when one is not engaged in any specific task and is not subject to any particular stimulus. We begin by explaining what the resting state is through a brief historical review of its discovery, then survey some of the experimental methods used in the analysis of brain activity, and then highlight the difference between structural and functional connectivity. Next, the concepts of dynamical systems, a theory indispensable for understanding a complex system such as the brain, are briefly summarised. In the following chapter, through a 'bottom-up' approach, the main structures of the nervous system are described from a biological standpoint, from the neuron to the cerebral cortex. This is also explained from the point of view of dynamical systems, illustrating the pioneering Hodgkin-Huxley model and then the concept of population dynamics. After this preliminary part, the simulation is described in detail. First, further information is given about The Virtual Brain software, the resting-state network model used in the simulation is defined, and the 'connectome' employed is described. The results of the analysis performed on the obtained data are then presented, showing how criticality and noise play a key role in the emergence of this background activity of the brain. These results are then compared with the most important and recent research in this field, which confirms the findings of our work. Finally, we briefly report the consequences that a full understanding of the resting-state phenomenon and the possibility of virtualising brain activity would have in the medical and clinical fields.
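    A generic sketch, not the thesis code and not The Virtual Brain API, of the kind of model this abstract describes: noise-driven oscillator nodes coupled through a structural connectivity matrix, integrated with an Euler-Maruyama scheme, from which functional connectivity is estimated as the correlation of the simulated signals. Network size, operating point and noise level are assumed values chosen only to make the example run.

        # Generic sketch (not the thesis code, not The Virtual Brain API):
        # noise-driven Stuart-Landau nodes on a toy structural network;
        # functional connectivity = correlation of the simulated signals.
        import numpy as np

        rng = np.random.default_rng(1)
        N, T, dt = 20, 20000, 0.01
        a, g, sigma = -0.05, 0.5, 0.05        # slightly subcritical node, coupling, noise
        C = rng.random((N, N)) * (rng.random((N, N)) < 0.2)   # toy structural connectivity
        np.fill_diagonal(C, 0.0)

        z = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
        x = np.empty((T, N))
        for t in range(T):
            coupling = g * (C @ z - C.sum(1) * z)             # diffusive coupling
            dz = (a + 1j - np.abs(z) ** 2) * z + coupling
            noise = sigma * np.sqrt(dt) * (rng.standard_normal(N)
                                           + 1j * rng.standard_normal(N))
            z = z + dt * dz + noise
            x[t] = z.real

        FC = np.corrcoef(x[5000:].T)          # functional connectivity after the transient
        print("mean off-diagonal FC:", FC[~np.eye(N, dtype=bool)].mean())

    Moving the node parameter a toward zero, i.e. toward the bifurcation, and varying the noise level sigma is the simplest way to probe the interplay between criticality and noise that the thesis attributes to resting-state activity.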

    Algorithm Hardware Codesign for High Performance Neuromorphic Computing

    Driven by the massive application of the Internet of Things (IoT), embedded systems, Cyber-Physical Systems (CPS), etc., there is an increasing demand to apply machine intelligence in these power-limited scenarios. Though deep learning has achieved impressive performance on various realistic and practical tasks such as anomaly detection, pattern recognition and machine vision, the ever-increasing computational complexity and model size of Deep Neural Networks (DNNs) make it challenging to deploy them in the aforementioned scenarios, where computation, memory and energy resources are all limited. Early studies show that the energy efficiency of biological systems can be orders of magnitude higher than that of digital systems. Hence, taking inspiration from biological systems, neuromorphic computing and Spiking Neural Networks (SNNs) have drawn attention as alternative solutions for energy-efficient machine intelligence. Though believed promising, neuromorphic computing is hardly used for real-world applications. A major problem is that the performance of SNNs is limited compared with DNNs due to the lack of efficient training algorithms. In an SNN, a neuron's output is a spike, which is mathematically represented by a Dirac delta function. Because of the non-differentiable nature of spikes, gradient descent cannot be used directly to train SNNs, so algorithm-level innovation is needed. Next, as neuromorphic computing is an emerging paradigm, hardware- and architecture-level innovation is also required to support new algorithms and to explore its potential. In this work, we present a comprehensive algorithm-hardware codesign for neuromorphic computing. On the algorithm side, we address the training difficulty. We first derive a flexible SNN model that retains critical neural dynamics, and then develop an algorithm to train SNNs to learn temporal patterns. Next, we apply the proposed algorithm to multivariate time series classification tasks to demonstrate its advantages. On the hardware side, we develop a systematic solution on FPGA, optimized for the proposed SNN model, to enable high-performance inference. In addition, we explore emerging devices and propose a memristor-based neuromorphic design, with neuron and synapse circuits that can replicate important neural dynamics such as the filtering effect and adaptive threshold.
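    The training difficulty mentioned here comes from the spike being a hard threshold whose derivative is a Dirac delta. The abstract does not spell out the algorithm developed in the thesis; one widely used workaround, shown below purely as an illustration and not necessarily the method proposed in this work, is a surrogate gradient: keep the hard threshold in the forward pass and substitute a smooth derivative in the backward pass.

        # Illustrative surrogate-gradient workaround for the non-differentiable
        # spike (not necessarily the algorithm developed in this thesis).
        import torch

        class SurrogateSpike(torch.autograd.Function):
            @staticmethod
            def forward(ctx, v_minus_thresh):
                ctx.save_for_backward(v_minus_thresh)
                return (v_minus_thresh > 0).float()      # hard threshold going forward

            @staticmethod
            def backward(ctx, grad_output):
                (v,) = ctx.saved_tensors
                # fast-sigmoid surrogate derivative in place of the Dirac delta
                return grad_output / (1.0 + 10.0 * v.abs()) ** 2

        spike = SurrogateSpike.apply

        # One leaky integrate-and-fire neuron unrolled over time, with learnable
        # input weights; gradients flow through the surrogate.
        w = torch.randn(5, requires_grad=True)
        v, beta, thresh = torch.zeros(()), 0.9, 1.0
        inputs = (torch.rand(20, 5) < 0.3).float()       # random input spike trains
        loss = torch.zeros(())
        for x in inputs:
            v = beta * v + w @ x                          # leaky integration
            s = spike(v - thresh)
            v = v * (1.0 - s)                             # reset on spike
            loss = loss + (s - 0.5) ** 2                  # toy objective on firing
        loss.backward()
        print(w.grad)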

    Developments of serious games in education

    As Human Computer Interaction technologies evolve, they are supporting the generation of innovative solutions in a broad range of domains. Among them, Serious Games are defined as a new type of computer game capable of stimulating users to learn by playing and competing against themselves, against other users or against a computer application. While they can be applied to a broad range of fields and ages, these games are becoming especially relevant in educational contexts and for the most recent generation of students, which is growing up in a new technological environment, very different from the one we had some years ago. However, in order to become fully accepted as a teaching/learning tool in both formal and informal contexts, this technology still has to overcome several challenges. Given these considerations, this chapter presents a state-of-the-art review of work done in this field, followed by the description of two real-world projects, helping to understand the applicability of this technology, but also its inherent challenges.

    Part 3: Systemic risk in ecology and engineering

    The Federal Reserve Bank of New York released a report -- New Directions for Understanding Systemic Risk -- that presents key findings from a cross-disciplinary conference that it cosponsored in May 2006 with the National Academy of Sciences' Board on Mathematical Sciences and Their Applications. The pace of financial innovation over the past decade has increased the complexity and interconnectedness of the financial system. This development is important to central banks, such as the Federal Reserve, because of their traditional role in addressing systemic risks to the financial system. To encourage innovative thinking about systemic issues, the New York Fed partnered with the National Academy of Sciences to bring together more than 100 experts on systemic risk from 22 countries to compare cross-disciplinary perspectives on monitoring, addressing and preventing this type of risk. This report, released as part of the Bank's Economic Policy Review series, outlines some of the key points concerning systemic risk made by the various disciplines represented - including economic research, ecology, physics and engineering - as well as presentations on market-oriented models of financial crises, and systemic risk in the payments system and the interbank funds market. The report concludes with observations gathered from the sessions and a discussion of potential applications to policy. The three papers presented in this conference session highlighted the positive feedback effects that produce herdlike behavior in markets, and the subsequent discussion focused in part on means of encouraging heterogeneous investment strategies to counter such behavior. Participants in the session also discussed the types of models used to study systemic risk and commented on the challenges and trade-offs researchers face in developing their models.
    Keywords: Financial risk management; Financial markets; Financial stability; Financial crises

    Learning to process with spikes and to localise pulses

    In the last few decades, deep learning with artificial neural networks (ANNs) has emerged as one of the most widely used techniques in tasks such as classification and regression, achieving competitive results and in some cases even surpassing human-level performance. Nonetheless, as ANN architectures are optimised towards empirical results and depart from their biological precursors, how exactly human brains process information using short electrical pulses called spikes remains a mystery. Hence, in this thesis, we explore the problem of learning to process with spikes and to localise pulses. We first consider spiking neural networks (SNNs), a type of ANN that more closely mimics biological neural networks in that neurons communicate with one another using spikes. This unique architecture allows us to look into the role of heterogeneity in learning. Since it is conjectured that information is encoded by the timing of spikes, we are particularly interested in the heterogeneity of neuronal time constants. We trained SNNs for classification tasks on a range of visual and auditory neuromorphic datasets, which contain streams of events (spike times) instead of conventional frame-based data, and show that the overall performance is improved by allowing the neurons to have different time constants, especially on tasks with richer temporal structure. We also find that the learned time constants are distributed similarly to those experimentally observed in some mammalian cells. In addition, we demonstrate that learning with heterogeneity improves robustness against hyperparameter mistuning. These results suggest that heterogeneity may be more than a byproduct of noisy processes and perhaps serves a key role in learning in changing environments, yet it has been overlooked in basic artificial models. While neuromorphic datasets, which are often captured by neuromorphic devices that closely model the corresponding biological systems, have enabled us to explore the more biologically plausible SNNs, there still exists a gap in understanding how spike times encode information in actual biological neural networks such as human brains, as such data is difficult to acquire owing to the trade-off between timing precision and the number of cells that can be recorded electrically at the same time. Instead, what we usually obtain are low-rate discrete samples of trains of filtered spikes. Hence, in the second part of the thesis, we focus on a different type of problem involving pulses: retrieving the precise pulse locations from these low-rate samples. We make use of the finite rate of innovation (FRI) sampling theory, which states that perfect reconstruction is possible for classes of continuous non-bandlimited signals that have a small number of free parameters. However, existing FRI methods break down under very noisy conditions due to the so-called subspace swap event. Thus, we present two novel model-based learning architectures: Deep Unfolded Projected Wirtinger Gradient Descent (Deep Unfolded PWGD) and FRI Encoder-Decoder Network (FRIED-Net). The former is based on an existing iterative denoising algorithm for subspace-based methods, while the latter directly models the relationship between the samples and the locations of the pulses using an autoencoder-like network. Using a stream of K Diracs as an example, we show that both algorithms are able to overcome the breakdown inherent in the existing subspace-based methods.
Moreover, we extend our FRIED-Net framework beyond conventional FRI methods by considering the case where the pulse shape is unknown, and show that the shape can be learned using backpropagation. This corresponds to the application of spike detection in real-world calcium imaging data, where we achieve competitive results. Finally, we go beyond canonical FRI signals and demonstrate that FRIED-Net is able to reconstruct streams of pulses with different shapes.
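    For context on the baseline the abstract refers to, the following is a sketch of the classical annihilating-filter (subspace) recovery of a periodic stream of K Diracs from its low-frequency Fourier coefficients; this is the kind of method said to break down under strong noise, not FRIED-Net or Deep Unfolded PWGD themselves. The number of Diracs, their locations and amplitudes are made-up example values.

        # Sketch of the classical annihilating-filter baseline for a periodic
        # stream of K Diracs (not FRIED-Net or Deep Unfolded PWGD).
        import numpy as np

        K, tau = 3, 1.0                                    # number of Diracs, period
        t_true = np.array([0.12, 0.47, 0.81])              # made-up locations
        a_true = np.array([1.0, 0.7, 1.3])                 # made-up amplitudes

        # Fourier coefficients: X[m] proportional to sum_k a_k exp(-2j*pi*m*t_k/tau)
        m = np.arange(-K, K + 1)
        X = (a_true * np.exp(-2j * np.pi * m[:, None] * t_true / tau)).sum(axis=1)

        # Annihilating filter: a (K+1)-tap filter in the null space of a Toeplitz
        # matrix built from the coefficients; its roots encode the locations.
        T = np.array([[X[K + i - l] for l in range(K + 1)] for i in range(K)])
        h = np.linalg.svd(T)[2][-1].conj()
        roots = np.roots(h)
        t_hat = np.sort(np.mod(-np.angle(roots) * tau / (2 * np.pi), tau))
        print(t_hat)   # recovers ~[0.12, 0.47, 0.81] in this noiseless case

    In the noiseless case the Toeplitz matrix has an exact one-dimensional null space; with noise that subspace can be misidentified (the subspace swap mentioned above), which is the failure mode the learned architectures are designed to mitigate.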