
    Nearly extensive sequential memory lifetime achieved by coupled nonlinear neurons

    Many cognitive processes rely on the ability of the brain to hold sequences of events in short-term memory. Recent studies have revealed that such memory can be read out from the transient dynamics of a network of neurons. However, the memory performance of such a network in buffering past information has only been rigorously estimated in networks of linear neurons. When signal gain is kept low, so that neurons operate primarily in the linear part of their response nonlinearity, the memory lifetime is bounded by the square root of the network size. In this work, I demonstrate that it is possible to achieve a memory lifetime almost proportional to the network size, "an extensive memory lifetime", when the nonlinearity of neurons is appropriately utilized. The analysis of neural activity revealed that nonlinear dynamics prevented the accumulation of noise by partially removing it at each time step. With this error-correcting mechanism, I demonstrate that a memory lifetime of order N/log N, where N is the network size, can be achieved. Comment: 21 pages, 5 figures; accepted for publication in Neural Computation.
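    The linear baseline the abstract builds on is easy to probe numerically. The following sketch (illustrative parameters; it does not reproduce the paper's nonlinear error-correcting construction) drives a leaky orthogonal linear network with a white-noise signal and decodes past inputs with a least-squares readout; the decay of recall with delay is the behaviour that the square-root bound describes.

```python
# Illustrative memory curve of a noisy linear recurrent network.
# All parameters (N, T, sigma, the leak factor) are example choices.
import numpy as np

rng = np.random.default_rng(0)
N, T, sigma = 200, 5000, 0.05

Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
W = 0.98 * Q                                  # slightly leaky orthogonal recurrence
v = rng.standard_normal(N) / np.sqrt(N)       # unit-scale input weights

s = rng.standard_normal(T)                    # input stream to be buffered
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = W @ x + v * s[t] + sigma * rng.standard_normal(N)  # linear dynamics + noise
    X[t] = x

# Memory curve: how well a linear readout recovers the input d steps back.
for d in (1, 10, 50, 100, 200):
    w, *_ = np.linalg.lstsq(X[d:], s[:T - d], rcond=None)
    r = np.corrcoef(X[d:] @ w, s[:T - d])[0, 1]
    print(f"delay {d:3d}: r^2 = {r**2:.3f}")   # recall degrades with delay
```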

    A Comprehensive Workflow for General-Purpose Neural Modeling with Highly Configurable Neuromorphic Hardware Systems

    In this paper we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim to establish this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware-experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: the integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; and an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations, and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process, and it represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter are demonstrated with a variety of experimental results.
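    The first workflow step, a simulator-independent model description in PyNN, can be sketched concretely. The snippet below is a generic PyNN script run against the NEST software backend; the abstract does not name the hardware's PyNN module, so that import is a stand-in, and retargeting the same model would amount to changing that single line.

```python
# Minimal PyNN sketch: the same network description runs on any PyNN backend.
# The NEST backend stands in here; the hardware backend's module name is not
# given in the abstract, so this import is a placeholder choice.
import pyNN.nest as sim

sim.setup(timestep=0.1)                                    # ms

stim = sim.Population(20, sim.SpikeSourcePoisson(rate=15.0))
neurons = sim.Population(100, sim.IF_cond_exp())

sim.Projection(stim, neurons,
               sim.FixedProbabilityConnector(p_connect=0.1),
               sim.StaticSynapse(weight=0.01, delay=1.0),
               receptor_type="excitatory")

neurons.record("spikes")
sim.run(1000.0)                                            # simulate 1 s
spikes = neurons.get_data("spikes")                        # backend-independent
sim.end()
```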

    Talking Nets: A Multi-Agent Connectionist Approach to Communication and Trust between Individuals

    A multi-agent connectionist model is proposed that consists of a collection of individual recurrent networks that communicate with each other, and as such is a network of networks. The individual recurrent networks simulate the process of information uptake, integration, and memorization within individual agents, while the communication of beliefs and opinions between agents is propagated along connections between the individual networks. A crucial aspect of belief updating based on information from other agents is the trust placed in the information provided. In the model, trust is determined by the consistency with the receiving agent's existing beliefs, and it results in changes of the connections between individual networks, called trust weights. Thus activation spreading and weight change between individual networks are analogous to standard connectionist processes, although trust weights serve a specific function: they lead to selective propagation, and thus filtering out, of less reliable information, and they implement Grice's (1975) maxims of quality and quantity in communication. The unique contribution of communicative mechanisms beyond the intra-personal processing of individual networks was explored in simulations of key phenomena involving persuasive communication and polarization, lexical acquisition, the spreading of stereotypes and rumors, and the failure to share unique information in group decisions.
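    The trust-weight mechanism can be made concrete with a deliberately simplified sketch. Everything below is hypothetical (the update rules, learning rates, and the scalar consistency measure are not the model's actual connectionist equations); it only illustrates how trust can gate belief uptake while being driven, in turn, by consistency.

```python
# Hypothetical sketch of trust-gated belief updating between two agents.
# The update rules and constants are illustrative, not the model's equations.
import numpy as np

def receive(belief, message, trust, lr_belief=0.2, lr_trust=0.1):
    """Update a receiving agent's belief vector and its trust in the sender."""
    consistency = 1.0 - np.abs(belief - message).mean()       # 1 = fully consistent
    belief = belief + lr_belief * trust * (message - belief)  # trust gates uptake
    trust = float(np.clip(trust + lr_trust * (consistency - 0.5), 0.0, 1.0))
    return belief, trust

beliefs = np.array([0.9, 0.1, 0.8])   # receiver's current beliefs (activations)
message = np.array([0.8, 0.2, 0.7])   # a mostly consistent sender
trust = 0.5
for _ in range(5):
    beliefs, trust = receive(beliefs, message, trust)
print(beliefs.round(3), round(trust, 3))   # beliefs converge, trust grows
```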

    Interpreting multi-stable behaviour in input-driven recurrent neural networks

    Recurrent neural networks (RNNs) are computational models inspired by the brain. Although RNNs stand out as state-of-the-art machine learning models for challenging tasks such as speech recognition, handwriting recognition, and language translation, they are plagued by the so-called vanishing/exploding gradient issue, which prevents training RNNs to learn long-term dependencies in sequential data. Moreover, a problem of interpretability affects these models, known as the "black-box issue" of RNNs. We attempt to open the black box by developing a mechanistic interpretation of errors occurring during computation. We do this from a dynamical systems theory perspective, specifically building on the notion of Excitable Network Attractors. Our methodology is effective at least for those tasks where a number of attractors and a switching pattern between them must be learned. RNNs can be seen as high-dimensional nonlinear dynamical systems driven by external inputs. When RNNs are investigated analytically, the literature often neglects the input-driven property or drops it in favour of tight constraints on the input driving the dynamics, which do not match the reality of RNN applications. Trying to bridge this gap, we framed RNN dynamics driven by generic input sequences in the context of nonautonomous dynamical systems theory. This brought us to enquire deeply into a fundamental principle established for RNNs known as the echo state property (ESP). In particular, we argue that input-driven RNNs can be reliable computational models even without satisfying the classical ESP formulation. We prove a sort of input-driven fixed point theorem and exploit it to (i) demonstrate the existence and uniqueness of a globally attracting solution for strongly (in amplitude) input-driven RNNs, (ii) deduce the existence of multiple responses for certain input signals, which can be reliably exploited for computational purposes, and (iii) study the stability of attracting solutions w.r.t. input sequences. Finally, we highlight the active role of the input in determining qualitative changes in the RNN dynamics, e.g. the number of stable responses, in contrast to the commonly studied qualitative changes due to variations of model parameters.
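    The active role of the input can be illustrated numerically. In the sketch below (sizes and scales are illustrative, not the thesis's constructions), a tanh RNN whose recurrent weights alone violate the classical echo state property still forgets its initial condition under a strong-amplitude drive, while a weak drive lets the initial condition persist.

```python
# Numerical sketch: a strong input entrains a tanh RNN with spectral
# radius > 1, collapsing trajectories from different initial states.
import numpy as np

rng = np.random.default_rng(1)
N, T = 100, 500
W = 1.5 * rng.standard_normal((N, N)) / np.sqrt(N)   # spectral radius ~ 1.5
v = rng.standard_normal(N)

def final_distance(amp):
    """Distance between two trajectories started from different states."""
    u = amp * np.sin(0.2 * np.arange(T))             # shared driving input
    xa, xb = rng.standard_normal(N), rng.standard_normal(N)
    for t in range(T):
        xa = np.tanh(W @ xa + v * u[t])
        xb = np.tanh(W @ xb + v * u[t])
    return np.linalg.norm(xa - xb)

print("weak drive  :", final_distance(0.1))   # initial conditions persist
print("strong drive:", final_distance(3.0))   # trajectories typically collapse
```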

    Function approximation in high-dimensional spaces using lower-dimensional Gaussian RBF networks.

    by Jones Chui. Thesis (M.Phil.)--Chinese University of Hong Kong, 1992. Includes bibliographical references (leaves 62-[66]).

    Chapter 1 - Introduction
      1.1 Fundamentals of Artificial Neural Networks
        1.1.1 Processing Unit
        1.1.2 Topology
        1.1.3 Learning Rules
      1.2 Overview of Various Neural Network Models
      1.3 Introduction to the Radial Basis Function Networks (RBFs)
        1.3.1 Historical Development
        1.3.2 Some Intrinsic Problems
      1.4 Objective of the Thesis
    Chapter 2 - Low-dimensional Gaussian RBF networks (LowD RBFs)
      2.1 Architecture of LowD RBF Networks
        2.1.1 Network Structure
        2.1.2 Learning Rules
      2.2 Construction of LowD RBF Networks
        2.2.1 Growing Heuristic
        2.2.2 Pruning Heuristic
        2.2.3 Summary
    Chapter 3 - Application examples
      3.1 Chaotic Time Series Prediction
        3.1.1 Performance Comparison
        3.1.2 Sensitivity Analysis of MSE THRESHOLDS
        3.1.3 Effects of Increased Embedding Dimension
        3.1.4 Comparison with Tree-Structured Network
        3.1.5 Overfitting Problem
      3.2 Nonlinear prediction of speech signal
        3.2.1 Comparison with Linear Predictive Coding (LPC)
        3.2.2 Performance Test in Noisy Conditions
        3.2.3 Iterated Prediction of Speech
    Chapter 4 - Conclusion
      4.1 Discussions
      4.2 Limitations and Suggestions for Further Research
    Bibliography
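    The thesis's underlying technique, function approximation with Gaussian radial basis functions and least-squares output weights, can be sketched in a few lines. The sketch below is a plain full-dimensional RBF fit with illustrative parameters; it does not reproduce the low-dimensional-unit construction or the growing/pruning heuristics of Chapter 2.

```python
# Minimal Gaussian RBF function approximation with a linear output layer.
# Centers, width, unit count, and the target function are example choices.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(500, 2))              # 2-D inputs
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])      # target function

centers = X[rng.choice(len(X), 40, replace=False)]  # 40 Gaussian units
width = 0.3
# Activation matrix: phi_ij = exp(-||x_i - c_j||^2 / (2 * width^2))
Phi = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2 * width**2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)         # linear output weights

pred = Phi @ w
print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```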