    A three-threshold learning rule approaches the maximal capacity of recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. That model, however, has a poor storage capacity compared with the capacity achieved by perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution of the neurons' synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. Between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix and found that both the fraction of zero-weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns.
    Comment: 24 pages, 10 figures, to be published in PLOS Computational Biology
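
    The plasticity rule described above is concrete enough to sketch in code. The following is a minimal illustration, assuming a specific learning rate, threshold values, and afferent-input strength that are not taken from the paper:

```python
import numpy as np

# Minimal sketch of the three-threshold rule as described in the abstract.
# The threshold values, learning rate, and input strength below are
# illustrative assumptions, not parameters taken from the paper.

def three_threshold_update(W, x, h, theta_low, theta_mid, theta_high, lr=0.01):
    """Update synapses from active inputs based on each neuron's local field h.

    W: (N, N) weights; x: (N,) binary pattern presented online;
    h: (N,) local fields of the postsynaptic neurons.
    """
    active = x.astype(bool)                      # only synapses from active inputs change
    in_window = (h > theta_low) & (h < theta_high)
    sign = np.where(h >= theta_mid, 1.0, -1.0)   # potentiate above, depress below theta_mid
    delta = lr * in_window * sign                # no plasticity outside the window
    W[:, active] += delta[:, None]               # same update for every active input synapse
    np.fill_diagonal(W, 0.0)                     # no self-connections
    return W

N = 100
W = np.zeros((N, N))
x = (np.random.rand(N) < 0.5).astype(float)      # one memory pattern
h = W @ x + 3.0 * x                              # strong afferent current -> bimodal fields
W = three_threshold_update(W, x, h, theta_low=-1.0, theta_mid=0.0, theta_high=1.0)
```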

    Solvable Neural Network Model for Input-Output Associations: Optimal Recall at the Onset of Chaos

    In neural information processing, an input modulates neural dynamics to generate a desired output. To unravel the dynamics and the underlying neural connectivity enabling such input-output associations, we propose an exactly soluble neural-network model with a connectivity matrix explicitly constructed from the inputs and the required outputs. An analytic form of the response to the input is derived, and three distinct types of responses, including chaotic dynamics emerging through bifurcations as the input strength varies, are obtained depending on the neural sensitivity and the number of inputs. Optimal performance is achieved at the onset of chaos, and the relevance of the results to cognitive dynamics is discussed.
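
    As a rough illustration of the kind of construction the abstract describes, a connectivity matrix built directly from input and required-output patterns, here is a hedged sketch; the random pattern statistics, the tanh rate dynamics, and all parameter values are assumptions, not the paper's exact soluble model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 5                                   # network size and number of associations (assumed)
xi_in = rng.choice([-1.0, 1.0], size=(P, N))    # input patterns
xi_out = rng.choice([-1.0, 1.0], size=(P, N))   # required output patterns

# Connectivity built explicitly from inputs and required outputs,
# in the spirit of the abstract; the paper's exact construction may differ.
J = (xi_out.T @ xi_in) / N

def run(beta, gamma, mu, steps=200):
    """Iterate rate dynamics x <- tanh(beta * (J x + gamma * input)); beta plays
    the role of the neural sensitivity, gamma that of the input strength."""
    x = 0.1 * rng.standard_normal(N)
    for _ in range(steps):
        x = np.tanh(beta * (J @ x + gamma * xi_in[mu]))
    return x

# Recall quality: overlap of the final state with the required output.
overlap = run(beta=2.0, gamma=0.5, mu=0) @ xi_out[0] / N
```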

    A Digital Neuromorphic Architecture Efficiently Facilitating Complex Synaptic Response Functions Applied to Liquid State Machines

    Information in neural networks is represented as weighted connections, or synapses, between neurons. This poses a problem, as the primary computational bottleneck for neural networks is the vector-matrix multiply when inputs are multiplied by the network weights. Conventional processing architectures are not well suited to simulating neural networks, often requiring large amounts of energy and time. Additionally, synapses in biological neural networks are not binary connections, but exhibit a nonlinear response function as neurotransmitters are emitted and diffuse between neurons. Inspired by neuroscience principles, we present a digital neuromorphic architecture, the Spiking Temporal Processing Unit (STPU), capable of modeling arbitrarily complex synaptic response functions without requiring additional hardware components. We consider the paradigm of spiking neurons with temporally coded information, as opposed to the non-spiking, rate-coded neurons used in most neural networks. In this paradigm, we examine liquid state machines applied to speech recognition and show how a liquid state machine with temporal dynamics maps onto the STPU, demonstrating the flexibility and efficiency of the STPU for instantiating neural algorithms.
    Comment: 8 pages, 4 figures, preprint of 2017 IJCNN
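
    The temporally extended synaptic response the STPU is said to model can be illustrated with a simple kernel convolution. This sketch uses a textbook double-exponential response; the kernel shape and time constants are assumptions, not the STPU's actual response functions:

```python
import numpy as np

# A synapse with a temporally extended response, illustrated with a
# double-exponential kernel; shape and time constants are assumed.

dt, T = 0.1, 50.0                        # time step and window in ms (assumed)
t = np.arange(0.0, T, dt)
tau_rise, tau_decay = 1.0, 5.0           # rise/decay time constants in ms (assumed)
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()                   # normalise the peak response to 1

spikes = np.zeros_like(t)
spikes[[50, 120, 300]] = 1.0             # three presynaptic spikes (temporal code)

# Postsynaptic current: the spike train convolved with the response kernel,
# rather than an instantaneous binary connection.
psc = np.convolve(spikes, kernel)[: t.size] * dt
```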

    Localization technique of IoT nodes using artificial neural networks (ANN)

    One of the ways to improve the calculations involved in determining the position of a node in an IoT measurement system is to use artificial neural networks (ANN) to compute the coordinates. The method described in the article is based on measuring the RSSI (Received Signal Strength Indicator), whose value is then processed by the neural network. The proposed system therefore works in two stages. In the first stage, RSSI samples are collected; the node locations are then determined on an ongoing basis. The coordinates of the anchor nodes (i.e., sensors with fixed, previously known positions) and the matrix of RSSI coefficients are used in the learning process of the neural network. The RSSI matrix determined for the system containing the nodes with unknown positions is then fed into the neural network inputs. The result of this work is a system and algorithm that determine the location of an object without processing data separately in nodes with low computational performance.
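
    A minimal sketch of this two-stage scheme, assuming a standard scikit-learn MLP and a synthetic log-distance path-loss model; the `rssi` helper, the anchor layout, and all constants below are hypothetical choices used only to make the example self-contained:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
anchors = rng.uniform(0, 20, size=(6, 2))          # 6 anchors with known positions (assumed)

def rssi(points):
    """Hypothetical log-distance path-loss model mapping positions to RSSI vectors."""
    d = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=-1)
    return -40.0 - 20.0 * np.log10(d + 1e-6)

train_xy = rng.uniform(0, 20, size=(500, 2))       # stage 1: calibration positions
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(rssi(train_xy), train_xy)                # learn RSSI -> coordinates

unknown = rng.uniform(0, 20, size=(3, 2))          # stage 2: locate nodes from RSSI alone
estimates = model.predict(rssi(unknown))
```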

    Matrix Neural Networks

    Traditional neural networks assume vectorial inputs, as the network is arranged in layers, each a single line of computing units called neurons. This special structure requires non-vectorial inputs such as matrices to be converted into vectors, which can be problematic. Firstly, spatial information among the elements of the data may be lost during vectorisation. Secondly, the solution space becomes very large, demanding special treatment of the network parameters and high computational cost. To address these issues, we propose matrix neural networks (MatNet), which take matrices directly as inputs. Each neuron senses summarised information through a bilinear mapping from lower-layer units, in exactly the same way as in classic feed-forward neural networks. Under this structure, the combination of backpropagation and gradient descent can be utilised to obtain the network parameters efficiently. Furthermore, it can be conveniently extended to multimodal inputs. We apply MatNet to MNIST handwritten digit classification and image super-resolution tasks to show its effectiveness. Without much tweaking, MatNet achieves performance comparable to the state-of-the-art methods on both tasks, with considerably reduced complexity.
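
    The bilinear mapping at the heart of MatNet can be sketched directly: the hidden representation is H = sigma(U X V^T + B), so a matrix input X is never flattened. The layer sizes and the logistic activation below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 28, 28                                    # input matrix size, e.g. an MNIST image
p, q = 10, 10                                    # hidden representation size (assumed)

U = 0.1 * rng.standard_normal((p, m))            # left projection
V = 0.1 * rng.standard_normal((q, n))            # right projection
B = np.zeros((p, q))

def bilinear_layer(X):
    """Hidden map H = sigma(U X V^T + B): each hidden unit (k, l) senses the
    bilinear form u_k^T X v_l, so the matrix input is never vectorised."""
    Z = U @ X @ V.T + B
    return 1.0 / (1.0 + np.exp(-Z))              # logistic activation (assumed)

H = bilinear_layer(rng.standard_normal((m, n)))  # one forward pass on a random "image"
```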

    Multi-Objective Evolutionary Neural Network to Predict Graduation Success at the United States Military Academy

    This paper presents an evolutionary neural network approach to classifying student graduation status based upon selected academic, demographic, and other indicators. A Pareto-based, multi-objective evolutionary algorithm using the Strength Pareto Evolutionary Algorithm (SPEA2) fitness evaluation scheme simultaneously evolves connection weights and identifies the neural network topology, with network complexity and classification accuracy as objective functions. A combined vector-matrix representation scheme and differential evolution recombination operators are employed. The model is trained, tested, and validated using 5,100 student samples, with data compiled from admissions records and institutional research databases. The evolutionary neural network model uses these inputs to classify students as graduates, late graduates, or non-graduates. Results of the hybrid method show higher mean classification rates (88%) than the current methodology (80%), with a potential savings of $130M. Additionally, the proposed method is more efficient in that the algorithm identifies a less complex neural network topology.
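
    The differential-evolution recombination mentioned above can be sketched as the classic DE/rand/1 mutation with binomial crossover applied to flattened weight vectors; the population size and the F and CR values below are conventional defaults, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)
pop = rng.standard_normal((20, 57))             # 20 candidate weight vectors (sizes assumed)
F, CR = 0.8, 0.9                                # conventional DE defaults (assumed)

def de_offspring(pop, i):
    """Classic DE/rand/1 mutation with binomial crossover for individual i."""
    a, b, c = rng.choice([j for j in range(len(pop)) if j != i], size=3, replace=False)
    mutant = pop[a] + F * (pop[b] - pop[c])     # differential mutation
    cross = rng.random(pop.shape[1]) < CR       # binomial crossover mask
    cross[rng.integers(pop.shape[1])] = True    # guarantee at least one mutant gene
    return np.where(cross, mutant, pop[i])

# A trial vector that a multi-objective scheme such as SPEA2 would accept or
# reject based on the accuracy and network-complexity objectives.
trial = de_offspring(pop, 0)
```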