Statistical physics of neural systems with non-additive dendritic coupling
How neurons process their inputs crucially determines the dynamics of
biological and artificial neural networks. In such neural and neural-like
systems, synaptic input is typically considered to be merely transmitted
linearly or sublinearly by the dendritic compartments. Yet, single-neuron
experiments report pronounced supralinear dendritic summation of sufficiently
synchronous and spatially close-by inputs. Here, we provide a statistical
physics approach to study the impact of such non-additive dendritic processing
on single neuron responses and the performance of associative memory tasks in
artificial neural networks. First, we compute the effect of random input to a
neuron incorporating nonlinear dendrites. This approach is independent of the
details of the neuronal dynamics. Second, we use those results to study the
impact of dendritic nonlinearities on the network dynamics in a paradigmatic
model for associative memory, both numerically and analytically. We find that
dendritic nonlinearities maintain network convergence and increase the
robustness of memory performance against noise. Interestingly, an intermediate
number of dendritic branches is optimal for memory functionality.
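As a rough illustration of the non-additive processing described above, the following sketch contrasts a purely additive soma with one whose dendritic branches supralinearly amplify sufficiently strong within-branch input. The piecewise nonlinearity, threshold, and branch count are hypothetical stand-ins, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_nonlinearity(s, theta=2.0, boost=2.0):
    """Supralinear dendritic summation: branch input at or above a
    threshold is amplified (hypothetical piecewise form)."""
    return np.where(s >= theta, boost * s, s)

def neuron_response(x, n_branches):
    """Sum synaptic input within each branch, apply the branch
    nonlinearity, then integrate linearly at the soma."""
    branches = np.array_split(x, n_branches)
    return sum(branch_nonlinearity(b.sum()) for b in branches)

x = rng.binomial(1, 0.3, size=60).astype(float)  # random binary input
linear = x.sum()                                  # purely additive soma
nonlin = neuron_response(x, n_branches=6)         # non-additive dendrites
```

Because each branch output is never smaller than its raw input sum, the non-additive response dominates the linear one whenever any branch crosses threshold.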
Nonlinearities in Artificial Neural Systems Interpreted as an Application of Ising Physics
This review discusses the nonlinearities arising in processes such as spin glasses, finite-field models, Hamiltonian functions, learning and storage capacity, and mean-field systems within the area of physics related to artificial neural networks, where the brain's network structure is interpreted as an Ising spin system. It is shown that nonlinearities play an essential role in this field of applied physics.
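The correspondence the review describes can be made concrete: a Hopfield-style associative net with Hebbian couplings shares its Hamiltonian with an Ising spin system, and stored patterns sit at energy minima. A minimal sketch with illustrative sizes (not taken from the review):

```python
import numpy as np

rng = np.random.default_rng(1)

def ising_energy(s, J):
    """Ising/Hopfield energy of spin configuration s under couplings J."""
    return -0.5 * s @ J @ s

n = 8
pattern = rng.choice([-1, 1], size=n)          # one stored +-1 pattern
J = np.outer(pattern, pattern).astype(float)   # Hebbian couplings
np.fill_diagonal(J, 0.0)                       # no self-coupling

# The stored pattern lies at an energy minimum; a random spin state
# has energy at least as high.
random_state = rng.choice([-1, 1], size=n)
```

The energy is invariant under a global spin flip, so `pattern` and `-pattern` are degenerate minima, exactly as in the Ising picture.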
Observer techniques for estimating the state-of-charge and state-of-health of VRLABs for hybrid electric vehicles
The paper describes the application of observer-based state-estimation techniques for the real-time prediction of state-of-charge (SoC) and state-of-health (SoH) of lead-acid cells. Specifically, an approach based on the well-known Kalman filter is employed to estimate SoC, with the extended Kalman filter (EKF) subsequently used to accommodate model nonlinearities and predict battery SoH. The underlying dynamic behaviour of each cell is based on a generic Randles' equivalent circuit comprising two capacitors (bulk and surface) and three resistors (terminal, transfer and self-discharging). The presented techniques are shown to correct for offset, drift and long-term state divergence, an unfortunate feature of stand-alone models and more traditional coulomb-counting techniques. Measurements using real-time road data are used to compare the performance of conventional integration-based methods for estimating SoC with that of the presented state-estimation schemes. Results show that the proposed methodologies are superior, with SoC estimated to within 1% of the measured value. Moreover, by accounting for the nonlinearities present within the dynamic cell model, the application of an EKF is shown to provide verifiable indications of the SoH of the cell pack.
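The observer idea can be sketched in its simplest scalar form: a Kalman filter fuses a slowly varying SoC proxy (modelled here as a random walk) with noisy terminal-voltage measurements and corrects an initially biased estimate. The state model, noise covariances, and voltages below are illustrative assumptions, not the paper's Randles-based EKF:

```python
import numpy as np

def kalman_step(x, P, z, q=1e-5, r=1e-2):
    """One predict/update cycle of a scalar Kalman filter.
    State model: random walk with process-noise variance q;
    z is the noisy terminal voltage with measurement variance r."""
    P_pred = P + q                     # predict: covariance grows
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x + K * (z - x)            # correct with the innovation
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(2)
true_v = 12.6                          # assumed "true" cell voltage
x, P = 10.0, 1.0                       # deliberately biased initial guess
for _ in range(200):
    z = true_v + rng.normal(0.0, 0.1)  # noisy terminal-voltage reading
    x, P = kalman_step(x, P, z)
```

The large initial covariance makes the first corrections aggressive, after which the gain shrinks and the filter averages out measurement noise; this is the offset-and-drift correction behaviour the paper exploits, in miniature.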
Linear and logarithmic capacities in associative neural networks
A model of associative memory incorporating global linearity and pointwise nonlinearities in a state space of n-dimensional binary vectors is considered. Attention is focused on the ability to store a prescribed set of state vectors as attractors within the model. Within the framework of such associative nets, a specific strategy for information storage that utilizes the spectrum of a linear operator is considered in some detail. Comparisons are made between this spectral strategy and a prior scheme that utilizes the sum of Kronecker outer products of the prescribed set of state vectors, which are to function nominally as memories. The storage capacity of the spectral strategy is linear in n (the dimension of the state space under consideration), whereas an asymptotic result of n/(4 log n) holds for the storage capacity of the outer-product scheme. Computer-simulated results show that the spectral strategy stores information more efficiently. The preprocessing costs incurred in the two algorithms are estimated, and recursive strategies are developed for their computation.
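The outer-product scheme referred to above can be sketched directly. The sizes below are illustrative and sit well below the n/(4 log n) capacity, so the prescribed vectors remain attractors:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 128, 3                     # state-space dimension, stored patterns

# Outer-product storage: W is the sum of Kronecker outer products of
# the prescribed +-1 state vectors, with self-couplings zeroed.
patterns = rng.choice([-1, 1], size=(m, n))
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

def update(s):
    """One synchronous threshold update of the associative net."""
    return np.where(W @ s >= 0, 1, -1)

# A cue corrupted in 10 positions is driven back to the stored memory.
cue = patterns[0].copy()
cue[:10] *= -1
recalled = update(cue)
```

With only 3 patterns in 128 dimensions the crosstalk between memories is small, so each stored vector is a fixed point and nearby cues are attracted to it; loading more patterns erodes this until the n/(4 log n) limit is reached.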
Hardware-Amenable Structural Learning for Spike-based Pattern Classification using a Simple Model of Active Dendrites
This paper presents a spike-based model which employs neurons with
functionally distinct dendritic compartments for classifying high dimensional
binary patterns. The synaptic inputs arriving on each dendritic subunit are
nonlinearly processed before being linearly integrated at the soma, giving the
neuron a capacity to perform a large number of input-output mappings. The model
utilizes sparse synaptic connectivity; where each synapse takes a binary value.
The optimal connection pattern of a neuron is learned by using a simple
hardware-friendly, margin enhancing learning algorithm inspired by the
mechanism of structural plasticity in biological neurons. The learning
algorithm groups correlated synaptic inputs on the same dendritic branch. Since
the learning results in modified connection patterns, it can be incorporated
into current event-based neuromorphic systems with little overhead. This work
also presents a branch-specific spike-based version of this structural
plasticity rule. The proposed model is evaluated on benchmark binary
classification problems and its performance is compared against that achieved
using Support Vector Machine (SVM) and Extreme Learning Machine (ELM)
techniques. Our proposed method attains comparable performance while utilizing
10 to 50% fewer computational resources than the other reported techniques.

Comment: Accepted for publication in Neural Computation.
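The forward pass of such a neuron can be sketched in a few lines. The sizes, the binary sparse connectivity, and the squaring subunit nonlinearity below are illustrative assumptions standing in for the paper's dendritic model; the structural-plasticity learning rule itself is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n_branches, k = 100, 10, 5   # input dim, dendritic branches, synapses/branch

# Sparse binary connectivity: each branch selects k input lines,
# and each synapse carries a binary (0/1) weight.
conn = np.zeros((n_branches, d))
for b in range(n_branches):
    conn[b, rng.choice(d, size=k, replace=False)] = 1

def neuron_output(x, g=np.square):
    """Each dendritic subunit sums its synaptic input nonlinearly
    (via g) before linear integration at the soma."""
    branch_sums = conn @ x          # per-branch synaptic drive
    return g(branch_sums).sum()     # somatic sum of subunit outputs

x = rng.binomial(1, 0.2, size=d).astype(float)  # binary input pattern
y = neuron_output(x)
```

Because the subunit nonlinearity rewards coincident input on the same branch, moving correlated synapses onto a common branch (what the learning rule does) raises the response to the patterns being classified, which is the intuition behind the margin-enhancing connectivity search.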