
    Storage capacity of correlated perceptrons

    We consider an ensemble of $K$ single-layer perceptrons exposed to random inputs and investigate the conditions under which the couplings of these perceptrons can be chosen such that prescribed correlations between the outputs occur. A general formalism is introduced, using a multi-perceptron cost function, that makes it possible to determine the maximal number of random inputs as a function of the desired values of the correlations. Replica-symmetric results for $K=2$ and $K=3$ are compared with the properties of two-layer networks of tree structure with a fixed Boolean function between hidden units and output. The results show which correlations in the hidden layer of multi-layer neural networks are crucial for the value of the storage capacity. (Comment: 16 pages, Latex2)
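
    As a hedged illustration of the quantity the paper constrains (not its replica calculation; all parameter values below are arbitrary), this minimal sketch measures the pairwise output correlations of $K$ perceptrons with random, untrained couplings on a common set of random inputs. The paper's question is for how many inputs the couplings can instead be chosen so that these correlations take prescribed values.

```python
# Minimal sketch: empirical output correlations of K perceptrons sharing
# the same random inputs. Couplings here are random, not trained, so the
# measured correlations are near zero off the diagonal.
import numpy as np

rng = np.random.default_rng(0)
N, P, K = 200, 1000, 3                      # input dim, patterns, perceptrons (assumed)

xi = rng.choice([-1.0, 1.0], size=(P, N))   # random +/-1 input patterns
J = rng.normal(size=(K, N))                 # independent, untrained couplings
J /= np.linalg.norm(J, axis=1, keepdims=True)

sigma = np.sign(xi @ J.T)                   # outputs, shape (P, K)
corr = (sigma.T @ sigma) / P                # pairwise output correlations
print(np.round(corr, 2))                    # ~identity matrix for random couplings
```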

    Localization and Mobility Edge in One-Dimensional Potentials with Correlated Disorder

    We show that a mobility edge exists in 1D random potentials with specific long-range correlations. Our approach is based on the relation between the binary correlator of the site potential and the localization length. We give an algorithm to numerically construct potentials with a mobility edge at any given energy inside the allowed zone. Another natural way to generate such potentials is to use chaotic trajectories of non-linear maps. Our numerical calculations for a few particular potentials demonstrate the presence of mobility edges in 1D geometry. (Comment: 4 pages in RevTex and 2 Postscript figures; revised version published in Phys. Rev. Lett. 82 (1999) 406)
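
    A common way to build such long-range-correlated potentials, sketched below under the assumption that spectral filtering of white noise is an acceptable stand-in for the paper's exact algorithm, is to shape the power spectrum of a white-noise sequence; by the Wiener-Khinchin theorem the binary correlator is the inverse Fourier transform of the chosen spectrum, and zeroing the spectrum in a wavenumber window suppresses backscattering there (the cutoff value is illustrative).

```python
# Minimal sketch: long-range-correlated site potential by spectral filtering.
# S(k) is the target power spectrum; its inverse Fourier transform is the
# binary correlator of the potential. Zero spectral weight above the cutoff
# is what can produce a mobility edge in weak 1D correlated disorder.
import numpy as np

rng = np.random.default_rng(1)
L = 2**14
z = rng.normal(size=L)                  # white Gaussian noise
Z = np.fft.rfft(z)

k = np.fft.rfftfreq(L)                  # wavenumbers, cycles per site
S = np.where(k < 0.15, 1.0, 0.0)        # assumed spectrum: zero above a cutoff
V = np.fft.irfft(Z * np.sqrt(S), n=L)   # correlated potential sequence
V *= 0.1 / V.std()                      # weak-disorder normalization (assumed)
```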

    Storage capacity of a constructive learning algorithm

    Upper and lower bounds for the typical storage capacity of a constructive algorithm, the Tiling-like Learning Algorithm for the Parity Machine [M. Biehl and M. Opper, Phys. Rev. A {\bf 44}, 6888 (1991)], are determined in the asymptotic limit of large training-set sizes. The properties of a perceptron with threshold, learning a training set of patterns with a biased distribution of targets, needed as an intermediate step in the capacity calculation, are determined analytically. The lower bound for the capacity, obtained with a cavity method, is proportional to the number of hidden units. The upper bound, obtained under the hypothesis of replica symmetry, is close to the one predicted by Mitchison and Durbin [Biol. Cybern. {\bf 60}, 345 (1989)]. (Comment: 13 pages, 1 figure)
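
    As a rough sketch of the constructive idea (not the paper's analysis, and plain perceptron updates stand in for the actual learning rule; all sizes are arbitrary): a parity machine outputs the product of its hidden-unit signs, so each newly added hidden perceptron can be trained on targets that flip exactly the patterns the current parity still gets wrong.

```python
# Minimal sketch of a tiling-like construction for a parity machine:
# grow hidden units until the parity of their outputs matches the targets.
import numpy as np

rng = np.random.default_rng(2)
N, P = 50, 120                              # assumed input dim and pattern count
X = rng.choice([-1.0, 1.0], size=(P, N))
y = rng.choice([-1.0, 1.0], size=P)         # random targets (storage problem)

def train_perceptron(X, t, epochs=200):
    """Plain perceptron updates; a stand-in for the paper's learning step."""
    J = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, s in zip(X, t):
            if s * (J @ x) <= 0:
                J += s * x / len(x)
    return J

hidden, parity = [], np.ones(P)
while True:
    t = y * parity                          # +1 where correct, -1 where wrong:
    J = train_perceptron(X, t)              # new unit should flip the wrong ones
    hidden.append(J)
    parity *= np.where(X @ J >= 0, 1.0, -1.0)
    err = np.mean(parity != y)
    print(f"{len(hidden)} hidden units, error {err:.2f}")
    if err == 0 or len(hidden) >= 10:
        break
```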

    Synapse efficiency diverges due to synaptic pruning following over-growth

    In the development of the brain, synapses are known to be pruned following over-growth. This pruning following over-growth appears to be a universal phenomenon that occurs in almost all areas -- visual cortex, motor area, association area, and so on. It has been shown numerically that synapse efficiency is increased by systematic deletion. We discuss synapse efficiency in order to evaluate the effect of pruning following over-growth, and analytically show that synapse efficiency diverges as O(log c) in the limit where the connecting rate c is extremely small. Under a fixed-synapse-number criterion, there exists an optimal connecting rate that maximizes memory performance. (Comment: 15 pages, 16 figures)
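
    The qualitative claim, that systematic deletion of weak synapses beats random dilution at the same connecting rate, can be probed with the hedged sketch below (a Hopfield-style associative net with Hebbian weights; it does not reproduce the paper's analytical O(log c) result, and all sizes are arbitrary).

```python
# Minimal sketch: compare recall after systematic pruning (keep the largest
# |w_ij|, mimicking deletion after over-growth) vs. random dilution at the
# same connecting rate c.
import numpy as np

rng = np.random.default_rng(3)
N, P, c = 400, 40, 0.1                      # neurons, patterns, connecting rate

xi = rng.choice([-1.0, 1.0], size=(P, N))
W = (xi.T @ xi) / N                         # Hebbian weights ("over-growth")
np.fill_diagonal(W, 0.0)

def recall_overlap(W, pattern, steps=10):
    """Synchronous retrieval dynamics; overlap 1.0 means perfect recall."""
    s = pattern.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return (s @ pattern) / len(pattern)

# systematic pruning: keep the fraction c of synapses with largest |w_ij|
thresh = np.quantile(np.abs(W[W != 0]), 1 - c)
W_sys = np.where(np.abs(W) >= thresh, W, 0.0)

# random dilution at the same rate, for comparison (crude, asymmetric mask)
W_rand = W * (rng.random(W.shape) < c)

print("systematic:", recall_overlap(W_sys, xi[0]))
print("random:    ", recall_overlap(W_rand, xi[0]))
```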

    Antiresonance and Localization in Quantum Dynamics

    The phenomenon of quantum antiresonance (QAR), i.e., exactly periodic recurrences in quantum dynamics, is studied in a large class of nonintegrable systems, the modulated kicked rotors (MKRs). It is shown that asymptotic exponential localization generally occurs for $\eta$ (a scaled $\hbar$) in the infinitesimal vicinity of QAR points $\eta_0$. The localization length $\xi_0$ is determined from the analytical properties of the kicking potential. This ``QAR-localization'' is associated in some cases with an integrable limit of the corresponding classical systems. The MKR dynamical problem is mapped into pseudorandom tight-binding models exhibiting dynamical localization (DL). By considering exactly solvable cases, numerical evidence is given that QAR-localization is an excellent approximation to DL sufficiently close to QAR. The transition from QAR-localization to DL in a semiclassical regime, as $\eta$ is varied, is studied. It is shown that this transition takes place via a gradual reduction of the influence of the analyticity of the potential on the analyticity of the eigenstates as the level of chaos is increased. (Comment: To appear in Physical Review E. 51 pre-print pages + 9 postscript figures)
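
    For orientation, the sketch below evolves the standard quantum kicked rotor, which one can view as the unmodulated special case of an MKR (the values of $\eta$ and the kick strength are illustrative, not the paper's). One period splits into a kick, diagonal in angle, and a free rotation, diagonal in momentum; dynamical localization shows up as a momentum distribution whose spread stops growing.

```python
# Minimal sketch: split-step evolution of the quantum kicked rotor.
# State is stored in the angle representation; FFT switches to momentum.
import numpy as np

M = 2**11
eta, k = 2.0, 5.0                           # scaled hbar, kick strength (assumed)
theta = 2 * np.pi * np.arange(M) / M
m = np.fft.fftfreq(M, d=1.0 / M)            # integer momentum quantum numbers

U_kick = np.exp(-1j * (k / eta) * np.cos(theta))  # kick, diagonal in angle
U_free = np.exp(-1j * eta * m**2 / 2.0)           # free rotation, diagonal in m

psi = np.ones(M, complex) / np.sqrt(M)      # m = 0 state, angle representation
for _ in range(500):                        # 500 kick periods
    psi = np.fft.ifft(U_free * np.fft.fft(U_kick * psi))

p_m = np.abs(np.fft.fft(psi))**2
p_m /= p_m.sum()
print("spread <m^2> =", np.sum(m**2 * p_m)) # saturates under localization
```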

    Spike-Based Bayesian-Hebbian Learning of Temporal Sequences

    Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model's feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire (AdEx) model neurons. We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters, including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
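
    The core of the BCPNN rule is that a weight is the log ratio of an estimated co-activation probability to the product of the marginal activation probabilities. The sketch below is a rate-based stand-in (the paper uses a spike-based version with cascaded traces and AdEx neurons; time constants, the probability floor epsilon, and the activity statistics here are all illustrative assumptions), with probabilities tracked by exponential moving averages.

```python
# Minimal rate-based sketch of the BCPNN weight rule for one pre/post pair:
# w = log( P(pre & post) / (P(pre) * P(post)) ), probabilities estimated
# online with exponential moving averages.
import numpy as np

tau, eps, dt = 100.0, 1e-3, 1.0             # trace time constant, floor, step (assumed)

def bcpnn_update(pi, pj, pij, xi, xj):
    """One Euler step of the probability traces and the resulting weight."""
    pi  += dt / tau * (xi - pi)
    pj  += dt / tau * (xj - pj)
    pij += dt / tau * (xi * xj - pij)
    w = np.log((pij + eps**2) / ((pi + eps) * (pj + eps)))
    return pi, pj, pij, w

pi = pj = pij = 0.01
rng = np.random.default_rng(4)
for _ in range(2000):                       # correlated binary activity (assumed)
    xi = float(rng.random() < 0.2)
    xj = xi if rng.random() < 0.8 else float(rng.random() < 0.2)
    pi, pj, pij, w = bcpnn_update(pi, pj, pij, xi, xj)
print("learned weight:", round(w, 3))       # positive for correlated units
```

    In the paper's spike-based setting, asymmetric pre/post trace dynamics additionally encode temporal order, which is what supports the learned sequential attractor transitions.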