
    Correlated patterns in non-monotonic graded-response perceptrons

    The optimal capacity of graded-response perceptrons with non-monotonic input-output relations, storing biased and spatially correlated patterns, is studied. It is shown that only the structure of the output patterns is important for the overall performance of the perceptrons. Comment: 4 pages, 4 figures

    A polynomial training algorithm for calculating perceptrons of optimal stability

    Recomi (REpeated COrrelation Matrix Inversion) is a polynomially fast algorithm for finding optimally stable solutions of the perceptron learning problem. For random unbiased and biased patterns it is shown that the algorithm is able to find optimal solutions, if any exist, in at most O(N^4) floating-point operations. Even beyond the critical storage capacity alpha_c the algorithm is able to find locally stable solutions (with negative stability) at the same speed. There are no divergent time scales in the learning process. A full proof of convergence cannot yet be given; only major constituents of a proof are shown. Comment: 11 pages, LaTeX, 4 EPS figures
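
    The abstract names the idea, repeated inversion of the pattern correlation matrix, without spelling out the algorithm. The NumPy sketch below is an assumption about how such an active-set scheme could look (equalize the stabilities of a candidate set of patterns by solving a linear system in their correlation matrix, then grow or shrink the set); it is illustrative only, not a transcription of Recomi, and the function name is invented.

    import numpy as np

    def optimal_stability_sketch(xi, y, max_iter=200, tol=1e-10):
        """Active-set sketch of maximal-stability perceptron learning via
        repeated inversion of the pattern correlation matrix (an assumed
        reading of the Recomi idea, not the published algorithm)."""
        P, N = xi.shape
        z = y[:, None] * xi                    # patterns premultiplied by their labels
        w = z.sum(axis=0)                      # Hebb vector, used only for initialisation
        active = [int(np.argmin(z @ w))]       # start from the least stable pattern
        for _ in range(max_iter):
            Z = z[active]
            C = Z @ Z.T / N                    # correlation matrix of the active set
            x = np.linalg.lstsq(C, np.ones(len(active)), rcond=None)[0]
            if np.any(x < -tol):               # drop patterns with negative embedding strength
                active = [a for a, xa in zip(active, x) if xa > -tol]
                continue
            w = x @ Z                          # candidate coupling vector
            stab = z @ w / np.linalg.norm(w)   # stabilities of all patterns
            kappa = stab[active].min()         # common stability of the active patterns
            worst = int(np.argmin(stab))
            if stab[worst] >= kappa - tol:     # no pattern violates the current margin
                return w, kappa
            active.append(worst)               # otherwise add the worst pattern and repeat
        return w, stab.min()

    # toy usage: P = N/2 random unbiased patterns
    rng = np.random.default_rng(0)
    N, P = 200, 100
    xi = rng.choice([-1.0, 1.0], size=(P, N))
    y = rng.choice([-1.0, 1.0], size=P)
    w, kappa = optimal_stability_sketch(xi, y)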

    Replica Symmetry Breaking and the Kuhn-Tucker Cavity Method in simple and multilayer Perceptrons

    Using the Kuhn-Tucker cavity method introduced in an earlier paper, we study optimal stability learning for situations where, in the replica formalism, replica symmetry may be broken, namely (i) the case of a simple perceptron above the critical loading, and (ii) the case of two-layer AND-perceptrons when one learns with maximal stability. We find that the deviation of our cavity solution from the replica symmetric one in these cases is a clear indication of the necessity of replica symmetry breaking. In any case the cavity solution tends to underestimate the storage capabilities of the networks. Comment: 32 pages, LaTeX source with 9 EPS figures enclosed, accepted by J. Phys. I (France)

    Generalizing with perceptrons in case of structured phase- and pattern-spaces

    We investigate the influence of different kinds of structure on the learning behaviour of a perceptron performing a classification task defined by a teacher rule. The underlying pattern distribution is permitted to have spatial correlations, and the prior distribution of the teacher coupling vectors is itself assumed to be nonuniform, so classification tasks of quite different difficulty are included. As learning algorithms we discuss Hebbian learning, Gibbs learning, and Bayesian learning with different priors, using methods from statistics and the replica formalism. We find that the Hebb rule is quite sensitive to the structure of the actual learning problem, failing asymptotically in most cases. In contrast, the behaviour of the more sophisticated methods of Gibbs and Bayes learning is influenced by the spatial correlations only in an intermediate regime of α, where α specifies the size of the training set. For the Bayesian case we show how enhanced prior knowledge improves the performance. Comment: LaTeX, 32 pages with EPS figures, accepted by J Phys
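
    As a concrete reference point for the Hebb rule discussed above, here is a minimal teacher-student simulation in Python/NumPy. It uses i.i.d. Gaussian patterns and a generic teacher vector, i.e. none of the structured pattern or prior distributions studied in the paper, and measures the generalization error via eps = arccos(R)/pi, valid for isotropic inputs, where R is the student-teacher overlap.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 500                                   # input dimension
    teacher = rng.standard_normal(N)          # teacher coupling vector (generic, unstructured)

    def gen_error(student, teacher):
        """Generalization error of a spherical perceptron: eps = arccos(R) / pi,
        with R the cosine overlap between student and teacher couplings."""
        R = student @ teacher / (np.linalg.norm(student) * np.linalg.norm(teacher))
        return np.arccos(np.clip(R, -1.0, 1.0)) / np.pi

    for alpha in (0.5, 2.0, 8.0):             # training-set size P = alpha * N
        P = int(alpha * N)
        xi = rng.standard_normal((P, N))      # i.i.d. patterns, no spatial correlations
        labels = np.sign(xi @ teacher)        # classification defined by the teacher rule
        student = labels @ xi                 # Hebb rule: w = sum_mu sigma_mu xi^mu
        print(f"alpha = {alpha:4.1f}   eps = {gen_error(student, teacher):.3f}")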

    A two step algorithm for learning from unspecific reinforcement

    We study a simple learning model based on the Hebb rule to cope with "delayed", unspecific reinforcement. In spite of the unspecific nature of the information feedback, convergence to asymptotically perfect generalization is observed, with a rate that depends, however, on the learning parameters in a non-universal way. Asymptotic convergence can be as fast as that of Hebbian learning, but may be slower. Moreover, for a certain range of parameter settings, it depends on the initial conditions whether the system reaches the regime of asymptotically perfect generalization or instead approaches a stationary state of poor generalization. Comment: 13 pages LaTeX, 4 figures, note on a biologically motivated stochastic variant of the algorithm added

    A three-threshold learning rule approaches the maximal capacity of recurrent neural networks

    Understanding the theoretical foundations of how memories are encoded and retrieved in neural populations is a central challenge in neuroscience. A popular theoretical scenario for modeling memory function is the attractor neural network scenario, whose prototype is the Hopfield model. The model has a poor storage capacity, compared with the capacity achieved with perceptron learning algorithms. Here, by transforming the perceptron learning rule, we present an online learning rule for a recurrent neural network that achieves near-maximal storage capacity without an explicit supervisory error signal, relying only upon locally accessible information. The fully-connected network consists of excitatory binary neurons with plastic recurrent connections and non-plastic inhibitory feedback stabilizing the network dynamics; the memory patterns are presented online as strong afferent currents, producing a bimodal distribution for the neuron synaptic inputs. Synapses corresponding to active inputs are modified as a function of the value of the local fields with respect to three thresholds. Above the highest threshold, and below the lowest threshold, no plasticity occurs. In between these two thresholds, potentiation/depression occurs when the local field is above/below an intermediate threshold. We simulated and analyzed a network of binary neurons implementing this rule and measured its storage capacity for different sizes of the basins of attraction. The storage capacity obtained through numerical simulations is shown to be close to the value predicted by analytical calculations. We also measured the dependence of capacity on the strength of external inputs. Finally, we quantified the statistics of the resulting synaptic connectivity matrix, and found that both the fraction of zero weight synapses and the degree of symmetry of the weight matrix increase with the number of stored patterns. Comment: 24 pages, 10 figures, to be published in PLOS Computational Biology
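
    The plasticity rule itself is described operationally in the abstract; the NumPy sketch below implements that description. The threshold values, learning rate, external-current strength, and the non-negativity clipping of the excitatory weights are illustrative assumptions, not the paper's parameters.

    import numpy as np

    def three_threshold_update(W, pattern, theta_low, theta_mid, theta_high,
                               lr=0.05, g_ext=10.0):
        """One online presentation of a binary pattern under a three-threshold
        plasticity rule (sketch following the abstract; parameters are assumed)."""
        s = pattern.astype(float)                       # active = 1, silent = 0
        h = W @ s + g_ext * (2 * s - 1)                 # local fields: recurrent input + strong afferent current
        eligible_pre = s > 0                            # only synapses with an active input are plastic
        in_window = (h > theta_low) & (h < theta_high)  # outside the outer thresholds: no plasticity
        potentiate = in_window & (h > theta_mid)        # field above the intermediate threshold
        depress = in_window & (h <= theta_mid)          # field below the intermediate threshold
        dW = np.zeros_like(W)
        dW[np.ix_(potentiate, eligible_pre)] += lr
        dW[np.ix_(depress, eligible_pre)] -= lr
        np.fill_diagonal(dW, 0.0)                       # no self-connections
        return np.clip(W + dW, 0.0, None)               # excitatory weights stay non-negative

    # toy usage: present random half-active patterns to a small network
    rng = np.random.default_rng(0)
    N = 100
    W = np.zeros((N, N))
    for _ in range(50):
        W = three_threshold_update(W, rng.random(N) < 0.5,
                                   theta_low=-15.0, theta_mid=0.0, theta_high=15.0)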

    Image denoising with multi-layer perceptrons, part 1: comparison with existing algorithms and with bounds

    Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with plain multi-layer perceptrons (MLPs) applied to image patches. We will show that by training on large image databases we are able to outperform the current state-of-the-art image denoising methods. In addition, our method achieves results that are superior to one type of theoretical bound and goes a long way toward closing the gap with a second type of theoretical bound. Our approach is easily adapted to less extensively studied types of noise, such as mixed Poisson-Gaussian noise, JPEG artifacts, salt-and-pepper noise and noise resembling stripes, for which we achieve excellent results as well. We will show that combining a block-matching procedure with MLPs can further improve the results on certain images. In a second paper, we detail the training trade-offs and the inner mechanisms of our MLPs.
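
    The overall setup, a plain MLP regressing clean patches from noisy ones over a large image database, can be summarised in a few lines. The PyTorch sketch below uses assumed patch sizes, layer widths, noise level and optimizer settings; none of these values are taken from the paper.

    import torch
    import torch.nn as nn

    IN_PATCH = OUT_PATCH = 17                   # hypothetical patch sizes
    model = nn.Sequential(                      # plain MLP mapping noisy patch -> clean patch
        nn.Linear(IN_PATCH * IN_PATCH, 512), nn.Tanh(),
        nn.Linear(512, 512), nn.Tanh(),
        nn.Linear(512, OUT_PATCH * OUT_PATCH),
    )
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    def train_step(clean_patches, sigma=25 / 255):
        """One SGD step: corrupt clean patches with Gaussian noise (assumed noise
        model) and regress the clean patch from the noisy one."""
        noisy = clean_patches + sigma * torch.randn_like(clean_patches)
        opt.zero_grad()
        loss = loss_fn(model(noisy), clean_patches)
        loss.backward()
        opt.step()
        return loss.item()

    # toy usage: random tensors standing in for patches drawn from a large image database
    batch = torch.rand(64, IN_PATCH * IN_PATCH)
    print(train_step(batch))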