29 research outputs found

    Threshold-induced phase transitions in perceptrons

    Error rates of a Boolean perceptron with threshold, and with either a spherical or an Ising constraint on the weight vector, are calculated within a one-step replica symmetry breaking (RSB) treatment for storing patterns drawn from biased input and output distributions. For an unbiased output distribution and non-zero stability of the patterns, we find a critical load, α_p, above which two solutions to the saddle-point equations appear: one with higher free energy and zero threshold, and a dominant solution with non-zero threshold. We examine this second-order phase transition and the dependence of α_p on the required pattern stability, κ, for both one-step RSB and replica symmetry (RS) in the spherical case, and for one-step RSB in the Ising case.
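    For reference, the quantities named in this abstract can be written out in a standard form; the symbols θ for the threshold and N for the number of inputs below are notational assumptions, since the abstract does not fix them.

        % Stability of pattern \mu for weights w, threshold \theta, input \xi^\mu
        % and desired output \sigma^\mu \in \{-1,+1\} (spherical constraint |w|^2 = N):
        \Delta^\mu = \frac{\sigma^\mu \left( \mathbf{w}\cdot\boldsymbol{\xi}^\mu - \theta \right)}{\lVert \mathbf{w} \rVert}
        % Storing p patterns at stability \kappa requires \Delta^\mu \ge \kappa for all \mu;
        % the load is \alpha = p/N, and \alpha_p marks the load above which the
        % non-zero-threshold saddle point becomes the dominant solution.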

    Temporal context and conditional associative learning

    Background: We investigated how temporal context affects the learning of arbitrary visuo-motor associations. Human observers viewed highly distinguishable fractal objects and learned to choose for each object the one motor response (of four) that was rewarded. Some objects were consistently preceded by specific other objects, while other objects lacked this task-irrelevant but predictive context.
    Results: The results of five experiments showed that predictive context consistently and significantly accelerated associative learning. A simple model of reinforcement learning, in which three successive objects informed response selection, reproduced our behavioral results (see the sketch below).
    Conclusions: Our results imply that not just the representation of the current event, but also the representations of past events, are reinforced during conditional associative learning. In addition, these findings are broadly consistent with attractor network models of associative learning, which predict a persistent representation of past objects.
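    As a rough illustration of the kind of model described (not the authors' exact implementation), the sketch below treats the current object together with the two preceding objects as the state of a simple tabular reinforcement learner; the object identities, learning rate and exploration rate are placeholder assumptions.

        import random
        from collections import defaultdict

        N_RESPONSES = 4          # one rewarded motor response per object, as in the task
        ALPHA = 0.2              # learning rate (assumed value)
        EPSILON = 0.1            # exploration rate (assumed value)

        # Values indexed by the last three objects seen, so past events
        # share in the credit for a rewarded choice.
        Q = defaultdict(lambda: [0.0] * N_RESPONSES)

        def choose_response(context):
            """context: tuple of the three most recent object identities."""
            if random.random() < EPSILON:
                return random.randrange(N_RESPONSES)
            values = Q[context]
            return max(range(N_RESPONSES), key=lambda r: values[r])

        def update(context, response, reward):
            """Delta-rule update of the chosen response's value."""
            Q[context][response] += ALPHA * (reward - Q[context][response])

        # Example trial loop with a made-up object-to-response mapping.
        correct = {"A": 0, "B": 1, "C": 2, "D": 3}
        history = ["A", "B"]                      # task-irrelevant but predictive context
        for obj in ["C", "D", "A", "B", "C"]:
            history.append(obj)
            context = tuple(history[-3:])
            r = choose_response(context)
            update(context, r, reward=1.0 if r == correct[obj] else 0.0)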

    Optimisation in neural networks

    No full text
    SIGLE: Available from British Library Document Supply Centre - DSC:D181916 / BLDSC - British Library Document Supply Centre, GB (United Kingdom)

    LETTER TO THE EDITOR: Training of optimal cluster separation networks

    No full text
    Abstract. Finding the optimal separation of two clusters of normalized vectors corresponds to training thresholds and weights in a neural network of maximum stability. In order to achieve this, two iterative algorithms are presented which treat threshold and weights all in one, avoiding the need to calculate any intermediate 'test' quantities. Convergence is proved, and the separation/stability obtained is shown to match theoretical predictions and to be superior to existing algorithms. Linear separation of two clusters of vectors has been studied intensively. In particular, in the context of neural networks, supervised learning algorithms have been presented which aim to produce a positive gap size or stability between the two clusters of output. Initially, the Hebb rule leads to the well-understood Hopfield model. This model has been shown to yield low or even negative stability of the embedding of patterns (Hertz et al 1991). Therefore, algorithms have been proposed which overcome this drawback. The Adaline algorithm will always yield positive stability for loading capacities α less than one (Diederich and Opper 1987, Widrow and Hoff 1960). The minover (Krauth and Mezard 1987) and AdaTron (Anlauf and Biehl 1989) algorithms give the optimal (i.e. best obtainable) stability for
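    As a hedged sketch of the general approach this letter builds on (a minover-style rule, not necessarily the letter's own algorithm), threshold and weights can be trained "all in one" by appending a constant component to each input, so the threshold becomes just one more weight:

        import numpy as np

        def max_stability_train(patterns, labels, n_sweeps=1000):
            """Minover-style training (Krauth & Mezard 1987 flavour): repeatedly
            reinforce the pattern with the smallest stability.
            patterns: (p, N) array of +/-1 inputs; labels: (p,) array of +/-1 outputs.
            The threshold is handled by an extra constant input component."""
            p, n = patterns.shape
            x = np.hstack([patterns, np.ones((p, 1))])   # absorb threshold into the weights
            w = np.zeros(n + 1)
            for _ in range(n_sweeps):
                fields = labels * (x @ w)
                worst = int(np.argmin(fields))           # least stable pattern
                w += labels[worst] * x[worst] / n        # Hebbian step on that pattern
            norm = np.linalg.norm(w)
            stabilities = labels * (x @ w) / norm if norm > 0 else np.zeros(p)
            return w, stabilities.min()

        # Hypothetical usage with random +/-1 patterns.
        rng = np.random.default_rng(0)
        X = rng.choice([-1.0, 1.0], size=(20, 50))
        y = rng.choice([-1.0, 1.0], size=20)
        w, kappa = max_stability_train(X, y)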

    Tuning Hidden Markov Model for Speech Emotion Recognition

    No full text
    In this article we introduce a speech emotion recognition

    Fast learning of biased patterns in neural networks.

    No full text
    Usual neural network gradient descent training algorithms require training times of the same order as the number of neurons, N, if the patterns are biased. In this paper, modified algorithms are presented which require training times equal to those in unbiased cases, which are of order 1. Exact convergence proofs are given. Gain parameters which produce minimal learning times in large networks are computed by replica methods. It is demonstrated how these modified algorithms are applied in order to produce four types of solutions to the learning problem: 1. a solution with all internal fields equal to the desired output; 2. the Adaline (or pseudo-inverse) solution; 3. the perceptron of optimal stability without threshold; and 4. the perceptron of optimal stability with threshold.
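    Of the four solution types listed, the Adaline / pseudo-inverse one is easy to write down explicitly; the minimal numpy sketch below (with made-up biased patterns and without the paper's bias-specific modifications) sets every internal field equal to the desired output when the number of patterns is below the number of neurons.

        import numpy as np

        def pseudo_inverse_solution(patterns, outputs):
            """Return weights w such that patterns @ w equals the desired outputs
            in the least-squares sense; for p < N patterns in general position the
            internal fields match the outputs exactly (solution types 1 and 2 above)."""
            return np.linalg.pinv(patterns) @ outputs

        # Hypothetical biased +/-1 patterns: each input is +1 with probability 0.8.
        rng = np.random.default_rng(1)
        N, p = 100, 40
        X = np.where(rng.random((p, N)) < 0.8, 1.0, -1.0)
        y = rng.choice([-1.0, 1.0], size=p)

        w = pseudo_inverse_solution(X, y)
        assert np.allclose(X @ w, y)   # all internal fields equal the desired outputs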

    Advances in Confidence Measures for Large Vocabulary

    No full text
    This paper addresses the correct choice and combination of confidence measures in large vocabulary speech recognition tasks. We classify single words within continuous as well as large vocabulary utterances into two categories: utterances within the vocabulary which are recognized correctly, and other utterances, namely misrecognized utterances or (less frequent) out-of-vocabulary (OOV)
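    As a generic illustration of combining several word-level confidence measures into the two-class decision described here (the feature names, weights and threshold are hypothetical, not the paper's actual measures), a simple logistic combination might look like this:

        import math

        # Hypothetical per-word confidence features: acoustic score ratio,
        # language-model score and N-best agreement (all assumed, for illustration).
        WEIGHTS = {"acoustic_ratio": 2.0, "lm_score": 0.5, "nbest_agreement": 1.5}
        BIAS = -1.0
        THRESHOLD = 0.5

        def combined_confidence(features):
            """Logistic combination of individual confidence measures."""
            z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
            return 1.0 / (1.0 + math.exp(-z))

        def classify_word(features):
            """Return 'correct' or 'misrecognized/OOV' for one recognized word."""
            return "correct" if combined_confidence(features) >= THRESHOLD else "misrecognized/OOV"

        print(classify_word({"acoustic_ratio": 0.9, "lm_score": 0.4, "nbest_agreement": 0.8}))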