20 research outputs found

    Optimal coloured perceptrons

    Full text link
    Ashkin-Teller type perceptron models are introduced. Their maximal capacity per number of couplings is calculated within a first-step replica-symmetry-breaking Gardner approach. The results are compared with extensive numerical simulations using several algorithms. Comment: 8 pages in LaTeX with 2 eps figures; RSB1 calculations have been added.

    The AdaTron: an Adaptive Perceptron Algorithm

    No full text
    A new learning algorithm for neural networks of spin-glass type is proposed. Using the concept of adaptive learning, it is found to relax exponentially towards the perceptron of optimal stability. The patterns can be presented either sequentially or in parallel. A proof of convergence is given, and the method's performance is studied numerically.
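    As a rough illustration of the adaptive-learning idea in the abstract above, here is a minimal Python sketch of an AdaTron-style update on non-negative embedding strengths; the function name, the learning-rate parameter gamma, and the exact scaling are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def adatron(patterns, labels, gamma=1.0, epochs=100):
    """Sketch of an AdaTron-style update (assumed form, not the paper's exact code).

    patterns: (P, N) array of +/-1 inputs; labels: (P,) array of +/-1 targets.
    Each pattern mu carries a non-negative embedding strength x[mu]; the weight
    vector is the label-weighted sum of patterns scaled by these strengths.
    """
    P, N = patterns.shape
    x = np.zeros(P)          # embedding strengths, kept non-negative
    w = np.zeros(N)          # current weight vector
    for _ in range(epochs):
        for mu in range(P):
            E = labels[mu] * (w @ patterns[mu])          # stability of pattern mu
            delta = max(-x[mu], gamma * (1.0 - E))       # clip so x[mu] stays >= 0
            x[mu] += delta
            w += delta * labels[mu] * patterns[mu] / N   # incremental weight update
    return w, x
```

    For linearly separable pattern sets and a learning rate in the range 0 < gamma < 2, updates of this type are reported to converge to the perceptron of maximal stability; the sequential sweep shown here could equally be replaced by a parallel presentation of the patterns.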

    Perceptron learning by constrained optimization: the AdaTron algorithm

    No full text

    Ultraconservative Online Algorithms for Multiclass Problems

    No full text
    In this paper we study online classification algorithms for multiclass problems in the mistake bound model. The hypotheses we use maintain one prototype vector per class. Given an input instance, a multiclass hypothesis computes a similarity-score between each prototype and the input instance and then sets the predicted label to be the index of the prototype achieving the highest similarity. To design and analyze the learning algorithms in this paper we introduce the notion of ultraconservativeness. Ultraconservative algorithms are algorithms that update only the prototypes attaining similarity-scores which are higher than the score of the correct label's prototype. We start by describing a family of additive ultraconservative algorithms where each algorithm in the family updates its prototypes by finding a feasible solution for a set of linear constraints that depend on the instantaneous similarity-scores. We then discuss a specific online algorithm that seeks a set of prototypes which have a small norm. The resulting algorithm, which we term MIRA (for Margin Infused Relaxed Algorithm), is ultraconservative as well. We derive mistake bounds for all the algorithms and provide further analysis of MIRA using a generalized notion of the margin for multiclass problems.
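    The ultraconservative update described above lends itself to a compact sketch. The Python class below is a hedged illustration of one simple additive member of the family (uniform weights over the error set), not the MIRA quadratic-optimization variant; the class name and update coefficients are assumptions made for the example.

```python
import numpy as np

class UltraconservativeMulticlass:
    """One prototype vector per class; on a mistake, only prototypes whose
    similarity-score reaches the correct prototype's score are demoted."""

    def __init__(self, n_classes, n_features):
        self.W = np.zeros((n_classes, n_features))   # prototype vectors

    def predict(self, x):
        return int(np.argmax(self.W @ x))            # highest similarity-score wins

    def update(self, x, y):
        scores = self.W @ x
        if int(np.argmax(scores)) == y:
            return                                   # conservative: no update when correct
        # Error set: wrong classes scoring at least the correct class's score.
        E = [r for r in range(len(scores)) if r != y and scores[r] >= scores[y]]
        tau = 1.0 / len(E)                           # uniform demotion weights
        self.W[y] += x                               # promote the correct prototype
        for r in E:
            self.W[r] -= tau * x                     # demote only offending prototypes
```

    Prototypes outside the error set are left untouched, which is exactly the ultraconservative property; MIRA instead chooses the update coefficients by solving a small constrained optimization that keeps the prototype norms small.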

    Analysis of Generic Perceptron-Like Large Margin Classifiers

    No full text