
    Quantum error-correcting output codes

    Quantum machine learning is the aspect of quantum computing concerned with the design of algorithms capable of generalized learning from labeled training data by effectively exploiting quantum effects. Error-correcting output codes (ECOC) are a standard setting in machine learning for efficiently rendering the collective outputs of a binary classifier, such as the support vector machine, as a multi-class decision procedure. An appropriate choice of error-correcting code further enables incorrect individual classification decisions to be effectively corrected in the composite output. In this paper, we propose an appropriate quantization of the ECOC process, based on the quantum support vector machine. We show that, in addition to the usual benefits of quantizing machine learning, this technique leads to an exponential reduction in the number of logic gates required for effective correction of classification errors.
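
    As background for the classical half of this construction, here is a minimal sketch of standard ECOC decoding, i.e. the classical procedure the paper quantizes. The codebook values are hypothetical; real codebooks are chosen to maximize the minimum Hamming distance between codewords.

    ```python
    import numpy as np

    # Minimal classical ECOC decoding: each class is assigned a binary
    # codeword; one binary classifier is trained per codeword column, and
    # a test point is labeled with the class whose codeword is nearest
    # (in Hamming distance) to the vector of classifier outputs.

    # Illustrative 4-class codebook with 6-bit codewords (hypothetical values;
    # minimum pairwise Hamming distance 4, so one flipped bit is correctable).
    codebook = np.array([
        [0, 0, 0, 1, 1, 1],   # class 0
        [0, 1, 1, 0, 0, 1],   # class 1
        [1, 0, 1, 0, 1, 0],   # class 2
        [1, 1, 0, 1, 0, 0],   # class 3
    ])

    def ecoc_decode(bits: np.ndarray) -> int:
        """Return the class whose codeword is closest in Hamming distance."""
        distances = np.sum(codebook != bits, axis=1)
        return int(np.argmin(distances))

    # One classifier flipped bit 2, yet decoding still recovers class 1,
    # because the codewords are spaced far enough apart to absorb the error.
    print(ecoc_decode(np.array([0, 1, 0, 0, 0, 1])))  # -> 1
    ```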

    Subclass error correcting output codes using Fisher's linear discriminant ratio

    Error-Correcting Output Codes (ECOC) with subclasses are a common way to solve multi-class classification problems. According to this approach, a multi-class problem is decomposed into several binary ones based on the maximization of the mutual information (MI) between the classes and their respective labels. The MI is modelled through the fast quadratic mutual information (FQMI) procedure. However, FQMI is not applicable to large datasets due to its high algorithmic complexity. In this paper we propose Fisher's Linear Discriminant Ratio (FLDR) as an alternative decomposition criterion, which has much lower computational complexity and, in most of the experiments conducted, achieves better classification performance. Furthermore, we compare FLDR against FQMI for facial expression recognition on the Cohn-Kanade database. © 2010 IEEE
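
    For intuition about why a Fisher-ratio criterion is cheap to evaluate, here is a minimal sketch of a two-class Fisher separability score of the kind FLDR builds on; it is computed in closed form from class means and covariances. The exact criterion and the subclass-splitting procedure in the paper may differ.

    ```python
    import numpy as np

    # A minimal Fisher-ratio separability score (illustrative formulation).
    # Higher values mean the two class clusters are further apart relative
    # to their internal spread.

    def fldr(x1: np.ndarray, x2: np.ndarray) -> float:
        """Fisher ratio along the discriminant direction w = S_W^{-1}(mu1 - mu2)."""
        mu1, mu2 = x1.mean(axis=0), x2.mean(axis=0)
        # Pooled within-class scatter matrix.
        s_w = np.cov(x1, rowvar=False) + np.cov(x2, rowvar=False)
        w = np.linalg.solve(s_w, mu1 - mu2)           # discriminant direction
        between = float(w @ (mu1 - mu2)) ** 2         # projected class separation
        within = float(w @ s_w @ w)                   # projected class spread
        return between / within

    rng = np.random.default_rng(0)
    a = rng.normal(0.0, 1.0, size=(200, 2))
    b = rng.normal(3.0, 1.0, size=(200, 2))
    print(fldr(a, b))  # well-separated classes -> large ratio
    ```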

    Optimizing linear discriminant error correcting output codes using particle swarm optimization

    Error Correcting Output Codes are an efficient strategy for dealing with multi-class classification problems. According to this technique, a multi-class problem is decomposed into several binary ones. Binary classifiers are applied to these sub-problems and then, by combining the acquired solutions, we are able to solve the initial multi-class problem. In this paper we consider the optimization of the Linear Discriminant Error Correcting Output Codes framework using Particle Swarm Optimization. In particular, we apply the Particle Swarm Optimization algorithm to optimally select the free parameters that control the split of the initial problem's classes into sub-classes. Moreover, by using the Support Vector Machine as the classifier, we can additionally apply the Particle Swarm Optimization algorithm to tune its free parameters. Our experimental results show that applying Particle Swarm Optimization to the Sub-class Linear Discriminant Error Correcting Output Codes framework yields a significant improvement in classification performance. © 2011 Springer-Verlag
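
    As an illustration of the second use of PSO mentioned above, here is a minimal sketch of a particle swarm tuning an SVM's free parameters (C and gamma) against cross-validated accuracy. The swarm constants, search ranges, and dataset are illustrative assumptions; the paper's PSO variant and its sub-class split parameters are not modeled here.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=10,
                               n_informative=5, n_classes=3, random_state=0)

    def fitness(pos):
        """Cross-validated accuracy for a particle at pos = (log10 C, log10 gamma)."""
        clf = SVC(C=10.0 ** pos[0], gamma=10.0 ** pos[1])
        return cross_val_score(clf, X, y, cv=3).mean()

    n_particles, n_iters = 12, 20
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social weights
    pos = rng.uniform(-3, 3, size=(n_particles, 2))  # search in log10 space
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[np.argmax(pbest_fit)].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
        # Pull each particle toward its personal best and the swarm best.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -3, 3)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()

    print("best (log10 C, log10 gamma):", gbest, "cv accuracy:", pbest_fit.max())
    ```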

    Beyond One-hot Encoding: lower dimensional target embedding

    Target encoding plays a central role when learning Convolutional Neural Networks. In this realm, one-hot encoding is the most prevalent strategy due to its simplicity. However, this widespread encoding scheme assumes a flat label space, thus ignoring rich relationships among labels that can be exploited during training. In large-scale datasets, data does not span the full label space, but instead lies on a low-dimensional output manifold. Following this observation, we embed the targets into a low-dimensional space, drastically improving convergence speed while preserving accuracy. Our contribution is twofold: (i) we show that random projections of the label space are a valid tool for finding such lower-dimensional embeddings, dramatically boosting convergence rates at zero computational cost; and (ii) we propose a normalized eigenrepresentation of the class manifold that encodes the targets with minimal information loss, improving on the accuracy of random-projection encoding while enjoying the same convergence rates. Experiments on CIFAR-100, CUB200-2011, ImageNet, and MIT Places demonstrate that the proposed approach drastically improves convergence speed while reaching very competitive accuracy rates. Comment: Published at Image and Vision Computing
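
    A rough sketch of contribution (i) as described in the abstract: the network regresses to a random low-dimensional projection of the one-hot targets, and the class is recovered at test time by nearest embedded class vector. The dimensions, normalization, and decoding rule below are assumptions for illustration, not the paper's exact construction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, d = 1000, 64                                # 1000 classes embedded in 64 dims
    P = rng.normal(size=(K, d))                    # random projection of the label space
    P /= np.linalg.norm(P, axis=1, keepdims=True)  # unit-norm class embeddings

    def embed(labels: np.ndarray) -> np.ndarray:
        """Low-dimensional regression targets replacing one-hot rows."""
        return P[labels]                           # one-hot @ P == row lookup

    def decode(outputs: np.ndarray) -> np.ndarray:
        """Map network outputs back to class indices by nearest embedding."""
        return np.argmax(outputs @ P.T, axis=1)    # nearest-neighbour decode

    y = rng.integers(0, K, size=5)
    targets = embed(y)                             # what the network would regress to
    # Even with output noise, the nearest embedded class is still recovered.
    print(decode(targets + 0.05 * rng.normal(size=targets.shape)) == y)
    ```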

    Error-correcting codes and applications to large scale classification systems

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 37-39). In this thesis, we study the performance of distributed output coding (DOC) and error-correcting output coding (ECOC) as potential methods for expanding the class of tractable machine-learning problems. Using distributed output coding, we were able to scale a neural-network-based algorithm to handle nearly 10,000 output classes. In particular, we built a prototype OCR engine for Devanagari and Korean texts based upon distributed output coding. We found that the resulting classifiers performed better than existing algorithms, while maintaining small size. Error correction, however, was found to be ineffective at increasing the accuracy of the ensemble. For each language, we also tested the feasibility of automatically finding a good codebook. Unfortunately, the results in this direction were primarily negative. By Jeremy Scott Hurwitz. M.Eng.
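
    A back-of-the-envelope sketch of why output coding scales to this many classes: distinguishing N classes needs only ceil(log2 N) output bits rather than N one-vs-all outputs, and any bits beyond that floor buy redundancy for error correction. The random codebook below is a hypothetical construction, not the thesis's codebooks.

    ```python
    import math
    import numpy as np

    n_classes = 10_000
    min_bits = math.ceil(math.log2(n_classes))
    print(min_bits)                   # 14 bits suffice to index 10,000 classes

    rng = np.random.default_rng(0)
    code_len = 32                     # 32 > 14: extra bits add redundancy
    codebook = rng.integers(0, 2, size=(n_classes, code_len))
    # The minimum pairwise Hamming distance d_min of the codebook governs
    # how many single-bit classifier errors the ensemble can absorb:
    # floor((d_min - 1) / 2).
    ```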