
    Advances in quantum machine learning

    Here we discuss advances in the field of quantum machine learning. The following document offers a hybrid discussion, both reviewing the field as it currently stands and suggesting directions for further research. We include both algorithms and experimental implementations in the discussion. The field's outlook is generally positive, showing significant promise. However, we believe there are appreciable hurdles to overcome before one can claim that it is a primary application of quantum computation.
    Comment: 38 pages, 17 figures

    Neural Class-Specific Regression for face verification

    Face verification is a problem approached in the literature mainly using nonlinear class-specific subspace learning techniques. While kernel-based Class-Specific Discriminant Analysis has been shown to provide excellent performance in small- and medium-scale face verification problems, its application to today's large-scale problems is difficult due to its training space and computational requirements. In this paper, generalizing our previous work on kernel-based class-specific discriminant analysis, we show that class-specific subspace learning can be cast as a regression problem. This allows us to derive linear, (reduced) kernel and neural network-based class-specific discriminant analysis methods using efficient batch and/or iterative training schemes suited for large-scale learning problems. We test the performance of these methods on two datasets describing medium- and large-scale face verification problems.
    Comment: 9 pages, 4 figures
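    To make the regression view concrete, the sketch below learns a linear class-specific projection by closed-form ridge regression onto class-dependent targets and scores a probe by its distance to the target code. The target encoding, subspace dimensionality, and distance-based acceptance rule are illustrative assumptions for exposition only, not the paper's exact formulation.

    # Illustrative sketch: class-specific subspace learning cast as ridge
    # regression. Target encoding and verification rule are assumptions,
    # not the authors' exact method.
    import numpy as np

    def class_specific_regression(X, is_target, n_dims=32, lam=1e-2, seed=0):
        """Learn a linear projection W by regressing samples onto targets
        that separate the target class from all other samples.
        X         : (n_samples, n_features) training data
        is_target : boolean mask, True for target-class samples
        n_dims    : assumed dimensionality of the learned subspace
        lam       : ridge regularization strength
        """
        rng = np.random.default_rng(seed)
        n, d = X.shape
        # Illustrative target encoding: target-class samples regress onto a
        # fixed random code, non-class samples onto its negation.
        code = rng.standard_normal(n_dims)
        T = np.where(is_target[:, None], code, -code)
        # Closed-form ridge regression: W = (X^T X + lam I)^{-1} X^T T
        W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ T)
        return W, code

    # Toy usage with synthetic descriptors standing in for face features.
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 64))
    y = rng.integers(0, 2, size=200).astype(bool)
    X[y] += 1.5                      # shift target-class samples apart
    W, code = class_specific_regression(X, y)
    d_pos = np.linalg.norm(X[y][0] @ W - code)    # expected small
    d_neg = np.linalg.norm(X[~y][0] @ W - code)   # expected larger
    print(f"target-class distance {d_pos:.2f} vs non-class distance {d_neg:.2f}")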

    Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2: Applications and Future Perspectives

    Part 2 of this monograph builds on the introduction to tensor networks and their operations presented in Part 1. It focuses on tensor network models for super-compressed higher-order representation of data/parameters and related cost functions, while providing an outline of their applications in machine learning and data analytics. A particular emphasis is on the tensor train (TT) and Hierarchical Tucker (HT) decompositions, and their physically meaningful interpretations which reflect the scalability of the tensor network approach. Through a graphical approach, we also elucidate how, by virtue of the underlying low-rank tensor approximations and sophisticated contractions of core tensors, tensor networks have the ability to perform distributed computations on otherwise prohibitively large volumes of data/parameters, thereby alleviating or even eliminating the curse of dimensionality. The usefulness of this concept is illustrated over a number of applied areas, including generalized regression and classification (support tensor machines, canonical correlation analysis, higher order partial least squares), generalized eigenvalue decomposition, Riemannian optimization, and the optimization of deep neural networks. Part 1 and Part 2 of this work can be used either as stand-alone texts or as a conjoint comprehensive review of the exciting field of low-rank tensor networks and tensor decompositions.
    Comment: 232 pages
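    As a pointer to what a tensor train representation looks like in practice, the short sketch below implements the standard TT-SVD idea: sweeping over the modes with truncated SVDs to produce a chain of 3-way cores, then contracting the cores back to check the reconstruction. The rank cap, the toy rank-1 tensor, and the reconstruction check are illustrative assumptions, not code from the monograph.

    # Illustrative TT-SVD sketch: sequential truncated SVDs produce TT cores
    # G_k of shape (r_{k-1}, n_k, r_k), with r_0 = r_d = 1.
    import numpy as np

    def tt_svd(tensor, max_rank):
        """Decompose `tensor` into a list of 3-way TT cores."""
        dims = tensor.shape
        cores, rank = [], 1
        mat = tensor.reshape(rank * dims[0], -1)
        for k in range(len(dims) - 1):
            U, S, Vt = np.linalg.svd(mat, full_matrices=False)
            r_new = min(max_rank, len(S))
            cores.append(U[:, :r_new].reshape(rank, dims[k], r_new))
            # Carry the remaining factor to the next mode and repeat.
            mat = (S[:r_new, None] * Vt[:r_new]).reshape(r_new * dims[k + 1], -1)
            rank = r_new
        cores.append(mat.reshape(rank, dims[-1], 1))
        return cores

    def tt_reconstruct(cores):
        """Contract the TT cores back into a full tensor (for checking)."""
        full = cores[0]
        for core in cores[1:]:
            full = np.tensordot(full, core, axes=([-1], [0]))
        return full[0, ..., 0]

    # Toy usage: a rank-1 4-way tensor is compressed into TT cores and
    # reconstructed with near-zero error.
    rng = np.random.default_rng(0)
    vecs = [rng.standard_normal(n) for n in (4, 5, 6, 3)]
    T = np.einsum('i,j,k,l->ijkl', *vecs)
    cores = tt_svd(T, max_rank=2)
    err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
    print("TT core shapes:", [c.shape for c in cores])
    print("relative reconstruction error:", err)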