
    The χ²-divergence and Mixing times of quantum Markov processes

    We introduce quantum versions of the χ²-divergence, provide a detailed analysis of their properties, and apply them in the investigation of mixing times of quantum Markov processes. An approach similar to the one presented in [1-3] for classical Markov chains is taken to bound the trace distance from the steady state of a quantum process. A strict spectral bound on the convergence rate can be given for time-discrete as well as time-continuous quantum Markov processes. Furthermore, the contractive behavior of the χ²-divergence under the action of a completely positive map is investigated and contrasted with the contraction of the trace norm. In this context we analyse different versions of quantum detailed balance and, finally, give a geometric conductance bound on the convergence rate for unital quantum Markov processes.
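    As a concrete illustration (an assumed sketch, not code or a definition taken from the paper), the Python snippet below computes one commonly used non-commutative member of the χ² family, χ²(ρ, σ) = tr[(ρ − σ) σ^{-1/2} (ρ − σ) σ^{-1/2}], and numerically checks the classical-style bound ‖ρ − σ‖₁ ≤ √χ²(ρ, σ) on random full-rank states. The function names and the choice of this particular divergence are illustrative assumptions.

    import numpy as np
    from scipy.linalg import sqrtm

    def chi2_divergence(rho, sigma):
        # One member of the quantum chi^2 family (illustrative choice):
        # chi^2(rho, sigma) = tr[(rho - sigma) sigma^{-1/2} (rho - sigma) sigma^{-1/2}].
        sigma_inv_sqrt = np.linalg.inv(sqrtm(sigma))
        delta = rho - sigma
        return np.real(np.trace(delta @ sigma_inv_sqrt @ delta @ sigma_inv_sqrt))

    def trace_norm(a):
        # ||a||_1 = sum of absolute eigenvalues for a Hermitian matrix.
        return np.sum(np.abs(np.linalg.eigvalsh(a)))

    def random_density_matrix(d, rng):
        # Full-rank random state, so sigma^{-1/2} is well defined.
        g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
        rho = g @ g.conj().T + 1e-3 * np.eye(d)
        return rho / np.real(np.trace(rho))

    rng = np.random.default_rng(0)
    rho, sigma = random_density_matrix(4, rng), random_density_matrix(4, rng)
    print(trace_norm(rho - sigma) <= np.sqrt(chi2_divergence(rho, sigma)))  # expected: True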

    Meta learning of bounds on the Bayes classifier error

    Meta learning uses information from base learners (e.g. classifiers or estimators) as well as information about the learning problem to improve upon the performance of a single base learner. For example, the Bayes error rate of a given feature space, if known, can be used to aid in choosing a classifier, as well as in feature selection and model selection for the base classifiers and the meta classifier. Recent work in the field of f-divergence functional estimation has led to the development of simple and rapidly converging estimators that can be used to estimate various bounds on the Bayes error. We estimate multiple bounds on the Bayes error using an estimator that applies meta learning to slowly converging plug-in estimators to obtain the parametric convergence rate. We compare the estimated bounds empirically on simulated data and then estimate the tighter bounds on features extracted from an image patch analysis of sunspot continuum and magnetogram images. Comment: 6 pages, 3 figures, to appear in the proceedings of the 2015 IEEE Signal Processing and SP Education Workshop.
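    For orientation on the kind of bounds being estimated (this is not the paper's ensemble estimator), the sketch below evaluates the classical Bhattacharyya-coefficient bounds on the Bayes error for two equal-prior one-dimensional Gaussians, a case where both the divergence and the true Bayes error have closed forms. The parameter values are arbitrary assumptions.

    import numpy as np
    from scipy.stats import norm

    mu0, mu1, sigma = 0.0, 2.0, 1.0      # class-conditional N(mu_i, sigma^2), equal priors

    # Bhattacharyya coefficient BC = exp(-D_B); for equal-variance Gaussians
    # D_B = (mu1 - mu0)^2 / (8 sigma^2).
    bc = np.exp(-((mu1 - mu0) ** 2) / (8 * sigma ** 2))

    # True Bayes error: decide by the midpoint threshold, so
    # P_e = P(x > (mu0 + mu1) / 2 | class 0).
    bayes_error = norm.sf((mu1 - mu0) / (2 * sigma))

    lower = 0.5 * (1.0 - np.sqrt(1.0 - bc ** 2))   # Bhattacharyya-type lower bound
    upper = 0.5 * bc                               # Bhattacharyya upper bound
    print(f"{lower:.4f} <= {bayes_error:.4f} <= {upper:.4f}")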

    Approximation Algorithms for Bregman Co-clustering and Tensor Clustering

    In the past few years, powerful generalizations of the Euclidean k-means problem have been made, such as Bregman clustering [7], co-clustering (i.e., simultaneous clustering of rows and columns of an input matrix) [9,18], and tensor clustering [8,34]. Like k-means, these more general problems also suffer from the NP-hardness of the associated optimization. Researchers have developed approximation algorithms of varying degrees of sophistication for k-means, k-medians, and more recently also for Bregman clustering [2]. However, there seem to be no approximation algorithms for Bregman co- and tensor clustering. In this paper we derive the first (to our knowledge) guaranteed methods for these increasingly important clustering settings. Going beyond Bregman divergences, we also prove an approximation factor for tensor clustering with arbitrary separable metrics. Through extensive experiments we evaluate the characteristics of our method, and show that it also has practical impact. Comment: 18 pages; improved metric case.
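    As background on the setting (a generic Lloyd-style iteration, not the approximation algorithm derived in the paper), the sketch below runs Bregman hard clustering with the generalized KL (I-) divergence on positive vectors; the divergence choice and data are illustrative assumptions. The step it relies on is the standard fact that the arithmetic mean is the optimal centroid for any Bregman divergence.

    import numpy as np

    def i_divergence(x, c, eps=1e-12):
        # Generalized I-divergence, a Bregman divergence for phi(x) = sum_j x_j log x_j:
        # d(x, c) = sum_j [ x_j log(x_j / c_j) - x_j + c_j ].
        x, c = x + eps, c + eps
        return np.sum(x * np.log(x / c) - x + c, axis=-1)

    def bregman_kmeans(X, k, n_iter=50, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)].copy()
        for _ in range(n_iter):
            # Assignment step: nearest center in the Bregman divergence.
            dists = np.stack([i_divergence(X, c) for c in centers], axis=1)
            labels = dists.argmin(axis=1)
            # Update step: the arithmetic mean minimizes the average Bregman
            # divergence to the cluster's points, whatever the divergence.
            for j in range(k):
                if np.any(labels == j):
                    centers[j] = X[labels == j].mean(axis=0)
        return labels, centers

    X = np.abs(np.random.default_rng(1).standard_normal((200, 5))) + 0.1
    labels, centers = bregman_kmeans(X, k=3)
    print(np.bincount(labels, minlength=3))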