575 research outputs found

    A Survey on Graph Kernels

    Graph kernels have become an established and widely used technique for solving classification tasks on graphs. This survey gives a comprehensive overview of techniques for kernel-based graph classification developed in the past 15 years. We describe and categorize graph kernels based on properties inherent to their design, such as the nature of their extracted graph features, their method of computation, and their applicability to problems in practice. In an extensive experimental evaluation, we study the classification accuracy of a large suite of graph kernels on established benchmarks as well as new datasets. We compare the performance of popular kernels with several baseline methods and study the effect of applying a Gaussian RBF kernel to the metric induced by a graph kernel. In doing so, we find that simple baselines become competitive after this transformation on some datasets. Moreover, we study the extent to which existing graph kernels agree in their predictions (and prediction errors) and obtain a data-driven categorization of kernels as a result. Finally, based on our experimental results, we derive a practitioner's guide to kernel-based graph classification.
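    The RBF-on-induced-metric transformation mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the survey's implementation: it assumes a precomputed graph-kernel Gram matrix, and the function name and `gamma` parameterization are ours.

```python
import numpy as np

def rbf_from_kernel(K, gamma=1.0):
    """Apply a Gaussian RBF kernel to the metric induced by a graph kernel.

    K is a precomputed (n x n) graph-kernel Gram matrix. The squared
    distance induced by the kernel is d(i, j)^2 = K[i,i] + K[j,j] - 2*K[i,j];
    the transformed kernel is exp(-gamma * d(i, j)^2).
    """
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative values from rounding
    return np.exp(-gamma * d2)

# toy positive-semidefinite Gram matrix standing in for a graph-kernel matrix
K = np.array([[2.0, 1.0],
              [1.0, 2.0]])
K_rbf = rbf_from_kernel(K, gamma=0.5)
```

    The resulting matrix is again a valid kernel matrix (the diagonal is 1 by construction), so it can be passed directly to a kernel classifier such as an SVM with a precomputed kernel.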

    A Metropolis-class sampler for targets with non-convex support

    We aim to improve upon the exploration of the general-purpose random walk Metropolis algorithm when the target has non-convex support A ⊂ ℝ^d, by reusing proposals in A^c which would otherwise be rejected. The algorithm is Metropolis-class, and under standard conditions the chain satisfies a strong law of large numbers and a central limit theorem. Theoretical and numerical evidence of improved performance relative to random walk Metropolis is provided. Issues of implementation are discussed, and numerical examples, including applications to global optimisation and rare event sampling, are presented.
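    For context, the baseline the paper improves on is a plain random walk Metropolis sampler, which simply rejects every proposal landing in A^c. A minimal sketch under illustrative assumptions (the one-dimensional two-interval support, the truncated-normal target, and all names are ours; the paper's reuse of rejected proposals is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(0)

def in_support(x):
    # illustrative non-convex support A: a union of two disjoint intervals
    return (-3.0 < x < -1.0) or (1.0 < x < 3.0)

def log_target(x):
    # unnormalised log-density: standard normal restricted to A
    return -0.5 * x * x if in_support(x) else -np.inf

def random_walk_metropolis(x0, n_steps, step=1.0):
    x = x0
    chain = [x]
    for _ in range(n_steps):
        y = x + step * rng.normal()
        # a proposal in A^c has log-density -inf and is always rejected;
        # the paper's method would instead reuse such proposals
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return np.array(chain)

chain = random_walk_metropolis(x0=2.0, n_steps=5000)
```

    With a non-convex support like this, the plain sampler can only cross between the two intervals via a single accepted jump, which is exactly the exploration problem the proposal-reuse scheme targets.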

    Comparison inequalities and fastest-mixing Markov chains

    We introduce a new partial order on the class of stochastically monotone Markov kernels having a given stationary distribution π on a given finite partially ordered state space 𝒳. When K ⪯ L in this partial order we say that K and L satisfy a comparison inequality. We establish that if K_1, …, K_t and L_1, …, L_t are reversible and K_s ⪯ L_s for s = 1, …, t, then K_1 ⋯ K_t ⪯ L_1 ⋯ L_t. In particular, in the time-homogeneous case we have K^t ⪯ L^t for every t if K and L are reversible and K ⪯ L, and using this we show that (for suitable common initial distributions) the Markov chain Y with kernel K mixes faster than the chain Z with kernel L, in the strong sense that at every time t the discrepancy between the law of Y_t and π, measured by total variation distance, separation, or L²-distance, is smaller than that between the law of Z_t and π. Using comparison inequalities together with specialized arguments to remove the stochastic monotonicity restriction, we answer a question of Persi Diaconis by showing that, among all symmetric birth-and-death kernels on the path 𝒳 = {0, …, n}, the one (we call it the uniform chain) that produces fastest convergence from initial state 0 to the uniform distribution has transition probability 1/2 in each direction along each edge of the path, with holding probability 1/2 at each endpoint.
    Comment: Published in the Annals of Applied Probability (http://dx.doi.org/10.1214/12-AAP886) by the Institute of Mathematical Statistics (http://www.imstat.org)
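    The "uniform chain" described in the abstract is simple to construct and check numerically. A sketch following the abstract's definition (the function names and the total-variation check are ours, not from the paper):

```python
import numpy as np

def uniform_chain(n):
    """Transition matrix of the 'uniform chain' on the path {0, ..., n}:
    probability 1/2 in each direction along each edge of the path,
    with holding probability 1/2 at each endpoint."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i > 0:
            P[i, i - 1] = 0.5
        else:
            P[i, i] += 0.5  # hold at the left endpoint
        if i < n:
            P[i, i + 1] = 0.5
        else:
            P[i, i] += 0.5  # hold at the right endpoint
    return P

def tv_to_uniform(P, t, start=0):
    """Total variation distance between the time-t law (started at `start`)
    and the uniform distribution."""
    n = P.shape[0]
    mu = np.zeros(n)
    mu[start] = 1.0
    mu = mu @ np.linalg.matrix_power(P, t)
    return 0.5 * np.abs(mu - np.ones(n) / n).sum()

P = uniform_chain(5)
```

    The matrix is doubly stochastic (each column also sums to 1), which is why the uniform distribution is stationary, and `tv_to_uniform(P, t)` decreases toward 0 as t grows.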