A Survey on Graph Kernels
Graph kernels have become an established and widely-used technique for
solving classification tasks on graphs. This survey gives a comprehensive
overview of techniques for kernel-based graph classification developed in the
past 15 years. We describe and categorize graph kernels based on properties
inherent to their design, such as the nature of their extracted graph features,
their method of computation and their applicability to problems in practice. In
an extensive experimental evaluation, we study the classification accuracy of a
large suite of graph kernels on established benchmarks as well as new datasets.
We compare the performance of popular kernels with several baseline methods and
study the effect of applying a Gaussian RBF kernel to the metric induced by a
graph kernel. In doing so, we find that simple baselines become competitive
after this transformation on some datasets. Moreover, we study the extent to
which existing graph kernels agree in their predictions (and prediction errors)
and obtain a data-driven categorization of kernels as result. Finally, based on
our experimental results, we derive a practitioner's guide to kernel-based
graph classification
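The transformation studied in the survey, applying a Gaussian RBF kernel to the metric induced by a graph kernel, can be sketched as follows. This is a minimal illustration, not the authors' code: any kernel k induces the squared distance d^2(x, y) = k(x, x) + k(y, y) - 2 k(x, y), to which the standard Gaussian RBF exp(-d^2 / (2 sigma^2)) is then applied. The function name and the toy kernel matrix are invented for the example.

```python
import numpy as np

def rbf_from_graph_kernel(K, sigma=1.0):
    """Apply a Gaussian RBF kernel to the metric induced by a graph kernel.

    K     : (n, n) precomputed graph kernel matrix (symmetric, PSD).
    sigma : RBF bandwidth.

    Uses the squared distance induced by any kernel k:
        d^2(x, y) = k(x, x) + k(y, y) - 2 k(x, y).
    """
    diag = np.diag(K)
    d2 = diag[:, None] + diag[None, :] - 2.0 * K  # induced squared metric
    d2 = np.maximum(d2, 0.0)                      # guard against tiny negatives
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Toy precomputed graph kernel matrix for three graphs (made up for illustration)
K = np.array([[2.0, 1.0, 0.5],
              [1.0, 2.0, 1.0],
              [0.5, 1.0, 2.0]])
K_rbf = rbf_from_graph_kernel(K, sigma=1.0)
```

The resulting matrix can be fed to any kernel classifier that accepts a precomputed Gram matrix; its diagonal is identically 1 because each graph is at distance zero from itself.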
A Metropolis-class sampler for targets with non-convex support
We aim to improve upon the exploration of the general-purpose random walk Metropolis algorithm when the target has non-convex support A ⊂ R^d, by reusing proposals in A^c which would otherwise be rejected. The algorithm is Metropolis-class, and under standard conditions the chain satisfies a strong law of large numbers and a central limit theorem. Theoretical and numerical evidence of improved performance relative to random walk Metropolis is provided. Issues of implementation are discussed, and numerical examples, including applications to global optimisation and rare event sampling, are presented.
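For context, the baseline the paper improves on is plain random walk Metropolis, where proposals landing outside the support are simply discarded. The sketch below implements only this baseline on a non-convex support (two disjoint intervals), not the paper's proposal-reuse mechanism; the function name and target are illustrative assumptions.

```python
import numpy as np

def rw_metropolis(log_target, x0, n_steps, step=1.0, seed=None):
    """Plain random walk Metropolis with Gaussian proposals.

    Proposals falling outside the support (log_target = -inf) are
    rejected outright; the paper's contribution is to reuse such
    proposals rather than waste them.
    """
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.standard_normal()          # symmetric proposal
        lq = log_target(y)
        if lq > -np.inf and np.log(rng.uniform()) < lq - lp:
            x, lp = y, lq                             # accept
        chain[i] = x                                  # otherwise hold
    return chain

# Target: uniform on the non-convex support A = [-2, -1] U [1, 2]
def log_target(x):
    return 0.0 if 1.0 <= abs(x) <= 2.0 else -np.inf

chain = rw_metropolis(log_target, x0=1.5, n_steps=20000, step=0.8, seed=0)
```

With a non-convex support like this, the walker must jump the gap between the two components in a single proposal, which is exactly the exploration difficulty the paper targets.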
Comparison inequalities and fastest-mixing Markov chains
We introduce a new partial order on the class of stochastically monotone
Markov kernels having a given stationary distribution π on a given finite
partially ordered state space. When K_1 ⪯ K_2 in this partial order we say
that K_1 and K_2 satisfy a comparison inequality. We establish that if
K_1, ..., K_t and K'_1, ..., K'_t are reversible and K_s ⪯ K'_s for each s,
then K_1 ⋯ K_t ⪯ K'_1 ⋯ K'_t. In particular, in the time-homogeneous case we
have K^t ⪯ (K')^t for every t if K and K' are reversible and K ⪯ K', and
using this we show that (for suitable common initial distributions) the
Markov chain with kernel K mixes faster than the chain with kernel K', in
the strong sense that at every time t the discrepancy, measured by total
variation distance or separation or L^2-distance, between the law of the
K-chain at time t and π is smaller than that between the law of the K'-chain
at time t and π. Using comparison inequalities together with specialized
arguments to remove the stochastic monotonicity restriction, we answer a
question of Persi Diaconis by showing that, among all symmetric
birth-and-death kernels on the path {0, 1, ..., n}, the one (we call it the
uniform chain) that produces fastest convergence from initial state 0 to the
uniform distribution has transition probability 1/2 in each direction along
each edge of the path, with holding probability 1/2 at each
endpoint. Comment: Published at http://dx.doi.org/10.1214/12-AAP886 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org)
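The extremal chain answering Diaconis's question is concrete enough to write down directly. Below is a sketch (function name invented) that builds the uniform chain's transition matrix on the path {0, ..., n}, with probability 1/2 in each direction along each edge and holding probability 1/2 at the endpoints, and tracks the total variation discrepancy from the uniform distribution when started at state 0.

```python
import numpy as np

def uniform_chain(n):
    """Transition matrix of the 'uniform chain' on the path {0, ..., n}:
    probability 1/2 in each direction along each edge, holding
    probability 1/2 at each endpoint."""
    P = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i > 0:
            P[i, i - 1] = 0.5
        if i < n:
            P[i, i + 1] = 0.5
    P[0, 0] = 0.5   # holding at left endpoint
    P[n, n] = 0.5   # holding at right endpoint
    return P

n = 5
P = uniform_chain(n)
pi = np.full(n + 1, 1.0 / (n + 1))   # uniform stationary distribution
mu = np.zeros(n + 1)
mu[0] = 1.0                          # start at state 0
for _ in range(200):
    mu = mu @ P                      # evolve the law of the chain
tv = 0.5 * np.abs(mu - pi).sum()     # total variation discrepancy from pi
```

The matrix is symmetric and doubly stochastic, so the uniform distribution is stationary, and the endpoint holding probabilities make the chain aperiodic; after 200 steps the total variation distance is numerically negligible.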