A Framework for Improving the Characterization Scope of Stein's Method on Riemannian Manifolds
Stein's method has been widely used to achieve distributional approximations
for probability distributions defined in Euclidean spaces. Recently, techniques
to extend Stein's method to manifold-valued random variables with distributions
defined on the respective manifolds have been reported. However, several of
these methods impose strong regularity conditions on the distributions as well
as the manifolds and/or consider very special cases. In this paper, we present
a novel framework for Stein's method on Riemannian manifolds using the
Friedrichs extension technique applied to self-adjoint unbounded operators.
This framework is applicable to a variety of conventional and unconventional
situations, including, but not limited to, intrinsically defined non-smooth
distributions, truncated distributions on Riemannian manifolds, and
distributions on incomplete Riemannian manifolds. Moreover, the stronger the regularity
conditions imposed on the manifolds or target distributions, the stronger will
be the characterization ability of our novel Stein pair, which facilitates the
application of Stein's method to problem domains hitherto uncharted. We present
several (non-numeric) examples illustrating the applicability of the presented
theory.
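
For background, on a complete Riemannian manifold (M, g) with a smooth, everywhere-positive target density p, a commonly used Stein pair consists of a Langevin-type operator together with a suitable class of test functions; the Friedrichs-extension framework in the paper is aimed at relaxing exactly these smoothness and completeness requirements. The display below is illustrative background only, not the paper's operator:

\[
  \mathcal{A}_p f \;=\; \Delta_M f \,+\, \langle \operatorname{grad}\log p,\; \operatorname{grad} f \rangle_g,
  \qquad
  \mathbb{E}_{X\sim p}\bigl[\mathcal{A}_p f(X)\bigr] = 0 \quad \text{for all admissible test functions } f.
\]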
Horocycle Decision Boundaries for Large Margin Classification in Hyperbolic Space
Hyperbolic spaces have recently become popular for representing hierarchically
organized data, and several classification algorithms for data in these spaces
have been proposed in the literature. These algorithms mainly use either
hyperplanes or geodesics as decision boundaries in a large-margin classifier
setting, leading to a non-convex optimization problem. In
this paper, we propose a novel large-margin classifier based on horocycle
(horosphere) decision boundaries, which leads to a geodesically convex
optimization problem that can be solved with any Riemannian gradient descent
technique, guaranteeing a globally optimal solution. We present several
experiments demonstrating the performance of our classifier.
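
A minimal sketch of a horosphere decision function is given below, assuming the Poincaré ball model with horospheres parameterized as level sets of the Busemann function of an ideal boundary point omega; the paper's exact parameterization and large-margin objective may differ, and the optimization over omega and the offset is not shown.

import numpy as np

def busemann(x, omega):
    # Busemann function of the ideal point omega (||omega|| = 1) in the Poincare
    # ball; its level sets are horospheres "centered" at omega. Illustrative
    # assumption, not necessarily the paper's exact parameterization.
    return np.log(np.sum((omega - x) ** 2, axis=-1) / (1.0 - np.sum(x ** 2, axis=-1)))

def horosphere_decision(x, omega, offset):
    # Signed decision value: the horosphere {B_omega = offset} is the boundary.
    return busemann(x, omega) - offset

# Toy usage: label points in the 2D Poincare disk with a fixed horocycle.
rng = np.random.default_rng(0)
X = rng.uniform(-0.5, 0.5, size=(10, 2))   # points strictly inside the unit disk
omega = np.array([1.0, 0.0])               # ideal point on the boundary circle
labels = np.sign(horosphere_decision(X, omega, offset=0.0))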
Maximizing all margins: Pushing face recognition with Kernel Plurality
We present two theses in this paper. First, the performance of most existing face recognition algorithms improves if, instead of the whole image, smaller patches are individually classified, followed by label aggregation using voting. Second, weighted plurality voting outperforms other popular voting methods if the weights are set such that they maximize the victory margin for the winner with respect to each of the losers. Moreover, this can be done while taking higher-order relationships among patches into account using kernels. We call this scheme Kernel Plurality. We verify our proposals with detailed experimental results and show that our framework with Kernel Plurality improves the performance of various face recognition algorithms beyond what has been previously reported in the literature. Furthermore, on five different benchmark datasets (Yale A, CMU PIE, MERL Dome, Extended Yale B, and Multi-PIE), we show that Kernel Plurality in conjunction with recent face recognition algorithms can provide state-of-the-art results in terms of face recognition rates.
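
As an illustration of the aggregation step only, the sketch below performs weighted plurality voting over per-patch predictions and reports the winner's victory margin over the runner-up; it assumes the patch weights are already given, whereas Kernel Plurality additionally chooses these weights (using kernels over patches) to maximize that margin, which is not shown here.

import numpy as np

def weighted_plurality(patch_labels, weights, num_classes):
    # Aggregate per-patch class predictions into one image-level label by
    # weighted voting; also return the winner's margin over the runner-up.
    # (Sketch with fixed weights; Kernel Plurality optimizes the weights.)
    scores = np.zeros(num_classes)
    np.add.at(scores, patch_labels, weights)      # weighted vote count per class
    order = np.argsort(scores)[::-1]
    winner, runner_up = order[0], order[1]
    return winner, scores[winner] - scores[runner_up]

# Toy usage: six patches voting among three subjects with uniform weights.
patch_labels = np.array([0, 2, 0, 1, 0, 2])
winner, margin = weighted_plurality(patch_labels, np.ones(6), num_classes=3)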