Optimal Recovery of Local Truth
Probability mass curves the data space with horizons. Let f be a multivariate
probability density function with continuous second order partial derivatives.
Consider the problem of estimating the true value of f(z) > 0 at a single point
z, from n independent observations. It is shown that the fastest possible
estimators (such as the k-nearest-neighbor and kernel estimators) have minimum asymptotic
mean square errors when the space of observations is thought of as conformally
curved. The optimal metric is shown to be generated by the Hessian of f in the
regions where the Hessian is definite. Thus, the peaks and valleys of f are
surrounded by singular horizons when the Hessian changes signature from
Riemannian to pseudo-Riemannian. Adaptive estimators based on the optimal
variable metric show considerable theoretical and practical improvements over
traditional methods. The formulas simplify dramatically when the dimension of
the data space is 4. The similarities with General Relativity are striking but
possibly illusory at this point. However, these results suggest that
nonparametric density estimation may have something new to say about current
physical theory.
Comment: To appear in Proceedings of Maximum Entropy and Bayesian Methods 1999. Check also: http://omega.albany.edu:8008
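The k-nearest-neighbor estimator discussed above can be sketched in a few lines. This is the standard Euclidean-metric version, not the adaptive variable-metric estimator the abstract proposes; the function name and the choice of sample are illustrative:

```python
import numpy as np
from math import gamma, pi

def knn_density(z, data, k):
    """k-nearest-neighbor density estimate at a point z:
    f_hat(z) = k / (n * V_d * r_k^d), where r_k is the distance from z
    to its k-th nearest neighbor and V_d is the volume of the unit d-ball."""
    data = np.atleast_2d(data)
    n, d = data.shape
    dists = np.sort(np.linalg.norm(data - z, axis=1))
    r_k = dists[k - 1]
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)
    return k / (n * v_d * r_k ** d)

# estimate a standard normal density at the origin (true value ~ 0.3989)
rng = np.random.default_rng(0)
sample = rng.standard_normal((5000, 1))
est = knn_density(np.array([0.0]), sample, k=50)
```

The adaptive estimators in the paper would replace the Euclidean norm here with a metric built from the Hessian of f, shrinking neighborhoods along directions of high curvature.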
Optimal intrinsic descriptors for non-rigid shape analysis
We propose novel point descriptors for 3D shapes with the potential to match two shapes representing the same object undergoing natural deformations. These deformations are more general than the often-assumed isometries, and we use labeled training data to learn optimal descriptors for such cases. Furthermore, instead of explicitly defining the descriptor, we introduce new Mercer kernels, for which we formally show that their corresponding feature space mapping is a generalization of either the Heat Kernel Signature or the Wave Kernel Signature. That is, the proposed descriptors are guaranteed to be at least as precise as any Heat Kernel Signature or Wave Kernel Signature of any parameterisation. In experiments, we show that our implicitly defined, infinite-dimensional descriptors can better deal with non-isometric deformations than state-of-the-art methods.
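For reference, the Heat Kernel Signature that these kernels generalize can be computed directly from a Laplacian eigendecomposition. A minimal sketch, using a toy graph Laplacian as a stand-in for a mesh Laplace-Beltrami operator:

```python
import numpy as np

def heat_kernel_signature(L, times):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, computed from the
    eigendecomposition of a symmetric (graph) Laplacian L.
    Returns an array with one row per point and one column per time."""
    lam, phi = np.linalg.eigh(L)                 # columns of phi are eigenvectors
    return (phi ** 2) @ np.exp(-np.outer(lam, times))

# toy example: Laplacian of a path graph on 4 nodes
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
sig = heat_kernel_signature(L, times=np.array([0.1, 1.0, 10.0]))
```

Because the path graph is symmetric under reversal, the endpoints (and likewise the two interior nodes) receive identical signatures, which is exactly the intrinsic, deformation-insensitive behavior such descriptors are built for.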
A relaxed approach for curve matching with elastic metrics
In this paper we study a class of Riemannian metrics on the space of
unparametrized curves and develop a method to compute geodesics with given
boundary conditions. It extends previous works on this topic in several
important ways. The model and resulting matching algorithm integrate within one
common setting both the family of H^2-metrics with constant coefficients and
scale-invariant H^2-metrics on both open and closed immersed curves. These
families include as particular cases the class of first-order elastic metrics.
An essential difference with prior approaches is the way that boundary
constraints are dealt with. By leveraging varifold-based similarity metrics we
propose a relaxed variational formulation for the matching problem that avoids
the necessity of optimizing over the reparametrization group. Furthermore, we
show that we can also quotient out finite-dimensional similarity groups such as
translation, rotation and scaling groups. The different properties and
advantages are illustrated through numerical examples in which we also provide
a comparison with related diffeomorphic methods used in shape registration.
Comment: 27 pages
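The varifold-based relaxation can be illustrated with a discrete data-attachment term between two polylines. This is a minimal sketch with one common kernel choice (Gaussian in segment position, squared cosine in orientation); the exact kernels and weights used in the paper may differ:

```python
import numpy as np

def varifold_distance(c1, c2, sigma=0.5):
    """Squared varifold discrepancy between two polylines (arrays of 2D points,
    nondegenerate segments assumed). Each curve is represented by its segment
    midpoints and tangent vectors; the kernel is Gaussian in midpoint position
    and squared-cosine in tangent direction, weighted by segment lengths."""
    def features(c):
        mid = 0.5 * (c[1:] + c[:-1])   # segment midpoints
        tan = c[1:] - c[:-1]           # segment tangent vectors
        return mid, tan

    def inner(fa, fb):
        (ma, ta), (mb, tb) = fa, fb
        la = np.linalg.norm(ta, axis=1)
        lb = np.linalg.norm(tb, axis=1)
        d2 = ((ma[:, None, :] - mb[None, :, :]) ** 2).sum(-1)
        pos = np.exp(-d2 / sigma ** 2)                       # position kernel
        ang = (ta @ tb.T / (la[:, None] * lb[None, :])) ** 2  # orientation kernel
        return (pos * ang * la[:, None] * lb[None, :]).sum()

    f1, f2 = features(c1), features(c2)
    return inner(f1, f1) - 2 * inner(f1, f2) + inner(f2, f2)

c = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
```

The squared cosine makes the term insensitive to curve orientation, and because the representation depends only on the segments, not their ordering or parametrization, matching against this term avoids optimizing over the reparametrization group, which is the point of the relaxation.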
Diffeomorphic Learning
We introduce in this paper a learning paradigm in which the training data is
transformed by a diffeomorphic transformation before prediction. The learning
algorithm minimizes a cost function evaluating the prediction error on the
training set penalized by the distance between the diffeomorphism and the
identity. The approach borrows ideas from shape analysis where diffeomorphisms
are estimated for shape and image alignment, and brings them into a previously
unexplored setting, estimating, in particular, diffeomorphisms in much higher
dimensions. After introducing the concept and describing a learning algorithm,
we present diverse applications, mostly with synthetic examples, demonstrating
the potential of the approach, as well as some insight on how it can be
improved.
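The paradigm can be caricatured in one dimension. This is an illustrative sketch only, not the authors' diffeomorphism-group algorithm: the sine parameterization of the warp, the penalty weight, and the grid search are all choices made up for this toy example.

```python
import numpy as np

def warp(x, theta):
    """phi_theta(x) = x + theta*sin(pi*x): a diffeomorphism of [-1, 1]
    whenever |theta| < 1/pi (derivative 1 + theta*pi*cos(pi*x) stays positive)."""
    return x + theta * np.sin(np.pi * x)

def fit_and_cost(x, y, theta, lam=0.1):
    """Fit a least-squares line to the warped inputs; the total cost is the
    prediction error plus a penalty on theta, i.e. on the distance of
    phi_theta from the identity map."""
    xw = warp(x, theta)
    A = np.column_stack([xw, np.ones_like(xw)])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    err = ((A @ w - y) ** 2).mean()
    return err + lam * theta ** 2

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 300)
y = np.tanh(4 * x) + 0.05 * rng.standard_normal(300)  # nonlinear toy target

# crude grid search over the warp parameter (includes theta = 0, the identity)
thetas = np.linspace(-0.25, 0.25, 21)
costs = [fit_and_cost(x, y, t) for t in thetas]
best = thetas[int(np.argmin(costs))]
```

The structure mirrors the abstract's cost function: a prediction error evaluated on diffeomorphically transformed data, penalized by the deviation of the transformation from the identity. The paper replaces this one-parameter warp with diffeomorphisms estimated in the high-dimensional input space.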