Numerical Approaches for Linear Left-invariant Diffusions on SE(2), their Comparison to Exact Solutions, and their Applications in Retinal Imaging
Left-invariant PDE-evolutions on the roto-translation group (and
their resolvent equations) have been widely studied in the fields of cortical
modeling and image analysis. They include the hypo-elliptic diffusion (for
contour enhancement) proposed by Citti & Sarti and by Petitot, and the
direction process (for contour completion) proposed by Mumford. This paper
presents a thorough study and comparison of the many numerical approaches,
which, remarkably, is missing in the literature. Existing numerical approaches
can be classified into 3 categories: finite difference methods, Fourier-based
methods (equivalent to SE(2)-Fourier methods), and stochastic methods (Monte
Carlo simulations). There are also 3 types of exact solutions to the
PDE-evolutions that were derived explicitly (in the spatial Fourier domain) in
previous works by Duits and van Almsick in 2005. Here we provide an overview of
these 3 types of exact solutions and explain how they relate to each of the 3
numerical approaches. We compute relative errors of all numerical approaches to
the exact solutions; the Fourier-based methods show the best performance,
with the smallest relative errors. We also provide an improvement of Mathematica
algorithms for evaluating Mathieu-functions, crucial in implementations of the
exact solutions. Furthermore, we include an asymptotic analysis of the
singularities within the kernels and we propose a probabilistic extension of
underlying stochastic processes that overcomes the singular behavior in the
origin of time-integrated kernels. Finally, we show retinal imaging
applications of combining left-invariant PDE-evolutions with invertible
orientation scores.

Comment: A final and corrected version of the manuscript is published in
Numerical Mathematics: Theory, Methods and Applications (NM-TMA), vol. (9),
p.1-50, 201
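The stochastic (Monte Carlo) category above is easy to illustrate. The sketch below samples Mumford's direction process: a particle moves with unit speed along its current orientation while the orientation undergoes Brownian motion, and histogramming the endpoints of many such paths approximates the contour-completion kernel. This is a minimal illustration, not the implementation compared in the paper; the step size, diffusion parameter, travel time, and path count are arbitrary choices.

```python
import math
import random

def sample_direction_process(t_end=2.0, dt=0.01, sigma=0.5, n_paths=2000, seed=0):
    """Monte Carlo sampling of Mumford's direction process on SE(2):
    move with unit speed along the current orientation theta, while
    theta itself performs Brownian motion with strength sigma."""
    rng = random.Random(seed)
    n_steps = int(t_end / dt)
    endpoints = []
    for _ in range(n_paths):
        x, y, theta = 0.0, 0.0, 0.0
        for _ in range(n_steps):
            x += math.cos(theta) * dt
            y += math.sin(theta) * dt
            theta += sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        endpoints.append((x, y, theta))
    return endpoints

pts = sample_direction_process()
mean_x = sum(p[0] for p in pts) / len(pts)
mean_y = sum(p[1] for p in pts) / len(pts)
# The cloud of endpoints drifts forward along the initial orientation
# (mean_x clearly positive) and is symmetric about the x-axis
# (mean_y close to zero); binning it approximates the kernel.
```

Histogramming `pts` over (x, y, theta) bins yields the time-dependent kernel; averaging such histograms over exponentially distributed travel times would approximate the time-integrated (resolvent) kernel.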
Time-causal and time-recursive spatio-temporal receptive fields
We present an improved model and theory for time-causal and time-recursive
spatio-temporal receptive fields, based on a combination of Gaussian receptive
fields over the spatial domain and first-order integrators or equivalently
truncated exponential filters coupled in cascade over the temporal domain.
Compared to previous spatio-temporal scale-space formulations in terms of
non-enhancement of local extrema or scale invariance, these receptive fields
are based on different scale-space axiomatics over time by ensuring
non-creation of new local extrema or zero-crossings with increasing temporal
scale. Specifically, extensions are presented concerning (i) parameterizing the
intermediate temporal scale levels, (ii) analysing the resulting temporal
dynamics, (iii) transferring the theory to a discrete implementation, (iv)
computing scale-normalized spatio-temporal derivative expressions for
spatio-temporal feature detection and (v) computational modelling of receptive
fields in the lateral geniculate nucleus (LGN) and the primary visual cortex
(V1) in biological vision.
We show that by distributing the intermediate temporal scale levels according
to a logarithmic distribution, we obtain much faster temporal response
properties (shorter temporal delays) compared to a uniform distribution.
Specifically, these kernels converge very rapidly to a limit kernel possessing
true self-similar scale-invariant properties over temporal scales, thereby
allowing for true scale invariance over variations in the temporal scale,
although the underlying temporal scale-space representation is based on a
discretized temporal scale parameter.
We show how scale-normalized temporal derivatives can be defined for these
time-causal scale-space kernels and how the composed theory can be used for
computing basic types of scale-normalized spatio-temporal derivative
expressions in a computationally efficient manner.

Comment: 39 pages, 12 figures, 5 tables; in Journal of Mathematical Imaging and
Vision, published online Dec 201
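The temporal part of such a model can be sketched compactly. Each first-order integrator (equivalently, truncated exponential filter) is a one-pole recursive filter, and coupling several of them in cascade gives a time-causal, time-recursive smoothing kernel. The sketch below is illustrative rather than the paper's implementation; the geometric choice of time constants mu_k = 2**k is a hypothetical stand-in for a logarithmic distribution of temporal scale levels.

```python
def first_order_integrator(signal, mu):
    """One discrete truncated exponential filter (time constant mu):
    out[n] = out[n-1] + (in[n] - out[n-1]) / (1 + mu)."""
    out, prev = [], 0.0
    for v in signal:
        prev = prev + (v - prev) / (1.0 + mu)
        out.append(prev)
    return out

def temporal_scale_space(signal, mus):
    """Cascade of first-order integrators: time-causal (only past inputs
    are used) and time-recursive (one state value per filter)."""
    for mu in mus:
        signal = first_order_integrator(signal, mu)
    return signal

# Impulse response of a cascade whose time constants grow geometrically
# (an illustrative logarithmic distribution of temporal scale levels).
impulse = [1.0] + [0.0] * 63
response = temporal_scale_space(impulse, [2.0 ** k for k in range(4)])
```

The composed kernel is non-negative, integrates to one up to truncation of the tail, and peaks only after a temporal delay governed by the sum of the time constants.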
Learning with Algebraic Invariances, and the Invariant Kernel Trick
When solving data analysis problems it is important to integrate prior
knowledge and/or structural invariances. This paper contributes a novel
framework for incorporating algebraic invariance structure into kernels. In
particular, we show that algebraic properties such as sign symmetries in data,
phase independence, scaling, etc. can easily be included by essentially
performing the kernel trick twice. We demonstrate the usefulness of our theory
in simulations on selected applications such as sign-invariant spectral
clustering and underdetermined ICA.
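For the simplest of these invariances, sign symmetry, an invariant kernel can be written down directly by averaging a base kernel over all group-transformed arguments (a Haar-integration kernel). The sketch below uses this averaging construction as an illustration; the paper's own "kernel trick twice" framework is more general.

```python
import math

def rbf(x, y, gamma=0.5):
    """Base Gaussian RBF kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def sign_invariant_kernel(x, y, base=rbf):
    """Average the base kernel over the sign group {+1, -1} acting on
    both arguments; the result is invariant to flipping either input."""
    group = (1.0, -1.0)
    return sum(base([g * a for a in x], [h * b for b in y])
               for g in group for h in group) / 4.0

x, y = [1.0, -0.5], [0.3, 2.0]
neg_x = [-a for a in x]
# sign_invariant_kernel(x, y) equals sign_invariant_kernel(neg_x, y).
```

Because the averaged kernel is a sum of positive-definite kernels, it is again positive definite and can be dropped into any kernel method, e.g. spectral clustering, unchanged.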
Support Vector Machines in High Energy Physics
This lecture will introduce the Support Vector algorithms for classification
and regression. They are an application of the so called kernel trick, which
allows the extension of a certain class of linear algorithms to the non-linear
case. The kernel trick will be introduced and in the context of structural risk
minimization, large margin algorithms for classification and regression will be
presented. Current applications in high energy physics will be discussed.

Comment: 11 pages, 12 figures. Part of the proceedings of the Track
'Computational Intelligence for HEP Data Analysis' at iCSC 200
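The kernel trick can be demonstrated with the perceptron, a linear algorithm whose updates depend on the data only through inner products: replacing the dot product by an RBF kernel turns it into a non-linear classifier. A toy sketch (illustrative; the data and parameters are hypothetical, not from the lecture):

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel, implicitly an inner product in feature space."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_perceptron(X, Y, kernel=rbf, epochs=20):
    """Dual perceptron: the decision function is a kernel expansion
    f(x) = sum_i alpha_i y_i k(x_i, x), updated on every mistake."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        mistakes = 0
        for i, (x, y) in enumerate(zip(X, Y)):
            f = sum(a * yj * kernel(xj, x) for a, yj, xj in zip(alpha, Y, X))
            if y * f <= 0:
                alpha[i] += 1.0
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

def predict(alpha, X, Y, x, kernel=rbf):
    f = sum(a * yj * kernel(xj, x) for a, yj, xj in zip(alpha, Y, X))
    return 1 if f > 0 else -1

# A radially separable toy problem: no linear boundary separates the
# inner (+1) from the outer (-1) points, but the RBF kernel does.
X = [[0.1, 0.0], [0.0, 0.2], [-0.1, -0.1], [0.2, 0.1],
     [2.0, 0.0], [0.0, 2.0], [-2.0, 1.0], [1.0, -2.0]]
Y = [1, 1, 1, 1, -1, -1, -1, -1]
alpha = kernel_perceptron(X, Y)
```

Support Vector Machines follow the same dual pattern but choose the expansion coefficients by large-margin structural risk minimization rather than by mistake-driven updates.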
New Exact and Numerical Solutions of the (Convection-)Diffusion Kernels on SE(3)
We consider hypo-elliptic diffusion and convection-diffusion on $\mathbb{R}^3 \rtimes S^2$, the quotient of the Lie group of rigid body motions SE(3) in
which group elements are equivalent if they are equal up to a rotation around
the reference axis. We show that we can derive expressions for the convolution
kernels in terms of eigenfunctions of the PDE, by extending the approach for
the SE(2) case. This goes via application of the Fourier transform of the PDE
in the spatial variables, yielding a second order differential operator. We
show that the eigenfunctions of this operator can be expressed as (generalized)
spheroidal wave functions. The same exact formulas are derived via the Fourier
transform on SE(3). We solve both the evolution itself, as well as the
time-integrated process that corresponds to the resolvent operator.
Furthermore, we have extended a standard numerical procedure from SE(2) to
SE(3) for the computation of the solution kernels that is directly related to
the exact solutions. Finally, we provide a novel analytic approximation of the
kernels that we briefly compare to the exact kernels.

Comment: Revised and restructured
On Invariance, Equivariance, Correlation and Convolution of Spherical Harmonic Representations for Scalar and Vectorial Data
The mathematical representation of data in the Spherical Harmonic (SH)
domain has recently regained interest in the machine learning
community. This technical report gives an in-depth introduction to the
theoretical foundation and practical implementation of SH representations,
summarizing works on rotation invariant and equivariant features, as well as
convolutions and exact correlations of signals on spheres. In extension, these
methods are then generalized from scalar SH representations to Vectorial
Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.

Comment: 106 pages, tech report
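One rotation-invariant feature covered by such material is the per-degree SH power spectrum. The degree l = 1 case can be made concrete without special functions, since the l = 1 harmonics span the linear functions x, y, z: rotating the signal simply rotates the three coefficients, leaving their squared norm unchanged. A minimal sketch of this case (general degrees require Wigner-D matrices; the coefficient values are hypothetical):

```python
import math

def rot_z(angle):
    """3x3 rotation matrix about the z-axis."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def apply(R, v):
    """Apply a 3x3 matrix to a length-3 vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

def degree1_power(coeffs):
    """Per-degree SH power: squared norm of the coefficient vector."""
    return sum(c * c for c in coeffs)

f1 = [0.7, -1.2, 0.4]            # hypothetical l = 1 coefficients
f1_rot = apply(rot_z(0.9), f1)   # the same signal after a rotation
# The coefficients change under the rotation, but their power does not.
```

The same argument applies degree by degree: each degree-l coefficient block transforms by an orthogonal (Wigner-D) matrix, so the vector of per-degree powers is a rotation-invariant descriptor of the signal.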
Learning with group invariant features: A Kernel perspective
We analyze in this paper a random feature map based on a theory of invariance (I-theory) introduced in [1]. More specifically, a group invariant signal signature is obtained through cumulative distributions of group-transformed random projections. Our analysis bridges invariant feature learning with kernel methods, as we show that this feature map defines an expected Haar-integration kernel that is invariant to the specified group action. We show how this non-linear random feature map approximates this group invariant kernel uniformly on a set of N points. Moreover, we show that it defines a function space that is dense in the equivalent Invariant Reproducing Kernel Hilbert Space. Finally, we quantify error rates of the convergence of the empirical risk minimization, as well as the reduction in the sample complexity of a learning algorithm using such an invariant representation for signal classification, in a classical supervised learning setting.
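The construction can be made concrete for a simple finite group. In the sketch below the group is the set of cyclic shifts of a vector (an illustrative choice, not the paper's setting): for each random template, the projections of the whole group orbit are pooled through a fixed binning of their empirical cumulative distribution, so the signature is exactly invariant to the group action on the input.

```python
import random

def cyclic_shifts(x):
    """Orbit of x under the cyclic shift group."""
    return [x[k:] + x[:k] for k in range(len(x))]

def invariant_signature(x, templates, n_bins=5, lo=-3.0, hi=3.0):
    """I-theory style signature: pool the projections <g.x, w> over the
    group orbit via an empirical CDF evaluated at fixed thresholds."""
    sig = []
    for w in templates:
        projs = [sum(a * b for a, b in zip(g, w)) for g in cyclic_shifts(x)]
        width = (hi - lo) / n_bins
        for k in range(1, n_bins + 1):
            thr = lo + k * width
            sig.append(sum(p <= thr for p in projs) / len(projs))
    return sig

rng = random.Random(0)
templates = [[rng.gauss(0.0, 1.0) for _ in range(4)] for _ in range(3)]
x = [1.0, -0.5, 0.25, 2.0]
x_shifted = x[1:] + x[:1]  # a group-transformed copy of x
# The two signatures coincide exactly, since the orbits are the same set.
```

Invariance here is exact because the orbit of a shifted input is the same set of vectors as the orbit of the original, so the pooled distribution of projections is unchanged.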