Learning with Algebraic Invariances, and the Invariant Kernel Trick
When solving data analysis problems it is important to integrate prior
knowledge and/or structural invariances. This paper contributes a novel
framework for incorporating algebraic invariance structure into kernels. In
particular, we show that algebraic properties such as sign symmetries in data,
phase independence, scaling, etc. can be included easily by essentially
performing the kernel trick twice. We demonstrate the usefulness of our theory
in simulations on selected applications such as sign-invariant spectral
clustering and underdetermined ICA.
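
A minimal sketch of one such invariance in plain NumPy: averaging a base kernel over the sign group {+1, -1} yields a sign-invariant kernel. The function names are illustrative, and group averaging is shown here as a simple baseline, not as the paper's double kernel trick.

import numpy as np

def rbf(x, y, gamma=1.0):
    # Base Gaussian RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sign_invariant_kernel(x, y, base=rbf):
    # Average the base kernel over the sign group acting on the inputs,
    # so the result satisfies k(x, y) = k(-x, y) = k(x, -y) = k(-x, -y).
    return 0.25 * (base(x, y) + base(-x, y) + base(x, -y) + base(-x, -y))

x = np.array([1.0, -2.0])
y = -x
print(sign_invariant_kernel(x, y))  # equals the next line: y = -x is
print(sign_invariant_kernel(x, x))  # indistinguishable from x itself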
Representation by Integrating Reproducing Kernels
Based on direct integrals, a framework is developed that allows one to
integrate a parametrised family of reproducing kernels with respect to some
measure on the parameter space. By pointwise integration, one again obtains a reproducing
kernel whose corresponding Hilbert space is given as the image of the direct
integral of the individual Hilbert spaces under the summation operator. This
generalises the well-known results for finite sums of reproducing kernels;
however, many more special cases are subsumed under this approach: so-called
Mercer kernels obtained through series expansions; kernels generated by
integral transforms; mixtures of positive definite functions; and in particular
scale-mixtures of radial basis functions. This opens new vistas into known
results, e.g. generalising the Kramer sampling theorem; it also offers
interesting connections between measurements and integral transforms, e.g.
allowing one to apply the representer theorem in certain inverse problems, or
bounding the pointwise error in the image domain when observing the pre-image
under an integral transform.
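
As a hedged illustration of the pointwise-integration idea, the sketch below discretises a scale mixture of Gaussian RBF kernels, k(x, y) = \int exp(-||x - y||^2 / (2 s^2)) d\mu(s), by quadrature; all names are illustrative. Each summand is positive semidefinite, hence so is the mixture.

import numpy as np

def scale_mixture_kernel(X, scales, weights):
    # Quadrature approximation of a scale mixture of RBF kernels:
    # k(x, y) = sum_j w_j * exp(-||x - y||^2 / (2 * s_j^2)).
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.zeros_like(D2)
    for s, w in zip(scales, weights):
        K += w * np.exp(-D2 / (2.0 * s ** 2))
    return K

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
scales = np.linspace(0.5, 5.0, 20)   # sample of the parameter space
weights = np.full(20, 1.0 / 20)      # discretised measure on it
K = scale_mixture_kernel(X, scales, weights)
print(np.linalg.eigvalsh(K).min())   # >= 0 up to rounding error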
On the Inductive Bias of Neural Tangent Kernels
State-of-the-art neural networks are heavily over-parameterized, making the
optimization algorithm a crucial ingredient for learning predictive models with
good generalization properties. A recent line of work has shown that in a
certain over-parameterized regime, the learning dynamics of gradient descent
are governed by a certain kernel obtained at initialization, called the neural
tangent kernel. We study the inductive bias of learning in such a regime by
analyzing this kernel and the corresponding function space (RKHS). In
particular, we study smoothness, approximation, and stability properties of
functions with finite norm, including stability to image deformations in the
case of convolutional networks, and compare to other known kernels for similar
architectures.
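
The kernel in question can be probed empirically. Below is a minimal sketch, assuming a two-layer ReLU network f(x) = a^T relu(Wx) / sqrt(m): the finite-width (empirical) NTK is the inner product of parameter gradients, Theta(x1, x2) = <grad_theta f(x1), grad_theta f(x2)>, which approaches the limiting neural tangent kernel as the width m grows.

import numpy as np

def empirical_ntk(x1, x2, W, a):
    # Empirical NTK of f(x) = a^T relu(W x) / sqrt(m) at the current
    # parameters: the inner product of the two parameter gradients.
    m = W.shape[0]
    def grads(x):
        pre = W @ x
        g_a = np.maximum(pre, 0.0) / np.sqrt(m)        # d f / d a
        g_W = np.outer(a * (pre > 0), x) / np.sqrt(m)  # d f / d W
        return np.concatenate([g_a, g_W.ravel()])
    return grads(x1) @ grads(x2)

rng = np.random.default_rng(0)
m, d = 10000, 5                    # width m, input dimension d
W = rng.normal(size=(m, d))
a = rng.normal(size=m)
x1, x2 = rng.normal(size=d), rng.normal(size=d)
print(empirical_ntk(x1, x2, W, a))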
Estimates for Fourier sums and eigenvalues of integral operators via multipliers on the sphere
We provide estimates for weighted Fourier sums of integrable functions
defined on the sphere, when the weights originate from a multiplier operator
acting on the space to which the function belongs. This implies refined estimates
for weighted Fourier sums of integrable kernels on the sphere that satisfy an
abstract Hölder condition based on a parameterized family of multiplier
operators defining an approximate identity. This general estimation approach
includes an important class of multiplier operators, namely those defined by
convolutions with zonal measures. The estimates are used to obtain decay rates
for the eigenvalues of positive integral operators on L^2(S^m) generated by
a kernel satisfying the Hölder condition based on multiplier operators on
L^2(S^m).
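
For intuition in the simplest case m = 1 (the circle), the eigenvalues of the integral operator with a zonal kernel K(x, y) = f(theta) are, up to constants, the Fourier coefficients of the profile f. The sketch below is an illustrative numerical check, not the paper's method: a profile satisfying a Hölder condition of higher order shows faster coefficient decay.

import numpy as np

# Fourier coefficients of the zonal profile stand in for the
# eigenvalues of the integral operator on L^2(S^1).
n = 2048
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)

for alpha in (0.5, 1.5):
    # Profile |theta - pi|^alpha: Hölder of order min(alpha, 1);
    # larger alpha means a smoother kernel profile.
    f = np.abs(theta - np.pi) ** alpha
    coeffs = np.abs(np.fft.rfft(f)) / n
    print(alpha, coeffs[[4, 16, 64, 256]])  # faster decay for larger alpha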
Optimal intrinsic descriptors for non-rigid shape analysis
We propose novel point descriptors for 3D shapes with the potential to match two shapes representing the same object undergoing natural deformations. These deformations are more general than the often assumed isometries, and we use labeled training data to learn optimal descriptors for such cases. Furthermore, instead of explicitly defining the descriptor, we introduce new Mercer kernels, for which we formally show that their corresponding feature space mapping is a generalization of either the Heat Kernel Signature or the Wave Kernel Signature. That is, the proposed descriptors are guaranteed to be at least as precise as any Heat Kernel Signature or Wave Kernel Signature of any parameterisation. In experiments, we show that our implicitly defined, infinite-dimensional descriptors can better deal with non-isometric deformations than state-of-the-art methods.
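
For reference, the Heat Kernel Signature that these kernels generalise is computed from a Laplacian eigendecomposition as HKS(x, t) = sum_i exp(-lambda_i t) phi_i(x)^2. A minimal sketch, using a graph Laplacian on a cycle as a stand-in for a mesh Laplace-Beltrami operator (names and the toy shape are illustrative):

import numpy as np

def heat_kernel_signature(L, times, n_eigs=50):
    # HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, from the
    # eigendecomposition of a (graph) Laplacian L.
    lam, phi = np.linalg.eigh(L)
    lam, phi = lam[:n_eigs], phi[:, :n_eigs]
    # One descriptor row per point, one column per diffusion time t.
    return np.stack([(np.exp(-lam * t) * phi ** 2).sum(axis=1)
                     for t in times], axis=1)

# Toy example: Laplacian of a cycle graph as a 1-D "shape".
n = 100
A = np.zeros((n, n))
idx = np.arange(n)
A[idx, (idx + 1) % n] = A[(idx + 1) % n, idx] = 1.0
L = np.diag(A.sum(axis=1)) - A
hks = heat_kernel_signature(L, times=[0.1, 1.0, 10.0])
print(hks.shape)   # (100, 3): a 3-dimensional descriptor per point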
Determinantal point process models on the sphere
We consider determinantal point processes on the d-dimensional unit sphere
S^d. These are finite point processes exhibiting repulsiveness, with moment
properties determined by a certain determinant whose entries are
specified by a so-called kernel, which we assume is a complex covariance
function defined on S^d × S^d. We review the appealing
properties of such processes, including their specific moment properties,
density expressions and simulation procedures. Particularly, we characterize
and construct isotropic DPP models on S^d, where it becomes
essential to specify the eigenvalues and eigenfunctions in a spectral
representation for the kernel, and we figure out how repulsive isotropic DPPs
can be. Moreover, we discuss the shortcomings of adapting existing models for
isotropic covariance functions and consider strategies for developing new
models, including a useful spectral approach.
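
The repulsiveness referred to above can be read off the second-order product density, rho2(x, y) = K(x, x) K(y, y) - |K(x, y)|^2, which never exceeds the Poisson value K(x, x) K(y, y). A minimal numeric sketch with an illustrative zonal kernel on S^2, ignoring the rescaling needed for the spectral existence condition (operator eigenvalues in [0, 1]):

import numpy as np

def pair_density(K, x, y):
    # rho2(x, y) = det [[K(x,x), K(x,y)], [K(y,x), K(y,y)]]; the
    # determinant form makes the sub-Poisson behaviour explicit.
    return K(x, x) * K(y, y) - abs(K(x, y)) ** 2

def zonal_kernel(x, y, kappa=5.0):
    # An isotropic (zonal) kernel on S^2: depends on the points only
    # through their inner product (illustrative choice).
    return np.exp(kappa * (np.dot(x, y) - 1.0))

def on_sphere(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
x = on_sphere(rng.normal(size=3))
for angle in (0.1, 0.5, 2.0):
    # Rotate x by `angle` within a random plane to get a second point.
    u = on_sphere(np.cross(x, rng.normal(size=3)))
    y = np.cos(angle) * x + np.sin(angle) * u
    print(angle, pair_density(zonal_kernel, x, y))  # grows with distance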