Group Membership Prediction
The group membership prediction (GMP) problem involves predicting whether or
not a collection of instances share a certain semantic property. For instance,
in kinship verification, given a collection of images, the goal is to predict
whether or not they share a {\it familial} relationship. In this context we
propose a novel probability model and introduce latent {\em view-specific} and
{\em view-shared} random variables to jointly account for the view-specific
appearance and cross-view similarities among data instances. Our model posits
that data from each view is independent conditioned on the shared variables.
This postulate leads to a parametric probability model that decomposes group
membership likelihood into a tensor product of data-independent parameters and
data-dependent factors. We propose learning the data-independent parameters in
a discriminative way with bilinear classifiers, and test our prediction
algorithm on challenging visual recognition tasks such as multi-camera person
re-identification and kinship verification. On most benchmark datasets, our
method can significantly outperform the current state of the art.
Comment: accepted for ICCV 201
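As a rough illustration of the bilinear scoring idea mentioned in the abstract, the sketch below learns a matrix W that scores a cross-view pair (x1, x2) as x1^T W x2. The logistic training loop, toy data and all variable names are illustrative assumptions, not the authors' exact model or experimental setup.

# Hedged sketch of a bilinear pairwise scorer in the spirit of the abstract
# above; the logistic loss and toy data are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def bilinear_score(x1, x2, W):
    """Score a cross-view pair: s = x1^T W x2."""
    return x1 @ W @ x2

def train_bilinear(pairs, labels, dim, lr=0.01, epochs=100, lam=1e-3):
    """Fit W with a logistic loss over labeled pairs (1 = same group, 0 = not)."""
    W = np.zeros((dim, dim))
    for _ in range(epochs):
        for (x1, x2), y in zip(pairs, labels):
            p = 1.0 / (1.0 + np.exp(-bilinear_score(x1, x2, W)))  # sigmoid
            W -= lr * ((p - y) * np.outer(x1, x2) + lam * W)      # gradient step
    return W

# Toy usage: positive pairs share a latent vector, negatives do not.
dim = 8
latents = rng.normal(size=(50, dim))
pos = [(z + 0.1 * rng.normal(size=dim), z + 0.1 * rng.normal(size=dim)) for z in latents]
neg = [(rng.normal(size=dim), rng.normal(size=dim)) for _ in range(50)]
W = train_bilinear(pos + neg, [1] * 50 + [0] * 50, dim)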
A Comparative Study of Pairwise Learning Methods based on Kernel Ridge Regression
Many machine learning problems can be formulated as predicting labels for a
pair of objects. Problems of that kind are often referred to as pairwise
learning, dyadic prediction or network inference problems. During the last
decade, kernel methods have played a dominant role in pairwise learning. They
still achieve state-of-the-art predictive performance, but a theoretical
analysis of their behavior has been underexplored in the machine learning
literature.
In this work we review and unify existing kernel-based algorithms that are
commonly used in different pairwise learning settings, ranging from matrix
filtering to zero-shot learning. To this end, we focus on closed-form efficient
instantiations of Kronecker kernel ridge regression. We show that independent
task kernel ridge regression, two-step kernel ridge regression and a linear
matrix filter arise naturally as special cases of Kronecker kernel ridge
regression, implying that all these methods implicitly minimize a squared loss.
In addition, we analyze universality, consistency and spectral filtering
properties. Our theoretical results provide valuable insights for assessing the
advantages and limitations of existing pairwise learning methods.
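For concreteness, the sketch below spells out a closed-form Kronecker kernel ridge regression solver via the eigendecompositions of the two object kernels, without ever forming the Kronecker product explicitly. The linear kernels, regularization value and toy pair-label matrix are assumptions for illustration and do not reproduce the paper's setup.

# Hedged sketch of closed-form Kronecker kernel ridge regression using the
# eigendecompositions of the row and column kernels ("vec trick").
import numpy as np

def kronecker_krr_fit(K, G, Y, lam=1.0):
    """Dual coefficients A solving (G kron K + lam*I) vec(A) = vec(Y)."""
    wk, U = np.linalg.eigh(K)       # row-object kernel eigendecomposition
    wg, V = np.linalg.eigh(G)       # column-object kernel eigendecomposition
    C = U.T @ Y @ V                 # rotate labels into the joint eigenbasis
    C /= np.outer(wk, wg) + lam     # elementwise shrinkage per eigenvalue pair
    return U @ C @ V.T              # rotate back to the dual coefficient matrix

def kronecker_krr_predict(K_rows, G_cols, A):
    """Scores for (row, column) pairs given their kernels to the training objects."""
    return K_rows @ A @ G_cols.T

# Toy usage with linear kernels on random object features.
rng = np.random.default_rng(1)
X_rows, X_cols = rng.normal(size=(20, 5)), rng.normal(size=(15, 4))
K, G = X_rows @ X_rows.T, X_cols @ X_cols.T
Y = rng.normal(size=(20, 15))       # one label per (row, column) pair
A = kronecker_krr_fit(K, G, Y, lam=0.5)
F = kronecker_krr_predict(K, G, A)  # in-sample predictions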
Private Information in Large Games
Keywords: Nash equilibrium, ex-post Nash, anonymous games, market games, rational expectations, extensive robustness, information proofness, web games
Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds
The Euclidean scattering transform was introduced nearly a decade ago to
improve the mathematical understanding of convolutional neural networks.
Inspired by recent interest in geometric deep learning, which aims to
generalize convolutional neural networks to manifold and graph-structured
domains, we define a geometric scattering transform on manifolds. Similar to
the Euclidean scattering transform, the geometric scattering transform is based
on a cascade of wavelet filters and pointwise nonlinearities. It is invariant
to local isometries and stable to certain types of diffeomorphisms. Empirical
results demonstrate its utility on several geometric learning tasks. Our
results generalize the deformation stability and local translation invariance
of Euclidean scattering, and demonstrate the importance of linking the filter
structures used to the underlying geometry of the data.
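To make the wavelet-and-nonlinearity cascade concrete, the sketch below computes scattering-style coefficients for a signal on a small graph, using lazy-diffusion wavelets and a modulus nonlinearity as stand-ins. The paper builds its wavelets spectrally from the manifold's Laplace-Beltrami operator, so this is only an illustrative analogue under simplifying assumptions.

# Hedged sketch of a two-layer wavelet/modulus scattering cascade on a graph;
# the diffusion wavelets and ring-graph signal are illustrative assumptions.
import numpy as np

def diffusion_wavelets(A, scales):
    """Wavelet operators Psi_j = P^(2^(j-1)) - P^(2^j) from a lazy random walk P."""
    n = len(A)
    P = 0.5 * (np.eye(n) + A / A.sum(axis=1, keepdims=True))  # lazy diffusion
    powers = {0: np.eye(n)}
    for t in range(1, 2 ** max(scales) + 1):
        powers[t] = powers[t - 1] @ P
    return [powers[2 ** (j - 1)] - powers[2 ** j] for j in scales]

def scattering_features(x, wavelets):
    """Zeroth-, first- and second-order scattering coefficients, mean-pooled."""
    feats = [x.mean()]
    for W1 in wavelets:
        u1 = np.abs(W1 @ x)                       # wavelet filter + modulus
        feats.append(u1.mean())
        for W2 in wavelets:
            feats.append(np.abs(W2 @ u1).mean())  # second layer of the cascade
    return np.array(feats)

# Toy usage: a smooth signal on a ring graph (a crude discretized manifold).
n = 16
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
x = np.sin(2 * np.pi * np.arange(n) / n)
S = scattering_features(x, diffusion_wavelets(A, scales=[1, 2, 3]))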