Insights from Classifying Visual Concepts with Multiple Kernel Learning
Combining information from various image features has become a standard
technique in concept recognition tasks. However, the optimal way of fusing the
resulting kernel functions is usually unknown in practical applications.
Multiple kernel learning (MKL) techniques make it possible to determine an optimal
linear combination of such similarity matrices. Classical approaches to MKL promote
sparse mixtures. Unfortunately, so-called 1-norm MKL variants are often
observed to be outperformed by an unweighted sum kernel. The contribution of
this paper is twofold: We apply a recently developed non-sparse MKL variant to
state-of-the-art concept recognition tasks within computer vision. We provide
insights on benefits and limits of non-sparse MKL and compare it against its
direct competitors, the sum kernel SVM and the sparse MKL. We report empirical
results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo
Annotation challenge data sets. About to be submitted to PLoS ONE.
Comment: 18 pages, 8 tables, 4 figures; the format deviates from the PLoS ONE submission format requirements for aesthetic reasons
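As a concrete illustration of the kernel fusion this abstract discusses, the sketch below forms a linear combination of base kernel matrices: uniform weights give the unweighted sum-kernel baseline, while an MKL solver would instead learn the weights (sparse under a 1-norm constraint, non-sparse under an l2-type constraint). The function names and toy data are illustrative, not from the paper:

```python
import numpy as np

def combine_kernels(kernels, weights=None):
    """Form the linear combination K = sum_m w_m K_m of base kernel matrices.

    With uniform weights this is the unweighted sum-kernel baseline; an MKL
    solver would learn the weights instead (a 1-norm constraint tends to zero
    out kernels, an l2-type constraint keeps a non-sparse mixture)."""
    kernels = np.asarray(kernels, dtype=float)   # shape (m, n, n)
    m = kernels.shape[0]
    if weights is None:
        weights = np.full(m, 1.0 / m)            # uniform mixture
    weights = np.asarray(weights, dtype=float)
    return np.tensordot(weights, kernels, axes=1)

# two toy base kernels, as if computed from two different image features
rng = np.random.default_rng(0)
X1 = rng.normal(size=(5, 3))
X2 = rng.normal(size=(5, 4))
K1, K2 = X1 @ X1.T, X2 @ X2.T
K = combine_kernels([K1, K2])                     # sum-kernel baseline
K_sparse = combine_kernels([K1, K2], [1.0, 0.0])  # a fully sparse mixture
```

A convex combination of positive semidefinite kernel matrices is again a valid kernel, which is why the combined matrix can be handed directly to an SVM.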
Hashing for Similarity Search: A Survey
Similarity search (nearest neighbor search) is a problem of pursuing the data
items whose distances to a query item are the smallest from a large database.
Various methods have been developed to address this problem, and recently a lot
of efforts have been devoted to approximate search. In this paper, we present a
survey on one of the main solutions, hashing, which has been widely studied
since the pioneering work on locality sensitive hashing. We divide the hashing
algorithms into two main categories: locality sensitive hashing, which designs
hash functions without exploring the data distribution, and learning to hash,
which learns hash functions according to the data distribution. We review them
from various aspects, including hash function design, distance measures, and
search schemes in the hash coding space.
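A minimal sketch of the first category the survey names, locality sensitive hashing: random-hyperplane (SimHash) hash functions are drawn independently of the data distribution, so items with high cosine similarity agree on more hash bits. All names here are illustrative:

```python
import numpy as np

class RandomHyperplaneLSH:
    """SimHash-style LSH: each hash bit records which side of a random
    hyperplane a vector falls on, independently of the data distribution."""

    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))

    def hash(self, x):
        # one bit per hyperplane: sign of the projection onto its normal
        return (self.planes @ x >= 0).astype(np.uint8)

def hamming(a, b):
    """Distance measure in the hash coding space: number of differing bits."""
    return int(np.sum(a != b))

lsh = RandomHyperplaneLSH(dim=4, n_bits=32)
q = np.array([1.0, 0.5, -0.2, 0.3])
near = q + 0.01   # almost the same direction as q
far = -q          # opposite direction from q
```

In a search scheme, candidates whose codes lie within a small Hamming radius of the query code are retrieved and then re-ranked by the exact distance.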
Embed and Conquer: Scalable Embeddings for Kernel k-Means on MapReduce
Kernel k-means is an effective method for data clustering which extends
the commonly-used k-means algorithm to work on a similarity matrix over
complex data structures. The kernel k-means algorithm is, however,
computationally very complex as it requires the complete kernel matrix to be
calculated and stored. Further, the kernelized nature of the kernel k-means
algorithm hinders the parallelization of its computations on modern
infrastructures for distributed computing. In this paper, we define a
family of kernel-based low-dimensional embeddings that allows for scaling
kernel k-means on MapReduce via an efficient and unified parallelization
strategy. Afterwards, we propose two methods for low-dimensional embedding that
adhere to our definition of the embedding family. Exploiting the proposed
parallelization strategy, we present two scalable MapReduce algorithms for
kernel k-means. We demonstrate the effectiveness and efficiency of the
proposed algorithms through an empirical evaluation on benchmark data sets.
Comment: Appears in Proceedings of the SIAM International Conference on Data Mining (SDM), 201
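To illustrate the embed-then-cluster idea (not the paper's specific embedding family), the sketch below uses random Fourier features as one well-known kernel-based low-dimensional embedding for the RBF kernel, then runs ordinary k-means on the embedded points; the assignment and centroid-update steps correspond naturally to map and reduce phases. All names and data are illustrative:

```python
import numpy as np

def rff_embedding(X, gamma, dim, seed=0):
    """Random Fourier features: a low-dimensional embedding z(x) whose inner
    products approximate the RBF kernel exp(-gamma * ||x - y||^2), so plain
    (easily parallelized) k-means on z(X) approximates kernel k-means."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(X.shape[1], dim))
    b = rng.uniform(0, 2 * np.pi, size=dim)
    return np.sqrt(2.0 / dim) * np.cos(X @ W + b)

def kmeans(Z, k, iters=20):
    # farthest-point initialization: start from the first point, then
    # repeatedly add the point farthest from the current centroids
    C = [Z[0]]
    for _ in range(k - 1):
        d = np.min([((Z - c) ** 2).sum(1) for c in C], axis=0)
        C.append(Z[int(d.argmax())])
    C = np.stack(C)
    for _ in range(iters):
        # "map" step: assign each point to its nearest centroid
        d = ((Z[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        a = d.argmin(1)
        # "reduce" step: recompute each centroid as its cluster mean
        C = np.stack([Z[a == j].mean(0) if np.any(a == j) else C[j]
                      for j in range(k)])
    return a

# two well-separated toy blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
labels = kmeans(rff_embedding(X, gamma=0.5, dim=64), k=2)
```

The point of the embedding is that after it, no n-by-n kernel matrix ever needs to be materialized: each worker only needs the embedded points and the current k centroids.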
Approximation Algorithms for Bregman Co-clustering and Tensor Clustering
In the past few years powerful generalizations to the Euclidean k-means
problem have been made, such as Bregman clustering [7], co-clustering (i.e.,
simultaneous clustering of rows and columns of an input matrix) [9,18], and
tensor clustering [8,34]. Like k-means, these more general problems also suffer
from the NP-hardness of the associated optimization. Researchers have developed
approximation algorithms of varying degrees of sophistication for k-means,
k-medians, and more recently also for Bregman clustering [2]. However, there
seem to be no approximation algorithms for Bregman co- and tensor clustering.
In this paper we derive the first (to our knowledge) guaranteed methods for
these increasingly important clustering settings. Going beyond Bregman
divergences, we also prove an approximation factor for tensor clustering with
arbitrary separable metrics. Through extensive experiments we evaluate the
characteristics of our method, and show that it also has practical impact.
Comment: 18 pages; improved metric case
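As a hedged sketch of plain Bregman clustering (the k-means-style special case, not the paper's co- or tensor-clustering algorithms): points are assigned to clusters by a Bregman divergence, here the generalized KL divergence, and for every Bregman divergence the optimal cluster representative remains the arithmetic mean. Names, data, and the toy initialization are illustrative:

```python
import numpy as np

def kl_div(x, c):
    """Generalized KL divergence, a Bregman divergence on positive vectors:
    d(x, c) = sum_i x_i log(x_i / c_i) - x_i + c_i."""
    return np.sum(x * np.log(x / c) - x + c, axis=-1)

def bregman_kmeans(X, init, iters=20):
    """Hard Bregman clustering: like k-means, but points are assigned by a
    Bregman divergence; the optimal representative of each cluster is still
    the arithmetic mean, for every Bregman divergence."""
    C = X[list(init)].copy()
    k = len(C)
    for _ in range(iters):
        d = np.stack([kl_div(X, c) for c in C], axis=1)
        a = d.argmin(1)
        C = np.stack([X[a == j].mean(0) if np.any(a == j) else C[j]
                      for j in range(k)])
    return a, C

# toy positive data: two groups living at different scales
rng = np.random.default_rng(0)
X = np.vstack([rng.uniform(0.9, 1.1, (10, 3)), rng.uniform(9.0, 11.0, (10, 3))])
# toy initialization: seed one centroid from each end of the data
labels, centroids = bregman_kmeans(X, init=[0, len(X) - 1])
```

Choosing the divergence changes the cluster geometry (squared Euclidean recovers ordinary k-means; KL suits count-like data) while the algorithmic skeleton stays the same, which is what makes the Bregman generalization attractive.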
Learning Theory and Approximation
The main goal of this workshop – the third one of this type at the MFO – has been to blend mathematical results from statistical learning theory and approximation theory to strengthen both disciplines and to use synergistic effects to work on current research questions. Learning theory aims at modeling unknown function relations and data structures from samples in an automatic manner. Approximation theory is naturally used for, and closely connected to, the further development of learning theory, in particular for the exploration of new useful algorithms and for the theoretical understanding of existing methods. Conversely, the study of learning theory also gives rise to interesting theoretical problems for approximation theory, such as the approximation and sparse representation of functions or the construction of rich reproducing kernel Hilbert spaces on general metric spaces. This workshop has concentrated on the following recent topics: pitchfork bifurcation of dynamical systems arising from mathematical foundations of cell development; regularized kernel-based learning in the Big Data situation; deep learning; convergence rates of learning and online learning algorithms; numerical refinement algorithms for learning; statistical robustness of regularized kernel-based learning.
An Adaptive Tangent Feature Perspective of Neural Networks
In order to better understand feature learning in neural networks, we propose
a framework for understanding linear models in tangent feature space where the
features are allowed to be transformed during training. We consider linear
transformations of features, resulting in a joint optimization over parameters
and transformations with a bilinear interpolation constraint. We show that this
optimization problem has an equivalent linearly constrained optimization with
structured regularization that encourages approximately low rank solutions.
Specializing to neural network structure, we gain insights into how the
features and thus the kernel function change, providing additional nuance to
the phenomenon of kernel alignment when the target function is poorly
represented using tangent features. We verify our theoretical observations in
the kernel alignment of real neural networks.
Comment: 14 pages, 3 figures. Appeared at the First Conference on Parsimony and Learning (CPAL 2024)
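A small sketch of the tangent-feature view this abstract builds on: for a one-hidden-layer ReLU network, the tangent features of an input are the gradient of the output with respect to all parameters, and inner products of these feature vectors form the empirical tangent kernel. This is a generic illustration of the concept, not the paper's framework; all names are illustrative:

```python
import numpy as np

def f(x, W, v):
    # tiny one-hidden-layer ReLU network with scalar output
    return v @ np.maximum(W @ x, 0.0)

def tangent_features(x, W, v):
    """Gradient of f(x) with respect to all parameters, flattened.
    A linear model in these features is the tangent-space approximation
    of the network around the current parameters (W, v)."""
    h = W @ x
    act = np.maximum(h, 0.0)
    mask = (h > 0).astype(float)
    grad_v = act                                # df/dv_j = relu(w_j . x)
    grad_W = (v * mask)[:, None] * x[None, :]   # df/dW_jk = v_j 1[h_j>0] x_k
    return np.concatenate([grad_v, grad_W.ravel()])

rng = np.random.default_rng(0)
W, v = rng.normal(size=(8, 3)), rng.normal(size=8)
xs = rng.normal(size=(5, 3))
Phi = np.stack([tangent_features(x, W, v) for x in xs])
K = Phi @ Phi.T   # empirical tangent kernel on the sample
```

Because the features here are evaluated at the current parameters, retraining moves them: that dependence of the feature map (and hence the kernel) on training is exactly the adaptivity the paper's framework studies.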