Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
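As a concrete illustration of the proximal methods mentioned above, the sketch below applies the proximal gradient (ISTA) iteration to the $\ell_1$-regularized least-squares problem, whose proximal operator is elementwise soft-thresholding. The function names, step-size choice, and toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_w 0.5*||X w - y||^2 + lam*||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the smooth part
    step = 1.0 / L
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)         # gradient of the empirical risk
        w = soft_threshold(w - step * grad, step * lam)
    return w

# toy usage on synthetic sparse data
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20); w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(100)
w_hat = ista(X, y, lam=1.0)
```

Accelerated (FISTA) and block-coordinate variants follow the same pattern with different update rules.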
Learning Output Kernels for Multi-Task Problems
Simultaneously solving multiple related learning tasks is beneficial under a
variety of circumstances, but the prior knowledge necessary to correctly model
task relationships is rarely available in practice. In this paper, we develop a
novel kernel-based multi-task learning technique that automatically reveals
structural inter-task relationships. Building over the framework of output
kernel learning (OKL), we introduce a method that jointly learns multiple
functions and a low-rank multi-task kernel by solving a non-convex
regularization problem. Optimization is carried out via a block coordinate
descent strategy, where each subproblem is solved using suitable conjugate
gradient (CG) type iterative methods for linear operator equations. The
effectiveness of the proposed approach is demonstrated on pharmacological and
collaborative filtering data.
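To make the alternating scheme concrete, here is a hedged sketch of the two-block coordinate descent pattern the abstract describes, applied to a simplified surrogate problem (multi-output kernel regression with a learned task-coupling matrix) rather than the paper's exact OKL objective. The coefficient block is solved by conjugate gradient on a linear operator equation; all names and regularization choices below are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def okl_style_bcd(K, Y, lam=1.0, mu=1.0, n_outer=20):
    """Two-block coordinate descent for a simplified multi-task surrogate:
        min_{A, B}  ||Y - K A B||_F^2 + lam*||A||_F^2 + mu*||B||_F^2,
    alternating a CG solve for the coefficients A with a closed-form ridge
    update for the task-coupling matrix B."""
    n, T = Y.shape
    A = np.zeros((n, T))
    B = np.eye(T)
    for _ in range(n_outer):
        # A-step: solve  K K A (B B^T) + lam A = K Y B^T  by CG on the vec'd operator
        KK, BB = K @ K, B @ B.T
        def matvec(v):
            Am = v.reshape(n, T)
            return (KK @ Am @ BB + lam * Am).ravel()
        op = LinearOperator((n * T, n * T), matvec=matvec)
        rhs = (K @ Y @ B.T).ravel()
        A = cg(op, rhs, maxiter=200)[0].reshape(n, T)
        # B-step: ridge regression of Y on M = K A (closed form)
        M = K @ A
        B = np.linalg.solve(M.T @ M + mu * np.eye(T), M.T @ Y)
        B = 0.5 * (B + B.T)   # symmetrize; the PSD projection of the real method is omitted
    return A, B
```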
Scalable Kernel Methods via Doubly Stochastic Gradients
The general perception is that kernel methods are not scalable, and neural
nets are the methods of choice for nonlinear learning problems. Or have we
simply not tried hard enough for kernel methods? Here we propose an approach
that scales up kernel methods using a novel concept called "doubly stochastic
functional gradients". Our approach relies on the fact that many kernel methods
can be expressed as convex optimization problems, and we solve the problems by
making two unbiased stochastic approximations to the functional gradient, one
using random training points and another using random functions associated with
the kernel, and then descending using this noisy functional gradient. We show
that a function produced by this procedure after $t$ iterations converges to the optimal function in the reproducing kernel Hilbert space in rate $O(1/t)$, and achieves a generalization performance of $O(1/\sqrt{t})$. This double stochasticity also allows us to avoid keeping the support vectors and to implement the algorithm with a small memory footprint, which is linear in the number of iterations and independent of the data dimension. Our approach can readily scale
kernel methods up to the regimes which are dominated by neural nets. We show
that our method can achieve performance competitive with neural nets on datasets such as 8 million handwritten digits from MNIST, 2.3 million energy materials from MolecularSpace, and 1 million photos from ImageNet.
Comment: 32 pages, 22 figures
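The core update can be sketched with random Fourier features for an RBF kernel: each iteration samples one training point and one random feature and stores a single scalar coefficient, so memory grows linearly with the number of iterations. The squared loss, step-size schedule, and function names below are illustrative assumptions, not the paper's exact algorithmic settings.

```python
import numpy as np

def doubly_stochastic_krr(X, y, gamma_rbf=1.0, reg=1e-4, n_iter=2000, seed=0):
    """Doubly stochastic functional gradient sketch for squared-loss kernel
    regression with an RBF kernel: each step samples one training point and
    one random Fourier feature, and keeps one scalar coefficient per step."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    omegas = np.empty((n_iter, d))   # random feature directions (one per step)
    phases = np.empty(n_iter)
    alphas = np.zeros(n_iter)        # one coefficient per iteration (memory ~ n_iter)

    def feature(x, s):
        # random Fourier feature for the RBF kernel, drawn at step s
        return np.sqrt(2.0) * np.cos(x @ omegas[s] + phases[s])

    for t in range(n_iter):
        i = rng.integers(n)                                          # random training point
        omegas[t] = rng.normal(scale=np.sqrt(2 * gamma_rbf), size=d) # random feature
        phases[t] = rng.uniform(0, 2 * np.pi)
        # current prediction uses all features sampled so far
        f_xi = sum(alphas[s] * feature(X[i], s) for s in range(t))
        step = 1.0 / (1.0 + t)
        alphas[:t] *= (1.0 - step * reg)                   # shrink old coefficients
        alphas[t] = -step * (f_xi - y[i]) * feature(X[i], t)

    def predict(x):
        return sum(alphas[s] * feature(x, s) for s in range(n_iter))
    return predict
```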
A General Two-Step Approach to Learning-Based Hashing
Most existing approaches to hashing apply a single form of hash function, and
an optimization process which is typically deeply coupled to this specific
form. This tight coupling restricts the flexibility of the method to respond to
the data, and can result in complex optimization problems that are difficult to
solve. Here we propose a flexible yet simple framework that is able to
accommodate different types of loss functions and hash functions. This
framework allows a number of existing approaches to hashing to be placed in
context, and simplifies the development of new problem-specific hashing
methods. Our framework decomposes the hashing learning problem into two steps: hash
bit learning and hash function learning based on the learned bits. The first
step can typically be formulated as binary quadratic problems, and the second
step can be accomplished by training standard binary classifiers. Both problems
have been extensively studied in the literature. Our extensive experiments
demonstrate that the proposed framework is effective, flexible and outperforms
the state-of-the-art.
Comment: 13 pages. Appearing in Int. Conf. Computer Vision (ICCV) 201
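A minimal sketch of the two-step decomposition follows. Step 1 here uses a simple PCA-sign stand-in for the bit-learning step (which the paper formulates as binary quadratic problems), and step 2 trains one off-the-shelf binary classifier per bit; all function names and model choices are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_step_hashing(X, n_bits=16):
    """Illustrative two-step hashing: (1) obtain target binary codes
    (here a PCA-sign stand-in for the bit-learning step), then (2) train
    one binary classifier per bit to serve as the hash function.
    Assumes n_bits <= number of features."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # Step 1: hash-bit learning (stand-in): sign of the top PCA projections
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    codes = (Xc @ Vt[:n_bits].T > 0).astype(int)          # n x n_bits in {0,1}
    # Step 2: hash-function learning: one binary classifier per bit
    classifiers = [LogisticRegression(max_iter=1000).fit(Xc, codes[:, b])
                   for b in range(n_bits)]
    def hash_fn(Xq):
        return np.column_stack([clf.predict(Xq - mean) for clf in classifiers])
    return hash_fn, codes
```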
Fixed-point and coordinate descent algorithms for regularized kernel methods
In this paper, we study two general classes of optimization algorithms for
kernel methods with convex loss function and quadratic norm regularization, and
analyze their convergence. The first approach, based on fixed-point iterations,
is simple to implement and analyze, and can be easily parallelized. The second,
based on coordinate descent, exploits the structure of additively separable
loss functions to compute solutions of line searches in closed form. Instances
of these general classes of algorithms are already incorporated into state of
the art machine learning software for large scale problems. We start from a
solution characterization of the regularized problem, obtained using
sub-differential calculus and resolvents of monotone operators, that holds for
general convex loss functions regardless of differentiability. The two
methodologies described in the paper can be regarded as instances of non-linear
Jacobi and Gauss-Seidel algorithms, and are both well-suited to solve large
scale problems.
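As a small instance of the Gauss-Seidel family described above, the sketch below runs coordinate descent on kernel ridge regression (squared loss, quadratic regularization), where each line search has a closed form. The paper's general setting covers arbitrary convex losses; the code and names here are only illustrative assumptions.

```python
import numpy as np

def kernel_ridge_coordinate_descent(K, y, lam=1.0, n_sweeps=50):
    """Gauss-Seidel-style coordinate descent for kernel ridge regression:
    sweeps over the coordinates of alpha, applying the closed-form line
    search for the quadratic system (K + lam*I) alpha = y."""
    n = K.shape[0]
    alpha = np.zeros(n)
    for _ in range(n_sweeps):
        for i in range(n):
            s = K[i] @ alpha - K[i, i] * alpha[i]    # contribution of the other coordinates
            alpha[i] = (y[i] - s) / (K[i, i] + lam)  # exact minimizer along coordinate i
    return alpha

# toy usage: RBF kernel on random data
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = np.sin(X[:, 0])
K = np.exp(-np.sum((X[:, None] - X[None]) ** 2, axis=-1))
alpha = kernel_ridge_coordinate_descent(K, y, lam=0.1)
```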