Theory and Applications of Robust Optimization
In this paper we survey the primary research, both theoretical and applied,
in the area of Robust Optimization (RO). Our focus is on the computational
attractiveness of RO approaches, as well as the modeling power and broad
applicability of the methodology. In addition to surveying prominent
theoretical results of RO, we also present some recent results linking RO to
adaptable models for multi-stage decision-making problems. Finally, we
highlight applications of RO across a wide spectrum of domains, including
finance, statistics, learning, and various areas of engineering.
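As a concrete illustration of the kind of tractable reformulation RO relies on (a minimal sketch under assumed data, not an example from the survey itself), the following Python snippet immunizes a linear constraint against box uncertainty in its coefficients using cvxpy; the nominal vector, uncertainty radius, and budget are all hypothetical.

```python
# A minimal robust-LP sketch: enforce a @ x <= b for every a in a box
# around a nominal a0. All data here are synthetic/hypothetical.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5
a0 = rng.standard_normal(n)      # nominal constraint coefficients
delta = 0.1 * np.ones(n)         # elementwise uncertainty radius
c = rng.standard_normal(n)
b = 1.0

x = cp.Variable(n)
# For all a with |a - a0| <= delta (elementwise), a @ x <= b reduces
# to the single deterministic convex constraint below.
robust_constraint = a0 @ x + delta @ cp.abs(x) <= b
prob = cp.Problem(cp.Maximize(c @ x),
                  [robust_constraint, cp.norm(x, "inf") <= 1])
prob.solve()
print("robust optimal value:", prob.value)
```

The point of the reformulation is that the worst case over the box collapses to the deterministic constraint a0ᵀx + deltaᵀ|x| ≤ b, so no uncertainty remains at solve time.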
Structured variable selection in support vector machines
When applying the support vector machine (SVM) to high-dimensional
classification problems, we often impose a sparse structure in the SVM to
eliminate the influences of the irrelevant predictors. The lasso and other
variable selection techniques have been successfully used in the SVM to perform
automatic variable selection. In some problems, there is a natural hierarchical
structure among the variables. Thus, in order to have an interpretable SVM
classifier, it is important to respect the heredity principle when enforcing
the sparsity in the SVM. Many variable selection methods, however, do not
respect the heredity principle. In this paper we enforce both sparsity and the
heredity principle in the SVM by using the so-called structured variable
selection (SVS) framework originally proposed in Yuan, Joseph and Zou (2007).
We minimize the empirical hinge loss under a set of linear inequality
constraints and a lasso-type penalty. The solution always obeys the desired
heredity principle and enjoys sparsity. The new SVM classifier can be
efficiently fitted, because the optimization problem is a linear program.
Another contribution of this work is to present a nonparametric extension of
the SVS framework, and we propose nonparametric heredity SVMs. Simulated and
real data are used to illustrate the merits of the proposed method. Published
in the Electronic Journal of Statistics (http://dx.doi.org/10.1214/07-EJS125)
by the Institute of Mathematical Statistics (http://www.imstat.org).
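To make the linear-program structure above concrete, here is a minimal sketch of an L1-penalized linear SVM fitted with scipy's LP solver. The data are synthetic, the split w = w_plus - w_minus is the standard LP device for the lasso penalty, and the paper's heredity constraints would simply add further rows to A_ub (omitted here).

```python
# Minimal L1-penalized linear SVM as a linear program (synthetic data).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p = 60, 10
X = rng.standard_normal((n, p))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n))

lam = 0.1
# Variables stacked as [w_plus (p), w_minus (p), b (1), xi (n)].
c = np.concatenate([lam * np.ones(2 * p), [0.0], np.ones(n) / n])

# Margin constraints y_i((w+ - w-) @ x_i + b) >= 1 - xi_i,
# rewritten in the A_ub @ z <= b_ub form linprog expects.
A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
b_ub = -np.ones(n)

bounds = [(0, None)] * (2 * p) + [(None, None)] + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")

w = res.x[:p] - res.x[p:2 * p]
print("nonzero coefficients:", np.flatnonzero(np.abs(w) > 1e-6))
```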
A Simple Iterative Algorithm for Parsimonious Binary Kernel Fisher Discrimination
By applying recent results in optimization theory, variously known as optimization transfer or majorize/minimize (MM) algorithms, an algorithm for binary kernel Fisher discriminant analysis is introduced that uses a non-smooth penalty on the coefficients to produce a parsimonious solution. The problem is converted into a smooth optimization that can be solved iteratively with no greater overhead than iteratively reweighted least squares. The resulting algorithm is simple and easily programmed, and is shown to perform, in terms of both accuracy and parsimony, as well as or better than a number of leading machine learning algorithms on two well-studied and substantial benchmarks.
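A minimal sketch of the majorize/minimize idea, using a generic L1-penalized kernel regression onto class labels as a stand-in for the paper's exact discriminant objective: the non-smooth penalty is majorized by a quadratic at the current iterate, so each step reduces to a weighted ridge solve, exactly the overhead of iteratively reweighted least squares.

```python
# MM / IRLS sketch for an L1-penalized kernel fit (illustrative stand-in,
# not the paper's exact KFD objective). Synthetic data throughout.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def mm_sparse_fit(K, y, lam=0.1, n_iter=50, eps=1e-8):
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)  # ridge start
    KtK, Kty = K.T @ K, K.T @ y
    for _ in range(n_iter):
        # Majorize lam*|a_j| by lam*a_j^2 / (2|a_j^t|) + const, so each
        # MM step is a plain weighted least-squares (ridge) solve.
        W = lam / (2.0 * np.maximum(np.abs(alpha), eps))
        alpha = np.linalg.solve(KtK + np.diag(W), Kty)
    alpha[np.abs(alpha) < 1e-6] = 0.0  # prune tiny terms for parsimony
    return alpha

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
y = np.sign(X[:, 0] + X[:, 1])
alpha = mm_sparse_fit(rbf_kernel(X), y)
print("active kernel expansion terms:", np.count_nonzero(alpha))
```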
Optimization with Sparsity-Inducing Penalties
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. They were first dedicated to linear variable
selection but numerous extensions have now emerged such as structured sparsity
or kernel selection. It turns out that many of the related estimation problems
can be cast as convex optimization problems by regularizing the empirical risk
with appropriate non-smooth norms. The goal of this paper is to present from a
general perspective optimization tools and techniques dedicated to such
sparsity-inducing penalties. We cover proximal methods, block-coordinate
descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy
methods, as well as non-convex formulations and extensions, and provide an
extensive set of experiments to compare various algorithms from a computational
point of view.
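As a concrete instance of the proximal methods the paper surveys, the following sketch implements ISTA for the lasso on synthetic data: a gradient step on the smooth loss followed by the soft-thresholding proximal operator of the $\ell_1$ norm.

```python
# Proximal-gradient (ISTA) sketch for the lasso; data are synthetic.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam=0.1, n_iter=200):
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n      # gradient of the smooth squared loss
        w = soft_threshold(w - grad / L, lam / L)
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.1 * rng.standard_normal(100)
w = ista(X, y)
print("recovered support:", np.flatnonzero(np.abs(w) > 1e-3))
```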
Quadratic Projection Based Feature Extraction with Its Application to Biometric Recognition
This paper presents a novel quadratic projection based feature extraction
framework, where a set of quadratic matrices is learned to distinguish each
class from all other classes. We formulate quadratic matrix learning (QML) as a
standard semidefinite programming (SDP) problem. However, conventional
interior-point SDP solvers do not scale well to QML for high-dimensional data.
To address this scalability issue, we develop an efficient
algorithm, termed DualQML, based on the Lagrange duality theory, to extract
nonlinear features. To evaluate the feasibility and effectiveness of the
proposed framework, we conduct extensive experiments on biometric recognition.
Experimental results on three representative biometric recognition tasks,
including face, palmprint, and ear recognition, demonstrate the superiority of
the DualQML-based feature extraction algorithm over current
state-of-the-art algorithms.
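Since the abstract does not spell out the QML objective, the following is only an illustrative SDP in the same spirit: a positive semidefinite quadratic form is learned to keep one class's quadratic scores low and all other classes' scores high, with slack variables and a trace penalty. All data and constants here are hypothetical, and this is not the paper's DualQML algorithm.

```python
# Illustrative SDP sketch: learn a PSD quadratic form separating one
# class from the rest (hypothetical stand-in for the QML formulation).
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.standard_normal((30, d))          # target class: want x' M x small
B = 2.0 + rng.standard_normal((30, d))    # other classes: want x' M x >= 1

M = cp.Variable((d, d), PSD=True)         # semidefinite cone constraint
slack = cp.Variable(60, nonneg=True)

cons = [cp.quad_form(a, M) <= 0.5 + slack[i] for i, a in enumerate(A)]
cons += [cp.quad_form(b, M) >= 1.0 - slack[30 + i] for i, b in enumerate(B)]

prob = cp.Problem(cp.Minimize(cp.sum(slack) + 0.1 * cp.trace(M)), cons)
prob.solve()
print("learned PSD matrix eigenvalues:",
      np.linalg.eigvalsh(M.value).round(3))
```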