Evolutionary algorithms for robust methods
A drawback of robust statistical techniques is the increased computational effort they often require compared to non-robust methods. Robust estimators possessing the exact fit property, for example, are NP-hard to compute. This means that, under the widely believed assumption that the computational complexity classes NP and P are not equal, there is no hope of computing exact solutions for large high-dimensional data sets. To tackle this problem, search heuristics are used to compute NP-hard estimators in high dimensions. Here, an evolutionary algorithm that is applicable to different robust estimators is presented. Further, variants of this evolutionary algorithm for selected estimators, most prominently least trimmed squares and least median of squares, are introduced and shown to outperform existing popular search heuristics in difficult data situations. The results increase the applicability of robust methods and underline the usefulness of evolutionary computation for computational statistics.
-- Evolutionary algorithms, robust regression, least trimmed squares (LTS), least median of squares (LMS), least quantile of squares (LQS), least quartile difference (LQD)
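A minimal sketch of the evolutionary idea described above, applied to least trimmed squares for simple linear regression. The (1+10) strategy, the step-size rule, and the synthetic data are illustrative assumptions, not the paper's algorithm:

```python
import random

random.seed(0)

# Synthetic line y = 2x + 1 with small noise, plus three gross outliers.
xs = [i / 10 for i in range(30)]
ys = [2 * x + 1 + random.gauss(0, 0.05) for x in xs]
for i in (3, 11, 25):
    ys[i] += 50

h = 24  # trimming constant: keep the h smallest squared residuals

def trimmed_loss(a, b):
    """LTS criterion: sum of the h smallest squared residuals."""
    r2 = sorted((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    return sum(r2[:h])

# (1 + 10) evolution strategy with a crude step-size adaptation rule:
# grow the mutation step after an improving generation, shrink otherwise.
best = (0.0, 0.0)
best_loss = trimmed_loss(*best)
step = 1.0
for _ in range(300):
    improved = False
    for _ in range(10):
        cand = (best[0] + random.gauss(0, step),
                best[1] + random.gauss(0, step))
        loss = trimmed_loss(*cand)
        if loss < best_loss:
            best, best_loss, improved = cand, loss, True
    step *= 1.1 if improved else 0.9

print(best, best_loss)  # slope/intercept near (2, 1) despite the outliers
```

Because the trimmed criterion discards the six largest squared residuals, the three contaminated points cannot drag the fit the way they would under ordinary least squares.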
Least quantile regression via modern optimization
We address the Least Quantile of Squares (LQS) (and in particular the Least
Median of Squares) regression problem using modern optimization methods. We
propose a Mixed Integer Optimization (MIO) formulation of the LQS problem which
allows us to find a provably global optimal solution for the LQS problem. Our
MIO framework has the appealing characteristic that if we terminate the
algorithm early, we obtain a solution with a guarantee on its sub-optimality.
We also propose continuous optimization methods based on first-order
subdifferential methods, sequential linear optimization and hybrid combinations
of them to obtain near optimal solutions to the LQS problem. The MIO algorithm
is found to benefit significantly from high quality solutions delivered by our
continuous optimization based methods. We further show that the MIO approach
leads to (a) an optimal solution for any dataset, in which the data points
need not be in general position, (b) a simple
proof of the breakdown point of the LQS objective value that holds for any
dataset and (c) an extension to situations where there are polyhedral
constraints on the regression coefficient vector. We report computational
results with both synthetic and real-world datasets showing that the MIO
algorithm with warm starts from the continuous optimization methods solves
small- and medium-size problems to provable optimality in under
two hours, and outperforms all publicly available methods for large-scale
(10,000) LQS problems.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org) at http://dx.doi.org/10.1214/14-AOS1223.
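For a fixed coefficient vector, the LQS objective is simply the q-th order statistic of the squared residuals (with q near n/2 for Least Median of Squares). A minimal sketch of the objective being optimized, on an illustrative dataset that is not from the paper:

```python
def lqs_objective(beta, X, y, q):
    """q-th smallest squared residual of y - X*beta (q is 1-indexed).
    For Least Median of Squares, q is roughly n/2."""
    sq = sorted((yi - sum(b * xij for b, xij in zip(beta, xi))) ** 2
                for xi, yi in zip(X, y))
    return sq[q - 1]

X = [[1, 0], [1, 1], [1, 2], [1, 3]]   # intercept column + one predictor
y = [0, 1, 2, 100]                     # last response is a gross outlier
print(lqs_objective([0, 1], X, y, 3))  # prints 0
```

The outlier's huge residual sits above the q-th rank, so it does not move the objective at all; this insensitivity is exactly what makes the problem combinatorial and motivates the MIO formulation.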
The Degrees of Freedom of Partial Least Squares Regression
The derivation of statistical properties for Partial Least Squares regression
can be a challenging task. The reason is that the construction of latent
components from the predictor variables also depends on the response variable.
While this typically leads to good performance and interpretable models in
practice, it makes the statistical analysis more involved. In this work, we
study the intrinsic complexity of Partial Least Squares Regression. Our
contribution is an unbiased estimate of its Degrees of Freedom. It is defined
as the trace of the first derivative of the fitted values, seen as a function
of the response. We establish two equivalent representations that rely on the
close connection of Partial Least Squares to matrix decompositions and Krylov
subspace techniques. We show that the Degrees of Freedom depend on the
collinearity of the predictor variables: The lower the collinearity is, the
higher the Degrees of Freedom are. In particular, they are typically higher
than the naive approach that defines the Degrees of Freedom as the number of
components. Further, we illustrate how the Degrees of Freedom approach can be
used for the comparison of different regression methods. In the experimental
section, we show that our Degrees of Freedom estimate in combination with
information criteria is useful for model selection.
Comment: To appear in the Journal of the American Statistical Association.
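The definition used above, Degrees of Freedom as the trace of the derivative of the fitted values with respect to the response, can be illustrated numerically. The sketch below uses a bare one-component PLS1 fit and forward differences; it illustrates the definition only, not the authors' unbiased closed-form estimate:

```python
def pls1_fit(X, y):
    """One-component PLS1: weight w proportional to X^T y, score t = X w,
    fitted values = t * (t.y)/(t.t). The scale of w cancels out."""
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    c = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return [c * ti for ti in t]

def pls_dof(X, y, eps=1e-6):
    """DoF = trace of d(fitted)/d(y), estimated by forward differences."""
    base = pls1_fit(X, y)
    tr = 0.0
    for i in range(len(y)):
        yp = list(y)
        yp[i] += eps
        tr += (pls1_fit(X, yp)[i] - base[i]) / eps
    return tr

# With a single predictor the one-component fit is exactly linear in y
# (the y-dependence of w cancels), so the trace recovers the naive
# count of one component:
print(pls_dof([[1.0], [2.0], [3.0]], [1.0, 2.0, 4.0]))  # ~1.0

# With several predictors the weights depend on y, the fit is nonlinear
# in y, and the DoF generally differs from the number of components:
X2 = [[1.0, 0.9], [2.0, 2.2], [3.0, 2.9], [4.0, 4.3]]
print(pls_dof(X2, [1.0, 2.5, 2.8, 4.1]))
```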
SOCP relaxation bounds for the optimal subset selection problem applied to robust linear regression
This paper deals with the problem of finding the globally optimal subset of h
elements from a larger set of n elements in d space dimensions so as to
minimize a quadratic criterion, with a special emphasis on applications to
computing the Least Trimmed Squares Estimator (LTSE) for robust regression. The
computation of the LTSE is a challenging subset selection problem involving a
nonlinear program with continuous and binary variables, linked in a highly
nonlinear fashion. The selection of a globally optimal subset using the branch
and bound (BB) algorithm is limited to problems in very low dimension,
typically d < 5, as the complexity of the problem increases exponentially with d.
We introduce a bold pruning strategy in the BB algorithm that results in a
significant reduction in computing time, at the price of a negligible loss of
accuracy. The novelty of our algorithm is that the bounds at nodes of the BB tree
come from pseudo-convexifications derived using a linearization technique with
approximate bounds for the nonlinear terms. The approximate bounds are computed
solving an auxiliary semidefinite optimization problem. We show through a
computational study that our algorithm performs well in a wide set of the most
difficult instances of the LTSE problem.
Comment: 12 pages, 3 figures, 2 tables.
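The paper's bounds come from semidefinite relaxations; the sketch below shows only the skeleton of such a branch-and-bound subset search for the LTSE, in simple regression (d = 2) with the elementary bound that the least-squares loss is monotone under subset inclusion. The dataset and the bound are illustrative assumptions:

```python
from itertools import combinations

def ols_loss(pts):
    """Closed-form least-squares loss of fitting y = a*x + b to pts."""
    n = len(pts)
    if n < 2:
        return 0.0
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    d = n * sxx - sx * sx
    if d == 0:
        return 0.0
    a = (n * sxy - sx * sy) / d
    b = (sy - a * sx) / n
    return sum((y - (a * x + b)) ** 2 for x, y in pts)

def lts_bb(points, h):
    """Branch and bound over h-element subsets. The bound is valid
    because the least-squares loss is monotone under subset inclusion:
    the loss of the points committed so far lower-bounds any completion."""
    n = len(points)
    best = [float("inf"), None]

    def rec(i, chosen):
        if len(chosen) == h:
            loss = ols_loss(chosen)
            if loss < best[0]:
                best[0], best[1] = loss, list(chosen)
            return
        if n - i < h - len(chosen):
            return                          # too few points remain
        if ols_loss(chosen) >= best[0]:
            return                          # prune this branch
        rec(i + 1, chosen + [points[i]])    # include point i
        rec(i + 1, chosen)                  # exclude point i

    rec(0, [])
    return best[0], best[1]

# Seven points near y = 2x plus three gross outliers; keep h = 7.
points = [(0, 0.1), (1, 2.05), (2, 3.9), (3, 6.1), (4, 8.0),
          (5, 9.95), (6, 12.1), (1.5, 20.0), (3.5, -15.0), (5.5, 30.0)]
loss, subset = lts_bb(points, 7)

# Sanity check against exhaustive enumeration of all C(10, 7) subsets.
brute_loss = min(ols_loss(list(s)) for s in combinations(points, 7))
print(loss, brute_loss)
```

In higher dimensions this naive bound prunes far too little, which is precisely why the relaxation-based bounds described in the abstract are needed.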