A unified framework for solving a general class of conditional and robust set-membership estimation problems
In this paper we present a unified framework for solving a general class of
problems arising in the context of set-membership estimation/identification
theory. More precisely, the paper aims at providing an original approach for
the computation of optimal conditional and robust projection estimates in a
nonlinear estimation setting where the operator relating the data and the
parameter to be estimated is assumed to be a generic multivariate polynomial
function and the uncertainties affecting the data are assumed to belong to
semialgebraic sets. By noticing that the computation of both the conditional
and the robust projection optimal estimators requires the solution to min-max
optimization problems that share the same structure, we propose a unified
two-stage approach based on semidefinite-relaxation techniques for solving such
estimation problems. The key idea of the proposed procedure is to recognize
that the optimal functional of the inner optimization problems can be
approximated to any desired precision by a multivariate polynomial function by
suitably exploiting recently proposed results in the field of parametric
optimization. Two simulation examples are reported to show the effectiveness of
the proposed approach.
Comment: Accepted for publication in the IEEE Transactions on Automatic
Control (2014)
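As an illustrative sketch (schematic notation assumed here, not taken from the
paper), both the conditional and the robust projection estimators fit a
min-max template of the form

  \hat{x}^{*} \in \arg\min_{\hat{x} \in \mathcal{X}} \; \max_{(x,w) \in \mathcal{S}} \; \| \hat{x} - x \|,

where x is the parameter to be estimated, w collects the uncertainties, and
\mathcal{S} is the semialgebraic set of parameter/uncertainty pairs consistent
with the data through the polynomial measurement operator. The inner maximum
defines a value function of \hat{x}; approximating it to the desired precision
by a multivariate polynomial reduces the outer problem to polynomial
optimization, which can then be attacked by semidefinite relaxation.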
A Geometric View on Constrained M-Estimators
We study the estimation error of constrained M-estimators and derive explicit
upper bounds on the expected estimation error in terms of the Gaussian width
of the constraint set. We consider both the case where the true parameter lies
on the boundary of the constraint set (matched constraint) and the case where
it lies strictly inside the constraint set (mismatched constraint). For both
cases, we derive novel universal estimation error bounds for regression in a
generalized linear model with the canonical link function. Our error bound for
the mismatched constraint case is minimax optimal in terms of its dependence
on the sample size, for Gaussian linear regression with the Lasso.
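For intuition, a schematic version (notation assumed here, not the paper's
own) of a constrained M-estimator and the type of Gaussian-width bound
discussed is

  \hat{\theta} \in \arg\min_{\theta \in \mathcal{K}} \; \frac{1}{n} \sum_{i=1}^{n} \ell(y_i, \langle a_i, \theta \rangle), \qquad \mathbb{E}\,\|\hat{\theta} - \theta^{*}\|_2 \;\lesssim\; \frac{w(\mathcal{K}')}{\sqrt{n}},

where \mathcal{K} is the constraint set, \ell is the loss induced by the
canonical link of the generalized linear model, w(\cdot) denotes the Gaussian
width, and \mathcal{K}' stands for the constraint set or a localized/tangent
version of it depending on whether the constraint is matched or mismatched;
constants and the exact localization are problem dependent.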
A Family of Subgradient-Based Methods for Convex Optimization Problems in a Unifying Framework
We propose a new family of subgradient- and gradient-based methods which
converges with optimal complexity for convex optimization problems whose
feasible region is simple enough. This includes cases where the objective
function is non-smooth, smooth, has composite/saddle structure, or is given by
an inexact oracle model. We unify the construction of the subproblems that
must be solved at each iteration of these methods, which lets us analyze their
convergence in a unified way, whereas previous results required a different
approach for each method/algorithm. Our contribution relies on two well-known
methods in non-smooth convex optimization: the mirror-descent method of
Nemirovski-Yudin and the dual-averaging method of Nesterov. Our family of
methods therefore includes them, and many other methods, as particular cases.
For instance, the proposed family of classical gradient methods and its
accelerations generalize Devolder et al.'s and Nesterov's primal/dual gradient
methods, and Tseng's accelerated proximal gradient methods. Some methods in
our family can also be obtained as special cases of other universal methods.
As an additional contribution, the novel extended mirror-descent method
removes the compactness assumption on the feasible region and the need to fix
the total number of iterations in advance, both of which the original
mirror-descent method requires in order to attain the optimal complexity.
Comment: 31 pages. v3: Major revision. Research Report B-477, Department of
Mathematical and Computing Sciences, Tokyo Institute of Technology, February
201
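As a concrete illustration of one of the two building blocks, here is a
minimal entropic mirror-descent sketch on the probability simplex; the
function names, step size, and test objective are illustrative assumptions,
not the paper's code.

import numpy as np

def mirror_descent_simplex(subgrad, x0, n_iters=1000, step=0.1):
    """Entropic mirror descent on the probability simplex.

    A minimal sketch of the classical Nemirovski-Yudin scheme that the
    proposed family generalizes; subgrad(x) must return a subgradient of
    the objective at x. Defaults are illustrative only.
    """
    x = np.asarray(x0, dtype=float)
    x_avg = np.zeros_like(x)
    for t in range(1, n_iters + 1):
        g = subgrad(x)
        # Multiplicative (entropic) update followed by renormalization:
        # the Bregman projection onto the simplex under the KL divergence.
        x = x * np.exp(-step * g)
        x /= x.sum()
        x_avg += (x - x_avg) / t  # running average of the iterates
    return x_avg

# Example: minimize the non-smooth function f(x) = max_i c_i * x_i
# over the simplex (a toy problem, purely for illustration).
c = np.array([3.0, 1.0, 2.0])
sg = lambda x: c * (np.arange(len(c)) == np.argmax(c * x))
print(mirror_descent_simplex(sg, np.ones(3) / 3))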
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in
contemporary signal processing, where the goal is to reconstruct an unknown
signal from partial, indirect, and possibly noisy measurements of it. A now
standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low complexity. Popular examples of
such priors include sparsity and group sparsity (to capture the
compressibility of natural signals and images), total variation and analysis
sparsity (to promote piecewise regularity), and low rank (as a natural
extension of sparsity to matrix-valued data). Our aim is to provide a unified
treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a broad spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ2-stability and model
(manifold) identification; (ii) sensitivity analysis to perturbations of the
parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
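As a concrete illustration of item (iii), here is a minimal forward-backward
(proximal gradient) sketch for the l1-regularized least-squares problem
min_x 0.5*||Ax - y||^2 + lam*||x||_1; the synthetic data, regularization
weight, and iteration count are illustrative assumptions, not taken from the
chapter.

import numpy as np

def forward_backward_l1(A, y, lam, n_iters=500):
    """Forward-backward splitting (ISTA) for l1-regularized least squares.

    A minimal sketch: gradient step on the smooth data-fidelity term,
    then the proximal operator of the l1 norm (soft-thresholding).
    """
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    for _ in range(n_iters):
        z = x - step * A.T @ (A @ x - y)    # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox) step
    return x

# Tiny usage example with synthetic sparse data (purely illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = forward_backward_l1(A, y, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))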