Blind Minimax Estimation
We consider the linear regression problem of estimating an unknown,
deterministic parameter vector based on measurements corrupted by colored
Gaussian noise. We present and analyze blind minimax estimators (BMEs), which
consist of a bounded parameter set minimax estimator, whose parameter set is
itself estimated from measurements. Thus, one does not require any prior
assumption or knowledge, and the proposed estimator can be applied to any
linear regression problem. We demonstrate analytically that the BMEs strictly
dominate the least-squares estimator, i.e., they achieve lower mean-squared
error for any value of the parameter vector. Both Stein's estimator and its
positive-part correction can be derived within the blind minimax framework.
Furthermore, our approach can be readily extended to a wider class of
estimation problems than Stein's estimator, which is defined only for white
noise and non-transformed measurements. We show through simulations that the
BMEs generally outperform previous extensions of Stein's technique.
Comment: 12 pages, 7 figures
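As a small numerical illustration of the dominance result the abstract cites, the sketch below implements the positive-part James–Stein estimator for the white-noise case ($y = \theta + n$, $n \sim N(0, I)$) and compares its empirical MSE to least squares. This is only the classical special case the abstract mentions, not the full blind minimax construction (colored noise, estimated parameter set); the dimensions and seed are illustrative choices.

```python
import numpy as np

def positive_part_james_stein(y, sigma2=1.0):
    """Positive-part James-Stein shrinkage of the LS estimate y of theta,
    for the white-noise model y = theta + n, n ~ N(0, sigma2 * I), dim >= 3."""
    d = y.size
    shrink = max(0.0, 1.0 - (d - 2) * sigma2 / np.dot(y, y))
    return shrink * y

rng = np.random.default_rng(0)
theta = np.zeros(10)          # parameter value where shrinkage helps most
trials = 2000
mse_ls = mse_js = 0.0
for _ in range(trials):
    y = theta + rng.standard_normal(theta.size)   # LS estimate is y itself
    mse_ls += np.sum((y - theta) ** 2)
    mse_js += np.sum((positive_part_james_stein(y) - theta) ** 2)
print(mse_ls / trials, mse_js / trials)   # shrinkage lowers the average MSE
```

At $\theta = 0$ the gap is largest; the dominance claim in the abstract is that the (blind minimax) shrinkage never does worse for *any* $\theta$, which this one-point experiment only hints at.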
Compressed Sensing over $\ell_p$-balls: Minimax Mean Square Error
We consider the compressed sensing problem, where the object $x_0 \in \mathbb{R}^N$
is to be recovered from incomplete measurements $y = A x_0 + z$; here the
sensing matrix $A$ is an $n \times N$ random matrix with iid Gaussian entries
and $n < N$. A popular method of sparsity-promoting reconstruction is
$\ell_1$-penalized least-squares reconstruction (aka LASSO, Basis Pursuit).
It is currently popular to consider the strict sparsity model, where the
object is nonzero in only a small fraction of entries. In this paper, we
instead consider the much more broadly applicable $\ell_p$-sparsity model,
where $x_0$ is sparse in the sense of having $\ell_p$ norm bounded by
$\xi \cdot N^{1/p}$ for some fixed $0 < p \le 1$ and $\xi > 0$.
We study an asymptotic regime in which $n$ and $N$ both tend to infinity with
limiting ratio $n/N = \delta \in (0,1)$, both in the noisy ($z \neq 0$) and
noiseless ($z = 0$) cases. Under weak assumptions on $x_0$, we are able to
precisely evaluate the worst-case asymptotic minimax mean-squared
reconstruction error (AMSE) for $\ell_1$-penalized least-squares: min over
penalization parameters, max over $\ell_p$-sparse objects $x_0$. We exhibit the
asymptotically least-favorable object (hardest sparse signal to recover) and
the maximin penalization.
Our explicit formulas unexpectedly involve quantities appearing classically
in statistical decision theory. Occurring in the present setting, they reflect
a deeper connection between $\ell_1$-penalized minimization and scalar soft
thresholding. This connection, which follows from earlier work of the authors
and collaborators on the AMP iterative thresholding algorithm, is carefully
explained.
Our approach also gives precise results under weak-$\ell_p$ ball coefficient
constraints, as we show here.
Comment: 41 pages, 11 PDF figures
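The scalar soft-thresholding nonlinearity is the building block behind the $\ell_1$/AMP connection the abstract describes. The sketch below defines it and verifies, by brute-force grid search, that it is the exact minimizer of the one-dimensional $\ell_1$-penalized least-squares problem; the specific values of $y$ and $\lambda$ are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Scalar soft thresholding eta(x; lam) = sign(x) * max(|x| - lam, 0),
    the proximal operator of lam * |b| applied coordinatewise."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# One-dimensional check: soft thresholding solves
#   argmin_b  0.5 * (y - b)^2 + lam * |b|
y, lam = 2.5, 1.0
b_star = soft_threshold(np.array([y]), lam)[0]   # closed form gives 1.5
grid = np.linspace(-5.0, 5.0, 100001)
b_grid = grid[np.argmin(0.5 * (y - grid) ** 2 + lam * np.abs(grid))]
print(b_star, b_grid)   # closed form and grid search agree
```

In the AMSE formulas the abstract refers to, this scalar rule appears with an effective noise level determined by the asymptotic regime; the sketch only demonstrates the proximal-operator identity itself.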
A mathematical framework for new fault detection schemes in nonlinear stochastic continuous-time dynamical systems
In this work, a mathematical unifying framework for designing new fault detection schemes in nonlinear stochastic continuous-time dynamical systems is developed. These schemes are based on a stochastic process, called the residual, which reflects the system behavior and whose changes are to be detected. A quickest detection scheme for the residual is proposed, which is based on the computed likelihood ratios for time-varying statistical changes in the Ornstein–Uhlenbeck process. Several expressions are provided, depending on a priori knowledge of the fault, which can be employed in a proposed CUSUM-type approximated scheme. This general setting gathers different existing fault detection schemes within a unifying framework, and allows for the definition of new ones. A comparative simulation example illustrates the behavior of the proposed schemes.
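To fix ideas, the sketch below implements a textbook CUSUM recursion for detecting a mean shift in an i.i.d. Gaussian residual sequence. This is a deliberately simplified stand-in: the paper's scheme uses likelihood ratios for time-varying changes in an Ornstein–Uhlenbeck process, which this sketch does not implement, and all parameters (shift size, threshold, change point) are hypothetical.

```python
import numpy as np

def cusum_alarm(residuals, mu0=0.0, mu1=1.0, sigma=1.0, threshold=8.0):
    """Basic CUSUM for a mean shift mu0 -> mu1 in i.i.d. Gaussian residuals.
    Accumulates the log-likelihood ratio, resetting at 0 (quickest detection),
    and returns the first index where the statistic crosses the threshold."""
    s = 0.0
    for k, r in enumerate(residuals):
        # log-likelihood-ratio increment for N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr = (mu1 - mu0) * (r - 0.5 * (mu0 + mu1)) / sigma**2
        s = max(0.0, s + llr)
        if s > threshold:
            return k
    return None

rng = np.random.default_rng(1)
pre = rng.normal(0.0, 1.0, 200)     # healthy behaviour
post = rng.normal(1.0, 1.0, 100)    # fault: mean shifts at index 200
alarm = cusum_alarm(np.concatenate([pre, post]))
print(alarm)   # alarm typically fires shortly after the change point
```

The threshold trades mean time between false alarms against detection delay, which is the quickest-detection trade-off the proposed residual-based schemes address in the continuous-time setting.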
Near-Optimal Recovery of Linear and N-Convex Functions on Unions of Convex Sets
In this paper we build provably near-optimal, in the minimax sense, estimates
of linear forms and, more generally, "N-convex functionals" (the simplest
example being the maximum of several fractional-linear functions) of unknown
"signal" known to belong to the union of finitely many convex compact sets from
indirect noisy observations of the signal. Our main assumption is that the
observation scheme in question is good in the sense of A. Goldenshluger, A.
Juditsky, A. Nemirovski, Electr. J. Stat. 9(2) (2015), arXiv:1311.6765, the
simplest example being the Gaussian scheme where the observation is the sum of
a linear image of the signal and standard Gaussian noise. The proposed
estimates, as well as upper bounds on their worst-case risks, stem from solutions
to explicit convex optimization problems, making the estimates
"computation-friendly".
Regularization and Bayesian Learning in Dynamical Systems: Past, Present and Future
Regularization and Bayesian methods for system identification have been
repopularized in recent years, and proved to be competitive w.r.t.
classical parametric approaches. In this paper we shall make an attempt to
illustrate how the use of regularization in system identification has evolved
over the years, starting from the early contributions both in the Automatic
Control as well as Econometrics and Statistics literature. In particular we
shall discuss some fundamental issues such as compound estimation problems and
exchangeability which play an important role in regularization and Bayesian
approaches, as also illustrated in early publications in Statistics. The
historical and foundational issues will be given more emphasis (and space), at
the expense of the more recent developments which are only briefly discussed.
The main reason for such a choice is that, while the recent literature is
readily available, and surveys have already been published on the subject, in
the author's opinion a clear link with past work had not been completely
clarified.
Comment: Plenary Presentation at the IFAC SYSID 2015. Submitted to Annual
Reviews in Control
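The regularization/Bayesian viewpoint the survey discusses can be sketched in a few lines for FIR identification: regularized least squares with a decaying-correlation kernel prior on the impulse response, whose estimate coincides with the Bayesian posterior mean. The kernel form and all parameters below are simplified, illustrative choices (a TC-style kernel), not the specific estimators discussed in the survey.

```python
import numpy as np

def regularized_fir(u, y, n=20, gamma=1.0, alpha=0.8, noise_var=0.1):
    """Regularized LS estimate of an FIR model y = Phi @ g + noise.
    Prior covariance K[i, j] = gamma * alpha**max(i, j) (a simplified
    TC / 'stable spline'-style kernel) encodes exponentially decaying,
    correlated impulse responses; the estimate is the posterior mean
    g_hat = K Phi' (Phi K Phi' + noise_var I)^{-1} y."""
    N = len(y)
    # Toeplitz regressor matrix of past inputs: Phi[i, j] = u[i - j]
    Phi = np.zeros((N, n))
    for i in range(N):
        for j in range(min(i + 1, n)):
            Phi[i, j] = u[i - j]
    idx = np.arange(n)
    K = gamma * alpha ** np.maximum.outer(idx, idx)
    S = Phi @ K @ Phi.T + noise_var * np.eye(N)
    return K @ Phi.T @ np.linalg.solve(S, y)

rng = np.random.default_rng(2)
g_true = 0.7 ** np.arange(20)              # decaying true impulse response
u = rng.standard_normal(300)
y = np.convolve(u, g_true)[:300] + 0.1 * rng.standard_normal(300)
g_hat = regularized_fir(u, y)
```

The kernel hyperparameters ($\gamma$, $\alpha$, noise variance) would in practice be tuned, e.g. by marginal-likelihood maximization, which is exactly where the empirical-Bayes and compound-estimation themes of the survey enter.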