On detecting harmonic oscillations
In this paper, we focus on the following testing problem: assume that we are
given observations of a real-valued signal along a regular grid, corrupted by
white Gaussian noise. We want to distinguish between two hypotheses: (a) the
signal is a nuisance, that is, a linear combination of harmonic oscillations of
known frequencies, and (b) the signal is the sum of a nuisance and a linear
combination of a given number of harmonic oscillations with unknown
frequencies, such that the distance (measured in the uniform norm on the grid)
between the signal and the set of nuisances is at least a given threshold. We
propose a computationally efficient test for distinguishing between (a) and (b)
and show that its "resolution" (the smallest value of the threshold for which
(a) and (b) are distinguished with a given confidence) admits an explicit bound
whose hidden factor is independent of the frequencies in question. We show that
this resolution is, up to a factor polynomial and logarithmic in the problem
parameters, the best possible under the circumstances. We further extend the
outlined results to the case of nuisances and signals close to linear
combinations of harmonic oscillations, and provide illustrative numerical
results.
Comment: Published at http://dx.doi.org/10.3150/14-BEJ600 in the Bernoulli
(http://isi.cbs.nl/bernoulli/) by the International Statistical
Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
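As a quick illustration of the setup, the sketch below simulates the two hypotheses: a nuisance built from harmonic oscillations with known frequencies, and the same nuisance plus an oscillation with an unknown frequency, both observed in white Gaussian noise. The grid length, frequencies, amplitudes, and noise level are illustrative assumptions, and the snippet only generates data; it does not implement the test proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 512                      # grid length (illustrative)
    t = np.arange(N)
    sigma = 0.5                  # noise level (illustrative)

    # (a) nuisance: linear combination of harmonic oscillations with KNOWN frequencies
    known_freqs = [0.05, 0.12]                      # cycles per sample, assumed known
    nuisance = 1.0 * np.cos(2 * np.pi * known_freqs[0] * t) \
             + 0.7 * np.sin(2 * np.pi * known_freqs[1] * t)

    # (b) nuisance plus a harmonic oscillation with an UNKNOWN frequency
    unknown_freq = 0.23                             # unknown to the statistician
    signal_b = nuisance + 0.4 * np.cos(2 * np.pi * unknown_freq * t)

    # observations under each hypothesis: signal corrupted by white Gaussian noise
    y_a = nuisance + sigma * rng.standard_normal(N)
    y_b = signal_b + sigma * rng.standard_normal(N)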
Near-Optimal Recovery of Linear and N-Convex Functions on Unions of Convex Sets
In this paper we build provably near-optimal, in the minimax sense, estimates
of linear forms and, more generally, of "N-convex functionals" (the simplest
example being the maximum of several fractional-linear functions) of an unknown
"signal" known to belong to the union of finitely many convex compact sets,
from indirect noisy observations of the signal. Our main assumption is that the
observation scheme in question is good in the sense of A. Goldenshluger, A.
Juditsky, A. Nemirovski, Electron. J. Stat. 9(2) (2015), arXiv:1311.6765, the
simplest example being the Gaussian scheme, where the observation is the sum of
a linear image of the signal and standard Gaussian noise. The proposed
estimates, as well as the upper bounds on their worst-case risks, stem from
solutions to explicit convex optimization problems, making the estimates
"computation-friendly".
Solving Variational Inequalities with Monotone Operators on Domains Given by Linear Minimization Oracles
The standard algorithms for solving large-scale convex-concave saddle point
problems, or, more generally, variational inequalities with monotone operators,
are proximal-type algorithms which at every iteration need to compute a
prox-mapping, that is, to minimize over the problem's domain the sum of a
linear form and the specific convex distance-generating function underlying the
algorithm in question. Relative computational simplicity of prox-mappings,
which is the standard requirement when implementing proximal algorithms,
clearly implies the possibility to equip the domain with a relatively
computationally cheap Linear Minimization Oracle (LMO) able to minimize linear
forms over it. There are, however, important situations where a cheap LMO is
indeed available, but where no proximal setup with easy-to-compute prox-mappings
is known. This fact motivates our goal in this paper, which is to develop
techniques for solving variational inequalities with monotone operators on
domains given by Linear Minimization Oracles. The techniques we develop can be
viewed as a substantial extension of the method, proposed in [5], for nonsmooth
convex minimization over an LMO-represented domain.
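The gap between the two oracles is easiest to see on a concrete domain. The sketch below uses the nuclear-norm ball of matrices as an assumed example: its LMO only needs the leading singular vector pair of the linear form's matrix, whereas the Euclidean prox-mapping (here, a projection) needs a full singular value decomposition. The example is illustrative and does not reproduce the algorithms of the paper.

    import numpy as np

    def lmo_nuclear_ball(G, radius=1.0):
        """Linear Minimization Oracle for {X : ||X||_nuclear <= radius}:
        argmin_X <G, X> is -radius * u1 v1^T with (u1, v1) the leading singular
        vector pair of G -- only one singular triplet is needed."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)  # a partial SVD would suffice
        return -radius * np.outer(U[:, 0], Vt[0, :])

    def prox_nuclear_ball(Y, radius=1.0):
        """Euclidean prox-mapping (projection of Y onto the nuclear-norm ball):
        requires the FULL SVD of Y plus projecting its singular values onto the
        simplex of radius `radius` -- much more expensive for large matrices."""
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        if s.sum() <= radius:
            return Y
        # project the (descending) singular values onto {x >= 0, sum x = radius}
        cssv = np.cumsum(s) - radius
        rho = np.nonzero(s - cssv / (np.arange(len(s)) + 1) > 0)[0][-1]
        theta = cssv[rho] / (rho + 1.0)
        return U @ np.diag(np.maximum(s - theta, 0.0)) @ Vt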
Non-asymptotic confidence bounds for the optimal value of a stochastic program
We discuss a general approach to building non-asymptotic confidence bounds
for stochastic optimization problems. Our principal contribution is the
observation that a Sample Average Approximation of a problem supplies upper and
lower bounds for the optimal value of the problem which are essentially better
than the quality of the corresponding optimal solutions. At the same time, such
bounds are more reliable than "standard" confidence bounds obtained through the
asymptotic approach. We also discuss bounding the optimal value of MinMax
Stochastic Optimization problems and of stochastically constrained problems. We
conclude with a simulation study illustrating the numerical behavior of the
proposed bounds.
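One standard way to extract such bounds from Sample Average Approximation, sketched below on a toy newsvendor problem, is to average the optimal values of several independent SAA replications (these underestimate the true optimal value on average, giving a statistical lower bound) and to evaluate a fixed candidate solution on a fresh sample (which overestimates it on average, giving an upper bound). The problem data, sample sizes, and normal quantile are illustrative assumptions, and the scheme is the generic SAA bounding recipe rather than the specific bounds developed in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    c, p = 1.0, 2.0                                    # unit cost and selling price (illustrative)
    demand = lambda n: rng.exponential(10.0, size=n)   # demand distribution (assumed)

    def cost(x, d):
        """Per-scenario cost of ordering x when demand is d (newsvendor)."""
        return c * x - p * np.minimum(x, d)

    def saa_solve(n):
        """Solve one SAA replication by grid search over the order quantity."""
        d = demand(n)
        grid = np.linspace(0.0, 50.0, 501)
        vals = np.array([cost(x, d).mean() for x in grid])
        k = vals.argmin()
        return grid[k], vals[k]

    # Lower bound: SAA optimal values underestimate the true optimal value on
    # average, so the mean of M independent replications yields a statistical
    # lower bound (one-sided normal quantile 1.645, illustrative).
    M, n = 20, 500
    lower_vals = np.array([saa_solve(n)[1] for _ in range(M)])
    lb = lower_vals.mean() - 1.645 * lower_vals.std(ddof=1) / np.sqrt(M)

    # Upper bound: any fixed feasible solution evaluated on a fresh sample
    # overestimates the optimal value on average.
    x_hat, _ = saa_solve(n)
    fresh = cost(x_hat, demand(20000))
    ub = fresh.mean() + 1.645 * fresh.std(ddof=1) / np.sqrt(fresh.size)

    # combining the two one-sided bounds gives roughly 90% joint coverage
    print(f"approximate confidence interval for the optimal value: [{lb:.3f}, {ub:.3f}]")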
Decomposition Techniques for Bilinear Saddle Point Problems and Variational Inequalities with Affine Monotone Operators on Domains Given by Linear Minimization Oracles
The majority of First Order methods for large-scale convex-concave saddle
point problems and variational inequalities with monotone operators are
proximal algorithms which at every iteration need to minimize over the
problem's domain X the sum of a linear form and a strongly convex function. To
make such an algorithm practical, X should be proximal-friendly -- admit a
strongly convex function with easy-to-minimize linear perturbations. As a
byproduct, X admits a computationally cheap Linear Minimization Oracle (LMO)
capable of minimizing linear forms over X. There are, however, important
situations where a cheap LMO is indeed available, but X is not
proximal-friendly, which motivates the search for algorithms based solely on
LMOs. For smooth convex minimization, there exists a classical LMO-based
algorithm -- Conditional Gradient. In contrast, the LMO-based techniques known
to us for other problems with convex structure (nonsmooth convex minimization,
convex-concave saddle point problems, even as simple as bilinear ones, and
variational inequalities with monotone operators, even as simple as affine
ones) are quite recent and utilize a common approach based on Fenchel-type
representations of the associated objectives/vector fields. The goal of this
paper is to develop alternative (and seemingly much simpler) LMO-based
decomposition techniques for bilinear saddle point problems and for variational
inequalities with affine monotone operators.
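For reference, the classical LMO-based algorithm mentioned above, Conditional Gradient (also known as Frank-Wolfe), has the following minimal shape; the quadratic objective, the simplex domain, and the step-size rule are illustrative assumptions, and the decomposition techniques of the paper are not reproduced here.

    import numpy as np

    def lmo_simplex(g):
        """LMO for the standard simplex: argmin_{x in simplex} <g, x> is the
        vertex at the smallest component of g."""
        x = np.zeros_like(g)
        x[np.argmin(g)] = 1.0
        return x

    def conditional_gradient(grad, x0, lmo, iters=200):
        """Conditional Gradient (Frank-Wolfe) for smooth convex minimization:
        each iteration calls the LMO on the current gradient and takes a convex
        combination step toward the returned point."""
        x = x0.copy()
        for k in range(iters):
            s = lmo(grad(x))
            gamma = 2.0 / (k + 2.0)          # classical step-size rule
            x = (1.0 - gamma) * x + gamma * s
        return x

    # illustrative smooth convex objective: f(x) = 0.5 * ||A x - b||^2 over the simplex
    rng = np.random.default_rng(2)
    A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
    grad = lambda x: A.T @ (A @ x - b)
    x_star = conditional_gradient(grad, np.full(10, 0.1), lmo_simplex)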
Nonparametric estimation by convex programming
The problem we concentrate on is as follows: given (1) a convex compact set X
in R^n, an affine mapping x -> A(x), a parametric family {p_mu(·)} of
probability densities and (2) i.i.d. observations of a random variable
distributed with the density p_{A(x)}(·) for some (unknown) x in X, estimate
the value of a given linear form at x. For several families {p_mu(·)} with no
additional assumptions on X and A, we develop computationally efficient
estimation routines which are minimax optimal, within an absolute constant
factor. We then apply these routines to recovering x itself in the Euclidean
norm.
Comment: Published at http://dx.doi.org/10.1214/08-AOS654 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org).