Limited-memory BFGS Systems with Diagonal Updates
In this paper, we investigate a formula to solve systems of the form (B +
\sigma I)x = y, where B is a limited-memory BFGS quasi-Newton matrix and
\sigma is a positive constant. Systems of this type arise naturally in
large-scale optimization, for example in trust-region methods as well as
doubly-augmented Lagrangian methods. We show that, provided a simple condition
holds on B_0 and \sigma, the system (B + \sigma I)x = y can be solved via a
recursion formula that requires only vector inner products. This formula has
complexity O(M^2 n), where M is the number of L-BFGS updates and n >> M is the
dimension of x.
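The paper's inner-product recursion is not reproduced here, but the same system can be solved at comparable cost via the Byrd-Nocedal-Schnabel compact representation of B together with the Sherman-Morrison-Woodbury identity. The following NumPy sketch assumes B_0 = \gamma I; the function name and structure are illustrative, not taken from the paper. It reduces the n-dimensional solve to a single 2M x 2M linear system:

```python
import numpy as np

def bfgs_compact_solve(gamma, S, Y, sigma, rhs):
    """Solve (B + sigma*I) x = rhs, where B is the L-BFGS matrix with
    B0 = gamma*I built from the update pairs stored as columns of S and Y,
    using the Byrd-Nocedal-Schnabel compact representation
        B = B0 - Psi K^{-1} Psi^T,  Psi = [B0 S, Y],
    and the Sherman-Morrison-Woodbury identity.  Illustrative sketch,
    not the paper's recursion; assumes s_i^T y_i > 0 for all pairs."""
    SY = S.T @ Y
    D = np.diag(np.diag(SY))            # diag(s_i^T y_i)
    L = np.tril(SY, -1)                 # strictly lower part of S^T Y
    K = np.block([[gamma * (S.T @ S), L],
                  [L.T, -D]])
    Psi = np.hstack([gamma * S, Y])     # [B0 S, Y] with B0 = gamma*I
    a = gamma + sigma                   # B0 + sigma*I = a*I is diagonal
    # Woodbury: (a*I - Psi K^{-1} Psi^T)^{-1} rhs; only a 2M x 2M solve.
    Ainv_rhs = rhs / a
    Ainv_Psi = Psi / a
    small = -K + Psi.T @ Ainv_Psi
    return Ainv_rhs - Ainv_Psi @ np.linalg.solve(small, Psi.T @ Ainv_rhs)
```

Forming S.T @ Y, S.T @ S, and the two products with Psi costs O(M^2 n), consistent with the complexity quoted above, since the dense solve involves only the small 2M x 2M matrix.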
On Solving L-SR1 Trust-Region Subproblems
In this article, we consider solvers for large-scale trust-region subproblems
when the quadratic model is defined by a limited-memory symmetric rank-one
(L-SR1) quasi-Newton matrix. We propose a solver that exploits the compact
representation of L-SR1 matrices. Our approach makes use of both an orthonormal
basis for the eigenspace of the L-SR1 matrix and the Sherman-Morrison-Woodbury
formula to compute global solutions to trust-region subproblems. To compute the
optimal Lagrange multiplier for the trust-region constraint, we use Newton's
method with a judicious initial guess that does not require safeguarding. A
crucial property of this solver is that it is able to compute high-accuracy
solutions even in the so-called hard case. Additionally, the optimal solution
is determined directly by formula, not iteratively. Numerical experiments
demonstrate the effectiveness of this solver.
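The ingredients the abstract names (an eigenbasis of the quasi-Newton matrix and a Newton iteration on the optimal multiplier) can be illustrated with a small dense stand-in. The sketch below uses a full eigendecomposition in place of the L-SR1 compact-representation machinery, and a common positive-definiteness-restoring initial guess that is not necessarily the paper's; it also does not handle the hard case:

```python
import numpy as np

def trust_region_multiplier(B, g, delta, tol=1e-12, max_iter=100):
    """Find sigma >= 0 with ||(B + sigma*I)^{-1} g|| = delta by Newton's
    method on the secular equation phi(sigma) = 1/||p(sigma)|| - 1/delta.
    Dense illustrative stand-in for a limited-memory trust-region solver;
    assumes the easy case (the gradient has a component along the
    eigenvector of the smallest eigenvalue)."""
    lam, V = np.linalg.eigh(B)          # lam sorted ascending
    a = V.T @ g                         # gradient in the eigenbasis
    # Start to the right of -lambda_min so B + sigma*I is positive definite.
    sigma = max(0.0, -lam[0]) + np.linalg.norm(g) / delta
    for _ in range(max_iter):
        d = lam + sigma
        pnorm = np.sqrt(np.sum((a / d) ** 2))
        phi = 1.0 / pnorm - 1.0 / delta
        if abs(phi) < tol:
            break
        dphi = np.sum(a ** 2 / d ** 3) / pnorm ** 3
        sigma -= phi / dphi             # Newton step on phi
    p = -V @ (a / (lam + sigma))        # minimizer on the boundary
    return sigma, p
```

The secular equation phi is nearly linear in sigma on the interval of interest, which is why Newton's method with a good starting point converges rapidly here; the paper's contribution is doing this for L-SR1 matrices without safeguarding and with high accuracy even in the hard case.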
Compressed sensing performance bounds under Poisson noise
This paper describes performance bounds for compressed sensing (CS) where the
underlying sparse or compressible (sparsely approximable) signal is a vector of
nonnegative intensities whose measurements are corrupted by Poisson noise. In
this setting, standard CS techniques cannot be applied directly for several
reasons. First, the usual signal-independent and/or bounded noise models do not
apply to Poisson noise, which is non-additive and signal-dependent. Second, the
CS matrices typically considered are not feasible in real optical systems
because they do not adhere to important constraints, such as nonnegativity and
photon flux preservation. Third, the typical \ell_2-\ell_1 minimization
leads to overfitting in the high-intensity regions and oversmoothing in the
low-intensity areas. In this paper, we describe how a feasible positivity- and
flux-preserving sensing matrix can be constructed, and then analyze the
performance of a CS reconstruction approach for Poisson data that minimizes an
objective function consisting of a negative Poisson log likelihood term and a
penalty term which measures signal sparsity. We show that, as the overall
intensity of the underlying signal increases, an upper bound on the
reconstruction error decays at an appropriate rate (depending on the
compressibility of the signal), but that for a fixed signal intensity, the
signal-dependent part of the error bound actually grows with the number of
measurements or sensors. This surprising fact is both proved theoretically and
justified based on physical intuition. Comment: 12 pages, 3 pdf figures; accepted for publication in IEEE
Transactions on Signal Processing.
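The two modeling ingredients in the abstract can be sketched concretely. One simple way (not necessarily the paper's exact construction) to obtain a nonnegative, flux-preserving sensing matrix is to shift and rescale a Rademacher matrix so every entry is nonnegative and every column sums to at most one (no photon gain); the objective below is the penalized negative Poisson log-likelihood form the abstract describes, with an illustrative choice of penalty weight:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 32, 128                          # measurements x signal dimension

# Shifted/rescaled Rademacher matrix: entries in {0, 1/m}, so it is
# nonnegative and each column sums to at most 1 (flux-preserving).
R = rng.choice([-1.0, 1.0], size=(m, n))
A = (R + 1.0) / (2.0 * m)

f = np.zeros(n)                         # sparse vector of intensities
f[rng.choice(n, 5, replace=False)] = 1000.0
y = rng.poisson(A @ f)                  # Poisson-corrupted measurements

def objective(fhat, lam=1.0, eps=1e-12):
    """Negative Poisson log-likelihood plus an l1 sparsity penalty
    (the form of objective analyzed in the paper, up to constants;
    lam and eps are illustrative choices)."""
    mu = A @ fhat + eps
    nll = np.sum(mu - y * np.log(mu))   # log(y!) constant dropped
    return nll + lam * np.sum(np.abs(fhat))
```

Note that unlike the usual bounded-noise setting, the measurement noise here depends on A @ f itself, which is what drives the signal-dependent term in the error bound discussed above.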