Revision of TR-09-25: A Hybrid Variational/Ensemble Filter Approach to Data Assimilation
Two families of methods are widely used in data assimilation: the
four-dimensional variational (4D-Var) approach and the ensemble Kalman filter
(EnKF) approach. The two families have been developed largely through parallel
research efforts. Each method has its advantages and disadvantages. It is of
interest to develop hybrid data assimilation
algorithms that can combine the relative strengths of the two approaches.
This paper proposes a subspace approach to investigate the theoretical equivalence between the suboptimal
4D-Var method (where only a small number of optimization iterations are
performed) and the practical EnKF method (where only a small number of ensemble
members are used) in a linear Gaussian setting. The analysis motivates a new
hybrid algorithm: the optimization directions obtained from a short window
4D-Var run are used to construct the EnKF initial ensemble.
The proposed hybrid method is computationally less expensive than a full
4D-Var, as only short assimilation windows are considered. The hybrid method has the potential to
perform better than the regular EnKF due to its look-ahead property.
Numerical results
show that the proposed hybrid ensemble filter method performs better than the
regular EnKF method for both linear and nonlinear test problems.
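As a concrete illustration of the EnKF side of this comparison, here is a minimal stochastic (perturbed-observation) analysis step in the linear Gaussian setting. The function name, toy dimensions, and the perturbed-observation variant are illustrative assumptions, not the paper's implementation; the proposed hybrid would additionally seed the initial ensemble with directions from a short-window 4D-Var run.

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """One stochastic (perturbed-observation) EnKF analysis step.

    X : (n, N) forecast ensemble, y : (m,) observation,
    H : (m, n) linear observation operator, R : (m, m) obs-error covariance.
    """
    N = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
    Pf = A @ A.T / (N - 1)                  # sample forecast covariance
    S = H @ Pf @ H.T + R                    # innovation covariance
    K = np.linalg.solve(S, H @ Pf).T        # Kalman gain Pf H^T S^{-1}
    # perturb the observation for each member so that the analysis
    # ensemble carries the correct posterior spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)
```

With only a few ensemble members, the sample covariance `Pf` is low-rank, which is exactly why the choice of initial ensemble (here, the 4D-Var optimization directions) matters.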
Projected Newton Method for noise constrained Tikhonov regularization
Tikhonov regularization is a popular approach to obtain a meaningful solution
for ill-conditioned linear least squares problems. A relatively simple way of
choosing a good regularization parameter is given by Morozov's discrepancy
principle. However, most approaches require the solution of the Tikhonov
problem for many different values of the regularization parameter, which is
computationally demanding for large scale problems. We propose a new and
efficient algorithm which simultaneously solves the Tikhonov problem and finds
the corresponding regularization parameter such that the discrepancy principle
is satisfied. We achieve this by formulating the problem as a nonlinear system
of equations and solving this system using a line search method. We obtain a
good search direction by projecting the problem onto a low dimensional Krylov
subspace and computing the Newton direction for the projected problem. This
projected Newton direction, which is significantly less computationally
expensive to calculate than the true Newton direction, is then combined with a
backtracking line search to obtain a globally convergent algorithm, which we
refer to as the Projected Newton method. We prove convergence of the algorithm
and illustrate the improved performance over current state-of-the-art solvers
with some numerical experiments.
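The core idea — treating the discrepancy principle as a root-finding problem in the regularization parameter and applying Newton's method to it — can be sketched as follows. This toy version uses a full SVD of a small dense matrix, so every Newton direction is exact; the paper's contribution is making the analogous direction affordable for large problems by projecting onto a low-dimensional Krylov subspace and adding a backtracking line search. The function name and the crude positivity safeguard are illustrative assumptions.

```python
import numpy as np

def tikhonov_discrepancy(A, b, eta, tol=1e-10, max_iter=50):
    """Find lam such that ||A x_lam - b|| = eta, where
    x_lam = argmin ||A x - b||^2 + lam ||x||^2, by Newton's method on
    f(lam) = ||A x_lam - b||^2 - eta^2.  Small dense problems only."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    incompat = b @ b - beta @ beta          # part of b outside range(A)
    lam = 1.0
    for _ in range(max_iter):
        # residual norm via filter factors lam / (s_i^2 + lam)
        r2 = np.sum((lam * beta / (s**2 + lam)) ** 2) + incompat
        f = r2 - eta**2
        if abs(f) < tol * eta**2:
            break
        # f'(lam) = 2 * sum_i s_i^2 lam beta_i^2 / (s_i^2 + lam)^3 > 0
        df = 2.0 * np.sum(s**2 * lam * beta**2 / (s**2 + lam) ** 3)
        lam_new = lam - f / df
        lam = lam_new if lam_new > 0 else lam / 2   # keep lam positive
    x = Vt.T @ (s * beta / (s**2 + lam))
    return x, lam
```

Since f is smooth and strictly increasing in the parameter, the Newton iteration converges rapidly once bracketed; the projected method in the paper reproduces this behavior without ever forming an SVD.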
Progressive construction of a parametric reduced-order model for PDE-constrained optimization
An adaptive approach to using reduced-order models as surrogates in
PDE-constrained optimization is introduced that breaks the traditional
offline-online framework of model order reduction. A sequence of optimization
problems constrained by a given Reduced-Order Model (ROM) is defined with the
goal of converging to the solution of a given PDE-constrained optimization
problem. For each reduced optimization problem, the constraining ROM is trained
from sampling the High-Dimensional Model (HDM) at the solution of some of the
previous problems in the sequence. The reduced optimization problems are
equipped with a nonlinear trust-region based on a residual error indicator to
keep the optimization trajectory in a region of the parameter space where the
ROM is accurate. A technique for incorporating sensitivities into a
Reduced-Order Basis (ROB) is also presented, along with a methodology for
computing sensitivities of the reduced-order model that minimizes the distance
to the corresponding HDM sensitivity, in a suitable norm. The proposed reduced
optimization framework is applied to subsonic aerodynamic shape optimization
and shown to reduce the number of queries to the HDM by a factor of 4-5,
compared to the optimization problem solved using only the HDM, with errors in
the optimal solution far less than 0.1%.
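The adaptive loop described above — train a surrogate from samples of the expensive model, optimize it inside a trust region, and resize the region using an actual-versus-predicted error indicator — can be sketched on a one-dimensional toy problem. Here a quadratic fit through three samples stands in for the ROM and a cheap scalar function for the HDM; all names and the accept/reject thresholds are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def hdm(mu):
    """Toy 'high-dimensional model' objective (stand-in for a PDE solve)."""
    return (mu - 1.3) ** 2 + 0.1 * np.sin(5 * mu)

def rom_optimize(mu0, delta0=0.5, tol=1e-6, max_outer=30):
    """Surrogate-based optimization with an adaptive trust region."""
    mu, delta = mu0, delta0
    f = hdm(mu)
    for _ in range(max_outer):
        # "train" the ROM: quadratic fit through three HDM samples
        pts = np.array([mu - delta, mu, mu + delta])
        vals = np.array([hdm(p) for p in pts])
        c = np.polyfit(pts, vals, 2)
        # minimize the ROM inside the trust region [mu-delta, mu+delta]
        cand = [mu - delta, mu + delta]
        if c[0] > 0:                       # interior vertex exists
            cand.append(np.clip(-c[1] / (2 * c[0]), mu - delta, mu + delta))
        mu_new = min(cand, key=lambda p: np.polyval(c, p))
        f_new = hdm(mu_new)
        pred = f - np.polyval(c, mu_new)   # decrease the ROM predicted
        actual = f - f_new                 # decrease the HDM delivered
        rho = actual / pred if pred > 0 else 0.0
        if rho > 0.1:                      # ROM prediction good enough
            mu, f = mu_new, f_new
            if rho > 0.75:
                delta *= 2.0               # expand the trust region
        else:
            delta *= 0.5                   # shrink: ROM not trusted here
        if delta < tol:
            break
    return mu
```

The point of the scheme is visible even in this toy: the expensive model is queried only a handful of times per outer iteration, and the trust region keeps the surrogate from being used where its training samples say nothing.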
A convex formulation for hyperspectral image superresolution via subspace-based regularization
Hyperspectral remote sensing images (HSIs) usually have high spectral
resolution and low spatial resolution. Conversely, multispectral images (MSIs)
usually have low spectral and high spatial resolutions. The problem of
inferring images which combine the high spectral and high spatial resolutions
of HSIs and MSIs, respectively, is a data fusion problem that has been the
focus of recent active research due to the increasing availability of HSIs and
MSIs retrieved from the same geographical area.
We formulate this problem as the minimization of a convex objective function
containing two quadratic data-fitting terms and an edge-preserving regularizer.
The data-fitting terms account for blur, different resolutions, and additive
noise. The regularizer, a form of vector Total Variation, promotes
piecewise-smooth solutions with discontinuities aligned across the
hyperspectral bands.
The downsampling operator accounting for the different spatial resolutions,
the non-quadratic and non-smooth nature of the regularizer, and the very large
size of the HSI to be estimated lead to a hard optimization problem. We deal
with these difficulties by exploiting the fact that HSIs generally "live" in a
low-dimensional subspace and by tailoring the Split Augmented Lagrangian
Shrinkage Algorithm (SALSA), which is an instance of the Alternating Direction
Method of Multipliers (ADMM), to this optimization problem, by means of a
convenient variable splitting. The spatial blur and the spectral linear
operators linked, respectively, with the HSI and MSI acquisition processes are
also estimated, and we obtain an effective algorithm that outperforms the
state-of-the-art, as illustrated in a series of experiments with simulated and
real-life data.
Comment: IEEE Trans. Geosci. Remote Sens., to be published.
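The subspace fact the algorithm exploits — hyperspectral pixel spectra approximately live in a low-dimensional subspace — is easy to demonstrate. The sketch below builds a synthetic image whose spectra lie in a 5-dimensional subspace, estimates that subspace from the leading left singular vectors of the noisy data, and reconstructs the image from far fewer unknowns. Sizes and names are illustrative assumptions, and this is only the dimensionality-reduction step, not the SALSA/ADMM fusion algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
L, p, n = 50, 5, 400                 # bands, subspace dim, pixels (toy sizes)
E_true = rng.normal(size=(L, p))     # unknown spectral basis
X = E_true @ rng.normal(size=(p, n)) # every pixel spectrum lies in span(E_true)
X_noisy = X + 0.01 * rng.normal(size=X.shape)

# estimate the subspace from the p leading left singular vectors
U, s, _ = np.linalg.svd(X_noisy, full_matrices=False)
E = U[:, :p]
# working with the p x n coefficient matrix Z instead of the
# L x n pixel matrix is what makes the fusion problem tractable
Z = E.T @ X_noisy
X_hat = E @ Z
rel_err = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

In the fusion problem the same reduction shrinks the number of unknowns from (number of bands) x (number of pixels) to (subspace dimension) x (number of pixels), while also acting as a strong implicit regularizer.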
Grid-free compressive beamforming
The direction-of-arrival (DOA) estimation problem involves the localization
of a few sources from a limited number of observations on an array of sensors,
thus it can be formulated as a sparse signal reconstruction problem and solved
efficiently with compressive sensing (CS) to achieve high-resolution imaging.
On a discrete angular grid, the CS reconstruction degrades due to basis
mismatch when the DOAs do not coincide with the angular directions on the grid.
To overcome this limitation, a continuous formulation of the DOA problem is
employed and an optimization procedure is introduced, which promotes sparsity
on a continuous optimization variable. The DOA estimation problem with
infinitely many unknowns, i.e., source locations and amplitudes, is solved over
a few optimization variables with semidefinite programming. The grid-free CS
reconstruction provides high-resolution imaging even with non-uniform arrays,
single-snapshot data and under noisy conditions as demonstrated on experimental
towed array data.
Comment: 14 pages, 8 figures, journal paper.
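The basis mismatch that motivates the grid-free formulation can be seen directly: a plane wave whose DOA falls between grid points is poorly represented by any single on-grid steering vector. The sketch below, for a uniform linear array with half-wavelength spacing, measures the residual of the best single-atom on-grid fit; array size, grid spacing, and function names are illustrative assumptions, and the paper's semidefinite-programming solution is not implemented here.

```python
import numpy as np

M = 20                                  # sensors, half-wavelength spacing

def steering(theta_deg):
    """Far-field steering vector of a uniform linear array."""
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

grid = np.arange(-90, 91, 2.0)          # 2-degree angular grid
A = np.stack([steering(t) for t in grid], axis=1)

def best_ongrid_fit(theta):
    """Relative residual of the best single on-grid atom
    approximating a plane wave arriving from direction theta."""
    y = steering(theta)
    k = int(np.argmax(np.abs(A.conj().T @ y)))   # most correlated atom
    a = A[:, k]
    c = (a.conj() @ y) / (a.conj() @ a)          # optimal complex amplitude
    return np.linalg.norm(y - c * a) / np.linalg.norm(y)

on_grid = best_ongrid_fit(10.0)         # DOA exactly on the grid
off_grid = best_ongrid_fit(10.9)        # DOA between grid points
```

A single source on the grid is represented exactly, while the same source moved less than half a grid cell leaves a large residual that a sparse on-grid reconstruction must absorb as spurious extra atoms — precisely the degradation the continuous formulation avoids.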