Second order adjoints for solving PDE-constrained optimization problems
Inverse problems are of utmost importance in many fields of science and engineering. In the
variational approach inverse problems are formulated as PDE-constrained optimization problems,
where the optimal estimate of the uncertain parameters is the minimizer of a certain cost
functional subject to the constraints posed by the model equations. The numerical solution
of such optimization problems requires the computation of derivatives of the model output
with respect to model parameters. The first order derivatives of a cost functional (defined
on the model output) with respect to a large number of model parameters can be calculated
efficiently through first order adjoint sensitivity analysis. Second order adjoint models
give second derivative information in the form of matrix-vector products between the Hessian
of the cost functional and user defined vectors. Traditionally, the construction of second
order derivatives for large scale models has been considered too costly. Consequently, data
assimilation applications employ optimization algorithms that use only first order derivative
information, like nonlinear conjugate gradients and quasi-Newton methods.
In this paper we discuss the mathematical foundations of second order adjoint sensitivity
analysis and show that it provides an efficient approach to obtain Hessian-vector products. We
study the benefits of using second order information in the numerical optimization process
for data assimilation applications. The numerical studies are performed in a twin experiment
setting with a two-dimensional shallow water model. Different scenarios are considered with
different discretization approaches, observation sets, and noise levels. Optimization algorithms
that employ second order derivatives are tested against widely used methods that require
only first order derivatives. Conclusions are drawn regarding the potential benefits and the
limitations of using higher-order information in large-scale data assimilation problems.
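Absent a second order adjoint model, the Hessian-vector products described above can be mimicked by differencing two first order gradient evaluations. A minimal NumPy sketch on a hypothetical quadratic cost functional (not the paper's shallow water model):

```python
import numpy as np

def hessian_vector_product(grad, x, v, eps=1e-6):
    """Approximate H(x) @ v by central differences of the gradient.
    A second order adjoint model returns this product exactly; here we
    mimic it at the cost of two extra gradient evaluations."""
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

# Toy cost functional J(x) = 0.5 x^T A x - b^T x, so grad J = A x - b
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
grad = lambda x: A @ x - b          # stand-in for a first order adjoint
x0 = np.zeros(2)
v = np.array([1.0, 0.0])
Hv = hessian_vector_product(grad, x0, v)   # should approximate A @ v
```

Such products are exactly what truncated-Newton optimizers consume, which is why second order adjoints pair naturally with them.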
Joint Image Reconstruction and Segmentation Using the Potts Model
We propose a new algorithmic approach to the non-smooth and non-convex Potts
problem (also called piecewise-constant Mumford-Shah problem) for inverse
imaging problems. We derive a suitable splitting into specific subproblems that
can all be solved efficiently. Our method does not require a priori knowledge
of the gray levels or of the number of segments of the reconstruction.
Further, it avoids anisotropic artifacts such as geometric staircasing. We
demonstrate the suitability of our method for joint image reconstruction and
segmentation. We focus on Radon data, where in particular we consider
limited-data situations. For instance, our method is able to recover all segments of
the Shepp-Logan phantom from angular views only. We illustrate the
practical applicability on a real PET dataset. As further applications, we
consider spherical Radon data as well as blurred data.
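While the Potts problem is NP-hard in two dimensions, its one-dimensional restriction is solvable exactly by dynamic programming, and tractable univariate subproblems of this kind are what splitting approaches exploit. A schematic NumPy sketch (not the authors' algorithm):

```python
import numpy as np

def potts_1d(f, gamma):
    """Exact 1D Potts solver by dynamic programming, O(n^2):
    minimizes gamma * (#jumps) + sum_i (u_i - f_i)^2 over
    piecewise constant signals u."""
    n = len(f)
    csum = np.concatenate(([0.0], np.cumsum(f)))
    csq = np.concatenate(([0.0], np.cumsum(f ** 2)))

    def dev(l, r):  # squared deviation of f[l:r+1] from its mean
        s = csum[r + 1] - csum[l]
        return csq[r + 1] - csq[l] - s * s / (r - l + 1)

    B = np.full(n + 1, np.inf)   # B[r]: optimal energy for f[0:r]
    B[0] = -gamma                # offsets the gamma of the first segment
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        for l in range(1, r + 1):
            cand = B[l - 1] + gamma + dev(l - 1, r - 1)
            if cand < B[r]:
                B[r], jump[r] = cand, l
    # backtrack: fill each segment with its mean
    u, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        u[l - 1:r] = (csum[r] - csum[l - 1]) / (r - l + 1)
        r = l - 1
    return u

f = np.array([0.0, 0.1, -0.1, 5.0, 5.2, 4.8])
u = potts_1d(f, gamma=1.0)   # two segments, filled with their means
```

Note that the solver is given neither the gray levels nor the number of segments; both emerge from the minimization, mirroring the property claimed above.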
A multi-level algorithm for the solution of moment problems
We study numerical methods for the solution of general linear moment
problems, where the solution belongs to a family of nested subspaces of a
Hilbert space. Multi-level algorithms, based on the conjugate gradient method
and the Landweber--Richardson method, are proposed that determine the "optimal"
reconstruction level a posteriori from quantities that arise during the
numerical calculations. As an important example we discuss the reconstruction
of band-limited signals from irregularly spaced noisy samples, when the actual
bandwidth of the signal is not available. Numerical examples show the
usefulness of the proposed algorithms.
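The Landweber--Richardson iteration mentioned above can be sketched with the discrepancy principle as a simple a posteriori stopping rule (a stand-in for the paper's multi-level selection of the reconstruction level; the toy matrix is made up):

```python
import numpy as np

def landweber(A, y, omega=None, tol=1e-3, max_iter=5000):
    """Landweber--Richardson iteration x <- x + omega * A.T @ (y - A x),
    stopped a posteriori when the residual drops below tol
    (discrepancy principle)."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # guarantees convergence
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = y - A @ x
        if np.linalg.norm(r) < tol:
            break
        x = x + omega * A.T @ r
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = A @ np.array([1.0, -1.0])    # consistent, noise-free moment data
x = landweber(A, y)              # converges toward [1, -1]
```

With noisy data, tol is tied to the noise level, so the stopping index plays the same regularizing role that the reconstruction level plays in the multi-level setting.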
Gradient-based estimation of Manning's friction coefficient from noisy data
We study the numerical recovery of Manning's roughness coefficient for the
diffusive wave approximation of the shallow water equation. We describe a
conjugate gradient method for the numerical inversion. Numerical results for
a one-dimensional model are presented to illustrate the feasibility of the
approach. We also provide a proof of the differentiability of the weak form
with respect to the coefficient as well as the continuity and boundedness of
the linearized operator under reasonable assumptions using the maximal
parabolic regularity theory.
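Since Manning's formula gives discharge as Q = (1/n) A R^(2/3) S^(1/2), a gradient-based fit of the roughness coefficient n to noisy discharge observations can be sketched directly. This uses plain gradient descent rather than the paper's conjugate gradient method, and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic channel geometry and observations (hypothetical values)
A_flow = np.linspace(5.0, 20.0, 30)      # cross-section areas (m^2)
R = np.linspace(0.5, 2.0, 30)            # hydraulic radii (m)
S = 1e-3                                 # bed slope
n_true = 0.035                           # true Manning roughness
Q_obs = (1.0 / n_true) * A_flow * R ** (2.0 / 3.0) * np.sqrt(S)
Q_obs = Q_obs + 0.01 * rng.standard_normal(30)   # noisy discharge data

def cost_and_grad(n):
    """Least-squares misfit in Q and its analytic derivative in n."""
    Q = (1.0 / n) * A_flow * R ** (2.0 / 3.0) * np.sqrt(S)
    r = Q - Q_obs
    dQ_dn = -Q / n                       # d/dn of (1/n)*const = -Q/n
    return 0.5 * np.sum(r ** 2), np.sum(r * dQ_dn)

n = 0.05                                 # deliberately wrong initial guess
for _ in range(500):
    _, g = cost_and_grad(n)
    n -= 1e-8 * g    # small fixed step; a line search would be safer
```

The recovered n settles near the true value despite the noise; in the PDE setting the analytic derivative is replaced by an adjoint computation.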
On a continuation approach in Tikhonov regularization and its application in piecewise-constant parameter identification
We present a new approach to convexification of the Tikhonov regularization
using a continuation method strategy. We embed the original minimization
problem into a one-parameter family of minimization problems. Both the penalty
term and the minimizer of the Tikhonov functional become dependent on a
continuation parameter.
In this way we can independently treat the two main roles of the
regularization term: stabilization of the ill-posed problem and introduction
of a priori knowledge. For zero continuation parameter we solve a relaxed
regularization problem, which stabilizes the ill-posed problem in a weaker
sense. The continuation method then recasts the problem to the original
minimization, and so the a priori knowledge is enforced.
We apply this approach in the context of topology-to-shape geometry
identification, where it allows us to avoid convergence of gradient-based
methods to local minima. We present illustrative results for magnetic
induction tomography, which is an example of a PDE-constrained inverse problem.
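The continuation idea can be illustrated on a toy linear problem: embed the Tikhonov penalty in a one-parameter family that moves from a plain stabilizing term (parameter 0) toward one encoding the prior (parameter 1), warm-starting each minimization from the previous minimizer. A schematic sketch, not the paper's topology-to-shape formulation:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 1.001]])   # mildly ill-conditioned forward map
y = A @ np.array([1.0, 1.0]) + 1e-3        # slightly perturbed data
alpha = 0.1
x_prior = np.array([1.0, 1.0])             # a priori knowledge

def solve_stage(t, x_start, steps=2000, lr=0.05):
    """Minimize ||A x - y||^2 + alpha * ||x - t * x_prior||^2 by
    gradient descent, warm-started at x_start."""
    x = x_start.copy()
    for _ in range(steps):
        g = 2 * A.T @ (A @ x - y) + 2 * alpha * (x - t * x_prior)
        x -= lr * g
    return x

x = np.zeros(2)
for t in np.linspace(0.0, 1.0, 5):   # continuation in the parameter t
    x = solve_stage(t, x)            # each stage warm-starts the next
# at t = 1 the full a priori term is enforced
```

In this convex toy case continuation is not strictly needed; its value, as in the abstract above, lies in nonconvex identification problems where the warm-started path steers gradient methods past poor local minima.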