A Duality Approach to Error Estimation for Variational Inequalities
Motivated by problems in contact mechanics, we propose a duality approach for
computing approximations to solutions of variational inequalities of the first
kind, together with associated a posteriori error bounds. The proposed approach improves
upon existing methods introduced in the context of the reduced basis method in
two ways. First, it provides sharp a posteriori error bounds which mimic the
rate of convergence of the RB approximation. Second, it enables a full
offline-online computational decomposition in which the online cost is
completely independent of the dimension of the original (high-dimensional)
problem. Numerical results comparing the performance of the proposed and
existing approaches illustrate the superiority of the duality approach in cases
where the dimension of the full problem is high.
Comment: 21 pages, 8 figures
Reduced basis methods for pricing options with the Black-Scholes and Heston model
In this paper, we present a reduced basis method for pricing European and
American options based on the Black-Scholes and Heston models. To tackle each
model numerically, we formulate the problem in terms of a time dependent
variational equality or inequality. We apply a suitable reduced basis approach
for both types of options. The characteristic ingredients used in the method
are a combined POD-Greedy and Angle-Greedy procedure for the construction of
the primal and dual reduced spaces. Analytically, we prove the reproduction
property of the reduced scheme and derive a posteriori error estimators.
Numerical examples are provided, illustrating the approximation quality and
convergence of our approach for the different option pricing models. Also, we
investigate the reliability and effectivity of the error estimators.
Comment: 25 pages, 27 figures
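The greedy construction of the reduced spaces mentioned above can be illustrated schematically. The sketch below is a toy weak-greedy loop of the general kind underlying POD-Greedy and Angle-Greedy basis construction; all names, the stopping rule, and the error estimator interface are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical weak-greedy loop: repeatedly enrich the reduced basis
# with the training parameter whose a posteriori error estimate is
# currently the largest, until the estimate drops below a tolerance.

def greedy_basis(train_params, error_estimator, tol=1e-3, max_size=10):
    basis = []
    while len(basis) < max_size:
        # Estimated error for each candidate parameter, given the
        # current reduced basis.
        errs = {mu: error_estimator(mu, basis) for mu in train_params}
        mu_star, err = max(errs.items(), key=lambda kv: kv[1])
        if err <= tol:
            break  # certified accuracy reached on the training set
        basis.append(mu_star)  # enrich the reduced space
    return basis
```

In a real reduced basis code, `error_estimator` would evaluate the a posteriori bound of the reduced solve at parameter `mu`, and enrichment would add POD modes of the corresponding trajectory rather than the parameter itself.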
Maximum-a-posteriori estimation with Bayesian confidence regions
Solutions to inverse problems that are ill-conditioned or ill-posed may have
significant intrinsic uncertainty. Unfortunately, analysing and quantifying
this uncertainty is very challenging, particularly in high-dimensional
problems. As a result, while most modern mathematical imaging methods produce
impressive point estimation results, they are generally unable to quantify the
uncertainty in the solutions delivered. This paper presents a new general
methodology for approximating Bayesian high-posterior-density credibility
regions in inverse problems that are convex and potentially very
high-dimensional. The approximations are derived by using recent concentration
of measure results related to information theory for log-concave random
vectors. A remarkable property of the approximations is that they can be
computed very efficiently, even in large-scale problems, by using standard
convex optimisation techniques. In particular, they are available as a
by-product in problems solved by maximum-a-posteriori estimation. The
approximations also have favourable theoretical properties, namely they
outer-bound the true high-posterior-density credibility regions, and they are
stable with respect to model dimension. The proposed methodology is illustrated
on two high-dimensional imaging inverse problems related to tomographic
reconstruction and sparse deconvolution, where the approximations are used to
perform Bayesian hypothesis tests and explore the uncertainty about the
solutions, and where proximal Markov chain Monte Carlo algorithms are used as a
benchmark to compute exact credible regions and measure the approximation
error.
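The "by-product" property described above can be made concrete with a toy sketch. The example below uses a hypothetical 1-D Gaussian negative log-posterior and a generic offset `eta` standing in for the paper's dimension-dependent concentration bound; all names and values are illustrative assumptions, not the paper's formulas.

```python
def neg_log_posterior(x, mu=0.0, sigma=1.0):
    """Toy convex negative log-posterior g: a 1-D Gaussian."""
    return (x - mu) ** 2 / (2 * sigma ** 2)

# MAP estimate: the minimiser of g (known in closed form here; in
# practice it is delivered by a convex optimisation solver).
x_map = 0.0

def in_approx_region(x, eta):
    """Membership test for an approximate highest-posterior-density
    region of the form {x : g(x) <= g(x_map) + eta}.  The offset eta
    plays the role of the dimension-dependent quantity supplied by the
    concentration-of-measure bound; here it is a free parameter."""
    return neg_log_posterior(x) <= neg_log_posterior(x_map) + eta

# Bayesian hypothesis test as in the abstract: a candidate solution is
# rejected if it falls outside the credibility region.
print(in_approx_region(1.0, eta=2.0))  # candidate close to the MAP: True
print(in_approx_region(5.0, eta=2.0))  # candidate far from the MAP: False
```

The key point the abstract makes is that, once `x_map` is available, such a test costs only extra evaluations of `g`, with no sampling.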
On linear convergence of a distributed dual gradient algorithm for linearly constrained separable convex problems
In this paper we propose a distributed dual gradient algorithm for solving
linearly constrained separable convex problems and analyze its rate of
convergence. In particular, we prove that, under strong convexity and
Lipschitz continuity of the gradient of the primal objective function, the
dual problem satisfies a global error bound property. Using this error bound
we devise a fully distributed dual gradient scheme, i.e. a gradient scheme
with a weighted step size, for which we derive a global linear rate of
convergence for both dual and primal suboptimality and
for primal feasibility violation. Many real applications, e.g. distributed
model predictive control, network utility maximization or optimal power flow,
can be posed as linearly constrained separable convex problems for which dual
gradient-type methods from the literature have a sublinear convergence rate.
In the present paper we prove for the first time that a linear convergence
rate can in fact be achieved by such algorithms when applied to these
applications. Numerical simulations are also provided to confirm our theory.
Comment: 14 pages, 4 figures, submitted to Automatica Journal, February 2014.
arXiv admin note: substantial text overlap with arXiv:1401.4398. We revised
the paper, adding more simulations and checking for typos.
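A minimal sketch of a dual gradient iteration for a problem of this class may help. The toy instance below, min Σᵢ (xᵢ - cᵢ)² subject to Σᵢ xᵢ = b, is separable and strongly convex with Lipschitz gradient, so the hypotheses of the abstract hold; the data, step size, and function names are illustrative assumptions, not taken from the paper.

```python
# Dual gradient method for
#     min sum_i (x_i - c_i)^2   s.t.  sum_i x_i = b.
# Each agent minimises its local Lagrangian independently (so the
# primal step is fully distributed); the dual variable is updated by
# gradient ascent on the constraint residual.

def solve_dual_gradient(c, b, alpha=0.5, iters=100):
    lam = 0.0  # dual multiplier of the coupling constraint
    for _ in range(iters):
        # argmin_x (x - c_i)^2 + lam * x  =  c_i - lam / 2, per agent
        x = [ci - lam / 2 for ci in c]
        # dual ascent step on the residual sum(x) - b
        lam += alpha * (sum(x) - b)
    return x, lam

x, lam = solve_dual_gradient([1.0, 2.0, 3.0], b=3.0)
print(x, lam)  # converges to x = [0, 1, 2], lam = 2
```

On this instance the dual update is a contraction (the dual error shrinks by a constant factor per iteration), which is exactly the linear-rate behaviour the paper establishes for the strongly convex, Lipschitz-gradient setting.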