
    Inverse problems for linear hyperbolic equations using mixed formulations

    We introduce in this document a direct method for solving numerically inverse-type problems for linear hyperbolic equations. We first consider the reconstruction of the full solution of the wave equation posed in $\Omega\times(0,T)$, where $\Omega$ is a bounded subset of $\mathbb{R}^N$, from a partial distributed observation. We employ a least-squares technique and minimize the $L^2$-norm of the distance from the observation to any solution. Taking the hyperbolic equation as the main constraint of the problem, the optimality conditions are reduced to a mixed formulation involving both the state to be reconstructed and a Lagrange multiplier. Under usual geometric optics conditions, we show the well-posedness of this mixed formulation (in particular the inf-sup condition) and then introduce a numerical approximation based on a space-time finite element discretization. We prove the strong convergence of the approximation and then discuss several examples for $N=1$ and $N=2$. The problem of reconstructing both the state and the source term is also addressed.
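    As a hedged illustration of the constrained least-squares structure described above (not the authors' exact spaces or notation): writing $L$ for the wave operator, $q_T$ for the observation region, and $y_{\mathrm{obs}}$ for the observation, the Lagrangian and its stationarity conditions give the generic mixed system sketched below.

```latex
% Sketch in our own notation (L, q_T, y_obs are placeholders, not the
% paper's symbols): constrained least squares and its optimality system.
\[
  \min_{y}\ \tfrac{1}{2}\,\|y-y_{\mathrm{obs}}\|_{L^2(q_T)}^2
  \quad\text{subject to}\quad Ly := y_{tt}-\Delta y = 0
  \ \text{in } \Omega\times(0,T).
\]
Introducing a Lagrange multiplier $\lambda$ for the constraint,
\[
  \mathcal{L}(y,\lambda) = \tfrac{1}{2}\,\|y-y_{\mathrm{obs}}\|_{L^2(q_T)}^2
  + \langle \lambda,\, Ly\rangle ,
\]
whose stationarity yields a mixed (saddle-point) system in $(y,\lambda)$:
\[
  \begin{cases}
    (y-y_{\mathrm{obs}},\,\bar y)_{L^2(q_T)}
      + \langle \lambda,\, L\bar y\rangle = 0
      & \text{for all admissible } \bar y,\\[2pt]
    \langle \bar\lambda,\, Ly\rangle = 0
      & \text{for all admissible } \bar\lambda .
  \end{cases}
\]
```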

    Inexact inner-outer Golub-Kahan bidiagonalization method: A relaxation strategy

    We study an inexact inner-outer generalized Golub-Kahan algorithm for the solution of saddle-point problems with a two-by-two block structure. In each outer iteration, an inner system must be solved, which in theory has to be done exactly. However, when the system becomes large, an exact inner solver is no longer efficient or even feasible, and iterative methods must be used. This article focuses on a numerical study showing the influence of the accuracy of the inner iterative solution on the accuracy of the solution of the block system. Emphasis is further placed on reducing the computational cost, defined as the total number of inner iterations. We develop relaxation techniques intended to dynamically change the inner tolerance at each outer iteration so as to minimize the total number of inner iterations. We illustrate our findings on a Stokes problem and validate them on a mixed formulation of the Poisson problem.
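    The relaxation idea can be made concrete with a toy inner-outer solver. The sketch below is not the authors' generalized Golub-Kahan algorithm: it simply eliminates the first block of a $2\times 2$ saddle-point system and runs an outer CG on the Schur complement, where every outer iteration needs an inexact inner CG solve with the (1,1) block. The inner tolerance is loosened as the outer residual shrinks, the classical relaxation heuristic; all matrices, sizes, and tolerances are invented for illustration, and the quantity tracked, as in the abstract, is the total number of inner iterations.

```python
import numpy as np

def cg_solve(A, b, rel_tol, x0=None):
    """Plain conjugate gradients for SPD A; returns (solution, iteration count)."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    target = rel_tol * np.linalg.norm(b)
    it = 0
    while np.sqrt(rs) > target and it < len(b):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        it += 1
    return x, it

# Toy saddle-point system [[A, B], [B^T, 0]] [x; y] = [f; g].
rng = np.random.default_rng(0)
n, m = 300, 60
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + np.eye(n)                   # SPD (1,1) block
B = rng.standard_normal((n, m)) / np.sqrt(n)  # coupling block
f, g = rng.standard_normal(n), rng.standard_normal(m)

outer_tol, inner_work = 1e-8, 0

# Right-hand side of the Schur-complement equation (one tight inner solve).
z, k = cg_solve(A, f, 1e-12)
inner_work += k
rhs = B.T @ z - g

# Outer CG on S = B^T A^{-1} B with relaxed (loosened) inner tolerances.
y = np.zeros(m)
r = rhs.copy()
p = r.copy()
rs = r @ r
outer_it = 0
while np.sqrt(rs) > outer_tol * np.linalg.norm(rhs) and outer_it < 5 * m:
    # Relaxation: the smaller the outer residual, the looser the inner solve.
    inner_tol = min(1e-2, outer_tol * np.linalg.norm(rhs) / np.sqrt(rs))
    w, k = cg_solve(A, B @ p, inner_tol)
    inner_work += k
    Sp = B.T @ w                              # inexact Schur-complement action
    alpha = rs / (p @ Sp)
    y = y + alpha * p
    r = r - alpha * Sp
    rs_new = r @ r
    p = r + (rs_new / rs) * p
    rs = rs_new
    outer_it += 1

# Recover the first block and report the total inner work.
x, k = cg_solve(A, f - B @ y, 1e-10)
inner_work += k
print("outer iterations:", outer_it, " total inner CG iterations:", inner_work)
print("block residuals:", np.linalg.norm(A @ x + B @ y - f),
      np.linalg.norm(B.T @ x - g))
```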

    The INTERNODES method for applications in contact mechanics and dedicated preconditioning techniques

    The mortar finite element method is a well-established method for the numerical solution of partial differential equations on domains displaying non-conforming interfaces. The method is well known for its application in computational contact mechanics. However, its implementation remains challenging, as it relies on geometrical projections and unconventional quadrature rules. The INTERNODES (INTERpolation for NOn-conforming DEcompositionS) method, instead, could overcome the implementation difficulties thanks to flexible interpolation techniques. Moreover, it was shown to be at least as accurate as the mortar method, making it a very promising alternative for solving problems in contact mechanics. Unfortunately, in such situations the method requires solving a sequence of ill-conditioned linear systems. In this paper, preconditioning techniques are designed and implemented for the efficient solution of those linear systems.
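    To give a feel for the "flexible interpolation" that the abstract contrasts with mortar projections, the sketch below builds a pair of interpolation matrices that exchange a trace between two non-matching 1D interface grids. The actual method works on surface meshes with more general interpolation (e.g. Lagrange or radial-basis-function based); the grids, the piecewise-linear choice, and the test trace here are all made up for illustration.

```python
import numpy as np

def interp_matrix(x_from, x_to):
    """Matrix P such that (P @ u_from) is the piecewise-linear interpolant
    of u_from (given at sorted nodes x_from) evaluated at the nodes x_to."""
    P = np.zeros((len(x_to), len(x_from)))
    for i, x in enumerate(x_to):
        j = np.clip(np.searchsorted(x_from, x) - 1, 0, len(x_from) - 2)
        w = (x - x_from[j]) / (x_from[j + 1] - x_from[j])
        P[i, j], P[i, j + 1] = 1.0 - w, w
    return P

# Two non-matching discretizations of the same interface [0, 1].
x1 = np.linspace(0.0, 1.0, 11)                                   # side-1 nodes
x2 = np.sort(np.r_[0.0, np.random.default_rng(1).uniform(0, 1, 15), 1.0])

P12 = interp_matrix(x2, x1)   # side-2 values -> side-1 nodes
P21 = interp_matrix(x1, x2)   # side-1 values -> side-2 nodes

u1 = np.sin(np.pi * x1)       # a trace on side 1
u2 = P21 @ u1                 # transferred to side 2
u1_back = P12 @ u2            # and back again
print("round-trip interpolation error:", np.max(np.abs(u1_back - u1)))
```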

    Domain Decomposition for Stochastic Optimal Control

    This work proposes a method for solving linear stochastic optimal control (SOC) problems using sum-of-squares and semidefinite programming. Previous work used polynomial optimization to approximate the value function, requiring a high polynomial degree to capture local phenomena. To improve the scalability of the method to problems of interest, a domain decomposition scheme is presented. By using local approximations, lower-degree polynomials become sufficient, and both local and global properties of the value function are captured. The domain of the problem is split into a non-overlapping partition, with added constraints ensuring $C^1$ continuity. The Alternating Direction Method of Multipliers (ADMM) is used to optimize over each subdomain in parallel and to ensure convergence on the boundaries of the partitions. This results in improved conditioning of the problem and allows much larger and more complex problems to be addressed with better performance.
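    As a rough illustration of why local low-degree polynomials with $C^1$ interface constraints can capture both local and global behaviour, the sketch below fits two cubic pieces to a stand-in "value function" on a two-piece partition and enforces value and slope continuity at the interface. It deliberately replaces the paper's sum-of-squares/SDP machinery and ADMM with a plain equality-constrained least-squares fit; the function, degrees, and grids are all invented for the example.

```python
import numpy as np

def vander(x, deg):
    """Columns 1, x, ..., x^deg."""
    return np.vander(x, deg + 1, increasing=True)

def dvander(x, deg):
    """Derivative of the monomial basis: 0, 1, 2x, ..., deg*x^(deg-1)."""
    V = np.zeros((len(x), deg + 1))
    for k in range(1, deg + 1):
        V[:, k] = k * x ** (k - 1)
    return V

def v(x):
    """Stand-in 'value function' with limited smoothness at x = 0."""
    return np.abs(x) ** 1.5

deg = 3
xL = np.linspace(-1.0, 0.0, 40)   # subdomain 1
xR = np.linspace(0.0, 1.0, 40)    # subdomain 2

# Block least-squares matrix for the two local polynomials (coeffs cL, cR).
A = np.block([[vander(xL, deg), np.zeros((len(xL), deg + 1))],
              [np.zeros((len(xR), deg + 1)), vander(xR, deg)]])
b = np.r_[v(xL), v(xR)]

# C^1 continuity at the interface x = 0: match value and derivative.
x0 = np.array([0.0])
C = np.block([[vander(x0, deg), -vander(x0, deg)],
              [dvander(x0, deg), -dvander(x0, deg)]])
d = np.zeros(2)

# Equality-constrained least squares via the KKT system.
n = A.shape[1]
KKT = np.block([[A.T @ A, C.T], [C, np.zeros((2, 2))]])
sol = np.linalg.solve(KKT, np.r_[A.T @ b, d])
cL, cR = sol[:n // 2], sol[n // 2:n]

val_jump = (vander(x0, deg) @ cL - vander(x0, deg) @ cR).item()
slope_jump = (dvander(x0, deg) @ cL - dvander(x0, deg) @ cR).item()
print(f"C^1 interface mismatch: value {val_jump:.2e}, slope {slope_jump:.2e}")
```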