
    Numerical Analysis of Sparse Initial Data Identification for Parabolic Problems

    In this paper we consider a problem of initial data identification from the final time observation for homogeneous parabolic problems. It is well-known that such problems are exponentially ill-posed due to the strong smoothing property of parabolic equations. We are interested in the situation where the initial data we intend to recover is known to be sparse, i.e., its support has Lebesgue measure zero. We formulate the problem as an optimal control problem and incorporate the information on the sparsity of the unknown initial data into the structure of the objective functional. In particular, we look for the control variable in the space of regular Borel measures and use the corresponding norm as a regularization term in the objective functional. This leads to a convex but non-smooth optimization problem. For the discretization we use continuous piecewise linear finite elements in space and discontinuous Galerkin finite elements of arbitrary degree in time. For the general case we establish error estimates for the state variable. Under a certain structural assumption, we show that the control variable consists of a finite linear combination of Dirac measures. For this case we obtain error estimates for the locations of the Dirac measures as well as for the corresponding coefficients. The key to the numerical analysis is a set of sharp smoothing-type pointwise finite element error estimates for homogeneous parabolic problems, which are of independent interest. Moreover, we discuss an efficient algorithmic approach to the problem and show several numerical experiments illustrating our theoretical results.
    Comment: 43 pages, 10 figures
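    To make the measure-space formulation concrete, here is a minimal sketch (entirely our own; a finite-difference/implicit-Euler discretization stands in for the paper's finite element/discontinuous Galerkin scheme, and all names and parameter values are illustrative assumptions). Representing the unknown initial data as a combination of Dirac measures at the grid nodes turns the measure-norm regularizer into an l1 penalty on the coefficients, so the resulting convex non-smooth problem can be attacked with proximal gradient (ISTA) iterations:

        import numpy as np

        # Illustrative sizes (not from the paper): 1D heat equation on (0, 1),
        # homogeneous Dirichlet BCs, finite differences in space, implicit
        # Euler in time.
        n, nt, T, alpha = 200, 50, 0.01, 1e-3
        h, dt = 1.0 / (n + 1), T / nt

        L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2
        A = np.linalg.inv(np.eye(n) - dt * L)   # one implicit Euler step
        S = np.linalg.matrix_power(A, nt)       # solution operator u0 -> u(T)

        # "True" sparse initial data: two point masses; observe u at t = T.
        u_true = np.zeros(n)
        u_true[n // 4], u_true[3 * n // 4] = 1.0, -0.5
        y_obs = S @ u_true

        # With the control written as sum_j c_j * delta_{x_j} at the grid
        # nodes, the measure norm becomes the l1 norm of c, so proximal
        # gradient (ISTA) with soft-thresholding applies.
        def soft_threshold(v, t):
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        c = np.zeros(n)
        step = 1.0 / np.linalg.norm(S, 2) ** 2  # 1 / Lipschitz constant
        for _ in range(5000):
            c = soft_threshold(c - step * S.T @ (S @ c - y_obs), step * alpha)

        print("recovered support:", np.flatnonzero(np.abs(c) > 1e-8))

    Because the problem is exponentially ill-posed, the recovered support in such a sketch typically shows up as small clusters of nodes around the true locations, which is the situation the paper's structural assumption and location/coefficient error estimates address.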

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as a solution of an optimization problem. In this light, the sheer non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and the machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas.
    Comment: 13 pages
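    As a minimal illustration of the two cultures the survey aims to bridge (a toy example of ours, not taken from the paper; the forward model, data sizes, and step sizes are arbitrary assumptions), the sketch below solves the same small nonlinear least-squares inversion twice: once with Gauss-Newton, a classic inverse-problems workhorse, and once with mini-batch stochastic gradient descent in the style of machine-learning training loops:

        import numpy as np

        # Toy inversion: minimize 0.5 * ||F(m) - d||^2 with F(m) = exp(G m).
        rng = np.random.default_rng(0)
        G = 0.3 * rng.standard_normal((100, 5))
        m_true = np.array([0.5, -1.0, 0.3, 0.0, 0.8])
        d = np.exp(G @ m_true) + 0.01 * rng.standard_normal(100)

        def F(m):                  # forward model
            return np.exp(G @ m)

        def jac(m):                # Jacobian of F
            return F(m)[:, None] * G

        # Gauss-Newton with a tiny Levenberg-Marquardt shift: linearize F and
        # solve the normal equations at each iteration.
        m = np.zeros(5)
        for _ in range(15):
            r, J = F(m) - d, jac(m)
            m -= np.linalg.solve(J.T @ J + 1e-8 * np.eye(5), J.T @ r)
        print("Gauss-Newton estimate:", m)

        # Mini-batch SGD on the same objective, as an ML training loop would
        # run it: cheap noisy gradient steps instead of linear solves.
        m = np.zeros(5)
        for epoch in range(300):
            for idx in np.array_split(rng.permutation(100), 10):
                r = F(m)[idx] - d[idx]
                m -= 0.01 * jac(m)[idx].T @ r
        print("SGD estimate:        ", m)

    The contrast captures the trade-off the survey discusses: Gauss-Newton converges in a handful of iterations but needs Jacobians and linear solves, while SGD scales to large data at the cost of many cheap, noisy updates.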

    A fully discrete framework for the adaptive solution of inverse problems

    We investigate and contrast the differences between the discretize-then-differentiate and differentiate-then-discretize approaches to the numerical solution of parameter estimation problems. The former approach is attractive in practice due to the use of automatic differentiation for the generation of the dual and optimality equations in the first-order KKT system. The latter strategy is more versatile, in that it allows one to formulate efficient mesh-independent algorithms over suitably chosen function spaces. However, it is significantly more difficult to implement, since automatic code generation is no longer an option. The starting point is a classical elliptic inverse problem. An a priori error analysis for the discrete optimality equation shows that consistency and stability are not inherited automatically from the primal discretization; similar to the concept of dual consistency, we introduce the concept of optimality consistency to capture this. The convergence properties can, however, be restored through suitable consistent modifications of the target functional. Numerical tests confirm the theoretical convergence order for the optimal solution. We then derive a posteriori error estimates for the infinite-dimensional optimal solution error through a suitably chosen error functional. These estimates are constructed using second-order derivative information for the target functional. For computational efficiency, the Hessian is replaced by a low-order BFGS approximation. The efficiency of the error estimator is confirmed by a numerical experiment with multigrid optimization.
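    A minimal sketch of the discretize-then-differentiate route on a 1D elliptic model problem (our own construction; the paper's setting, function names, and parameter values are not reproduced here). The discrete adjoint equation is written out by hand in place of the automatically generated dual equations, and a finite-difference check confirms that it yields the exact gradient of the discretized objective:

        import numpy as np

        n = 99                                   # interior grid nodes on (0, 1)
        h = 1.0 / (n + 1)
        f = np.ones(n)                           # source term

        def stiffness(a):
            # Matrix of the discretized operator -(a u')' with Dirichlet BCs;
            # a holds the n + 1 per-interval coefficient values.
            K = np.diag((a[:-1] + a[1:]) / h**2)
            return K - np.diag(a[1:-1] / h**2, 1) - np.diag(a[1:-1] / h**2, -1)

        # Synthetic observations from a "true" coefficient.
        x_mid = (np.arange(n + 1) + 0.5) * h
        a_true = 1.0 + 0.5 * np.sin(np.pi * x_mid)
        u_obs = np.linalg.solve(stiffness(a_true), f)

        def value_and_grad(a):
            # J(a) = 0.5 * ||u(a) - u_obs||^2 and its exact discrete gradient
            # via the discrete adjoint K(a) lam = u - u_obs (K is symmetric).
            K = stiffness(a)
            u = np.linalg.solve(K, f)
            lam = np.linalg.solve(K, u - u_obs)
            ue = np.concatenate(([0.0], u, [0.0]))   # pad boundary zeros
            le = np.concatenate(([0.0], lam, [0.0]))
            # a_i enters u^T K u as a_i (u_i - u_{i-1})^2 / h^2, hence:
            grad = -np.diff(ue) * np.diff(le) / h**2
            return 0.5 * np.sum((u - u_obs) ** 2), grad

        # The adjoint gradient matches a finite-difference quotient of the
        # *discrete* objective, the defining property of the
        # discretize-then-differentiate approach.
        a0 = 1.0 + 0.1 * x_mid
        J0, g = value_and_grad(a0)
        k, eps = 17, 1e-6
        a1 = a0.copy(); a1[k] += eps
        J1, _ = value_and_grad(a1)
        print("adjoint gradient :", g[k])
        print("finite difference:", (J1 - J0) / eps)

    In practice the adjoint code above is exactly the part that automatic differentiation generates for free in the discretize-then-differentiate approach, whereas the differentiate-then-discretize route would discretize the continuous adjoint PDE separately, which is where the consistency questions studied in the paper arise.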