
    Inverse problems for abstract evolution equations II: higher order differentiability for viscoelasticity

    Abstract. In this follow-up of [Inverse Problems 32 (2016) 085001] we generalize our previous abstract results so that they can be applied to the viscoelastic wave equation, which serves as a forward model for full waveform inversion (FWI) in seismic imaging including dispersion and attenuation. FWI is the nonlinear inverse problem of identifying parameter functions of the viscoelastic wave equation from measurements of the reflected wave field. Here we rigorously derive rather explicit analytic expressions for the Fréchet derivative of the underlying parameter-to-solution map and its adjoint (adjoint state method). These quantities are crucial ingredients of Newton-like gradient descent solvers for FWI. Moreover, we provide the second Fréchet derivative and a related adjoint as ingredients for second-degree solvers.
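    To illustrate the adjoint state method referred to in this abstract, the following is a minimal sketch (not the authors' construction) of how the gradient of a least-squares misfit for a parameter-to-solution map can be assembled from one forward solve and one adjoint solve. The linear forward model A(m)u = q, the observation operator R, and all names are hypothetical toy choices.

```python
import numpy as np

# Toy parameter-to-solution map: A(m) u = q with A(m) = diag(m) + L,
# L a fixed tridiagonal "stiffness" matrix.  Misfit J(m) = 0.5*||R u(m) - d||^2.
n = 50
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # fixed part of the operator
q = np.ones(n)                                          # source term
R = np.eye(n)[::5]                                      # observe every 5th node
d = np.random.default_rng(0).normal(size=R.shape[0])    # synthetic data

def solve_forward(m):
    return np.linalg.solve(np.diag(m) + L, q)

def misfit_and_gradient(m):
    u = solve_forward(m)
    r = R @ u - d                                        # data residual
    # Adjoint state: A(m)^T lam = R^T r
    lam = np.linalg.solve((np.diag(m) + L).T, R.T @ r)
    # dA/dm_i = e_i e_i^T, hence gradient_i = -lam_i * u_i
    grad = -lam * u
    return 0.5 * r @ r, grad

# Finite-difference check of one component of the adjoint-based gradient
m0 = np.full(n, 2.0)
J0, g = misfit_and_gradient(m0)
eps = 1e-6
e = np.zeros(n); e[7] = 1.0
J1, _ = misfit_and_gradient(m0 + eps * e)
print((J1 - J0) / eps, g[7])   # the two numbers should agree closely
```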

    Boundary Approximation Methods For Some Free And Moving Boundary Problems

    Numerical methods for a class of free and moving boundary problems are considered. The class involves the solution of Laplace's equation on a domain which is changing shape with time. The position of the boundary is described by an evolution equation. With the time fixed, a boundary approximation method is employed to solve the potential problem. The boundary location at the next time is determined from the evolution equation using standard techniques, and the process is repeated.

    Two boundary methods are examined. Both are characterized by representing the approximate solution of the potential problem as a series of known basis functions, chosen from a complete set of particular solutions to the Laplace equation. In the first approach, the parameters, to be determined from the boundary data, appear linearly in the trial solution. The basis functions are closely related to the well-studied harmonic polynomials, and this permits an extensive analysis of the linear method. In particular, convergence of the method is demonstrated and some estimates on the degree of convergence are derived. In the second approach, the parameters appear nonlinearly. This approach is new, but derives from classical results on complex rational approximation and may be interpreted as an acceleration of the convergence of the linear technique.

    The linear method is applied to a number of electrochemical machining examples and performs well for relatively smooth boundaries. The nonlinear approach is tested on the inverse machining problem with excellent results.

    Both the linear and nonlinear methods are applied to several challenging examples of Hele-Shaw flow. In all instances, the nonlinear scheme outperforms the linear one. The ease of programming, efficiency, and concomitant accuracy of the nonlinear scheme make it an attractive choice for the numerical integration of a class of free and moving boundary problems.
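    To make the linear boundary approximation concrete, here is a minimal sketch, under assumptions not stated in the abstract, of fitting a truncated series of harmonic polynomials (real and imaginary parts of z^n) to Dirichlet boundary data by linear least squares. The domain, boundary data, and truncation level are illustrative choices only.

```python
import numpy as np

# Trial solution u_N(x, y) = a_0 + sum_{n=1}^{N} [a_n Re(z^n) + b_n Im(z^n)],
# with z = x + i y.  Each basis function is harmonic, so u_N satisfies
# Laplace's equation exactly; only the boundary data are fitted, by least squares.

def harmonic_basis(z, N):
    cols = [np.ones_like(z.real)]
    for n in range(1, N + 1):
        zn = z ** n
        cols += [zn.real, zn.imag]
    return np.column_stack(cols)

# Boundary of the unit disc with Dirichlet data g (trace of the harmonic
# function Re(exp(z)) = exp(x) cos(y), used here as a toy exact solution)
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
z_bnd = np.exp(1j * theta)
g = np.exp(z_bnd.real) * np.cos(z_bnd.imag)

N = 8
A = harmonic_basis(z_bnd, N)
coef, *_ = np.linalg.lstsq(A, g, rcond=None)     # linear least-squares fit

# Evaluate the approximation at an interior point and compare with the truth
z0 = 0.3 + 0.2j
u_approx = harmonic_basis(np.array([z0]), N) @ coef
print(u_approx[0], np.exp(z0.real) * np.cos(z0.imag))
```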

    Nonlinear estimation for linear inverse problems with error in the operator

    We study two nonlinear methods for statistical linear inverse problems when the operator is not known. The two constructions combine Galerkin regularization and wavelet thresholding. Their performances depend on the underlying structure of the operator, quantified by an index of sparsity. We prove their rate-optimality and adaptivity properties over Besov classes. Comment: Published at http://dx.doi.org/10.1214/009053607000000721 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
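    The following is a minimal sketch of the two ingredients named in the abstract, combined in a simplified form that is not the paper's estimator: a Galerkin-style projection onto leading singular vectors of the noisy operator, followed by soft thresholding of single-level Haar coefficients as a stand-in for the full wavelet thresholding step. The operator, noise levels, and thresholds are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete toy model: y = K f + noise, with the operator itself only observed
# up to noise (K_obs = K + operator noise), as in the "error in the operator" setting.
n = 128
x = np.linspace(0, 1, n)
K = np.exp(-80.0 * (x[:, None] - x[None, :]) ** 2)      # smoothing operator
K /= K.sum(axis=1, keepdims=True)
f = (x > 0.3).astype(float) - 0.5 * (x > 0.7)           # piecewise-constant truth
y = K @ f + 0.01 * rng.normal(size=n)
K_obs = K + 0.01 * rng.normal(size=(n, n))              # noisy operator

# Step 1: Galerkin-type regularization -- project onto the leading m singular
# vectors of the *observed* operator and invert the small system there.
m = 30
U, s, Vt = np.linalg.svd(K_obs)
f_gal = Vt[:m].T @ ((U[:, :m].T @ y) / s[:m])

# Step 2: one level of orthonormal Haar analysis + soft thresholding of the
# detail coefficients, then reconstruction.
a = (f_gal[0::2] + f_gal[1::2]) / np.sqrt(2.0)
d = (f_gal[0::2] - f_gal[1::2]) / np.sqrt(2.0)
lam = 0.05
d = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)       # soft threshold
f_hat = np.empty(n)
f_hat[0::2] = (a + d) / np.sqrt(2.0)
f_hat[1::2] = (a - d) / np.sqrt(2.0)

print(np.linalg.norm(f_hat - f) <= np.linalg.norm(f_gal - f))
```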

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. In doing so, we first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and the machine learning communities, we hope that this survey can serve as a bridge in bringing together these two communities and encourage cross-fertilization of ideas. Comment: 13 pages
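    As a concrete instance of casting inversion as an optimization problem, here is a minimal sketch, not drawn from the survey, of a Tikhonov-regularized least-squares inversion solved with a quasi-Newton method. The forward map G, the regularization weight, and the use of SciPy's L-BFGS-B solver are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Toy inversion: recover m from d = G(m) + noise, with a mildly nonlinear
# forward map G, cast as minimization of a regularized least-squares misfit.
n = 40
A = rng.normal(size=(60, n)) / np.sqrt(n)
m_true = np.sin(np.linspace(0, 3 * np.pi, n))

def G(m):
    return A @ m + 0.1 * A @ (m ** 2)           # hypothetical nonlinear forward map

d = G(m_true) + 0.01 * rng.normal(size=60)
alpha = 1e-2                                     # Tikhonov regularization weight

def objective(m):
    r = G(m) - d
    J = 0.5 * r @ r + 0.5 * alpha * m @ m
    # Gradient: J_G(m)^T r + alpha*m, with Jacobian J_G(m) = A (I + 0.2 diag(m))
    grad = (A * (1.0 + 0.2 * m)).T @ r + alpha * m
    return J, grad

res = minimize(objective, np.zeros(n), jac=True, method="L-BFGS-B")
print(res.success, np.linalg.norm(res.x - m_true))
```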

    On optimal solution error covariances in variational data assimilation problems

    The problem of variational data assimilation for a nonlinear evolution model is formulated as an optimal control problem to find unknown parameters such as distributed model coefficients or boundary conditions. The equation for the optimal solution error is derived in terms of the input data errors (background and observation errors), and the optimal solution error covariance operator is expressed in terms of the input data error covariance operators. The quasi-Newton BFGS algorithm is adapted to construct the covariance matrix of the optimal solution error using the inverse Hessian of an auxiliary data assimilation problem based on the tangent linear model constraints. Preconditioning is applied to reduce the number of iterations required by the BFGS algorithm to build a quasi-Newton approximation of the inverse Hessian. Numerical examples are presented for the one-dimensional convection-diffusion model.
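    The link between the BFGS inverse-Hessian approximation and the analysis-error covariance can be illustrated on a linear-Gaussian toy problem; the sketch below is an assumption-laden simplification, not the paper's algorithm. For the quadratic 3D-Var cost J(x) = 0.5 (x - xb)^T B^{-1} (x - xb) + 0.5 (Hx - y)^T R^{-1} (Hx - y), the analysis-error covariance equals the inverse Hessian, (B^{-1} + H^T R^{-1} H)^{-1}; SciPy's BFGS result exposes its quasi-Newton approximation of that matrix as res.hess_inv. B, R, H, and the data are toy choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Linear-Gaussian toy 3D-Var problem (see the lead-in for the cost function).
n, p = 20, 10
B = np.diag(np.full(n, 0.5))       # background-error covariance
R = np.diag(np.full(p, 0.1))       # observation-error covariance
H = rng.normal(size=(p, n))        # linear observation operator
xb = np.zeros(n)                   # background state
y = rng.normal(size=p)             # observations
Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

def cost_and_grad(x):
    rb, ro = x - xb, H @ x - y
    J = 0.5 * rb @ Binv @ rb + 0.5 * ro @ Rinv @ ro
    return J, Binv @ rb + H.T @ (Rinv @ ro)

res = minimize(cost_and_grad, xb, jac=True, method="BFGS")

exact_cov = np.linalg.inv(Binv + H.T @ Rinv @ H)   # exact analysis-error covariance
approx_cov = res.hess_inv                          # BFGS quasi-Newton approximation
print(np.linalg.norm(approx_cov - exact_cov) / np.linalg.norm(exact_cov))
```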