
    Estimating Nuisance Parameters in Inverse Problems

    Many inverse problems include nuisance parameters which, while not of direct interest, are required to recover the primary parameters. Structure present in these problems allows efficient optimization strategies - a well-known example is variable projection, where nonlinear least squares problems that are linear in some parameters can be optimized very efficiently. In this paper, we extend the idea of projecting out a subset of the variables to a broad class of maximum likelihood (ML) and maximum a posteriori (MAP) problems with nuisance parameters, such as variance or degrees of freedom. As a result, we are able to incorporate nuisance parameter estimation into large-scale constrained and unconstrained inverse problem formulations. We apply the approach to a variety of problems, including estimation of unknown variance parameters in the Gaussian model, degree-of-freedom (d.o.f.) parameter estimation in the context of robust inverse problems, automatic calibration, and optimal experimental design. Using numerical examples, we demonstrate improvement in recovery of primary parameters for several large-scale inverse problems. The proposed approach is compatible with a wide variety of algorithms and formulations, and its implementation requires only minor modifications to existing algorithms.
    Comment: 16 pages, 5 figures
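    The variable projection idea mentioned in this abstract can be illustrated with a minimal sketch (assumed toy problem, not the paper's formulation): fitting a sum of two exponentials, where the linear coefficients are eliminated by least squares inside the residual, so the outer solver only sees the nonlinear decay rates.

```python
# Variable projection sketch for a separable nonlinear least-squares fit:
#   y(t) ≈ c1*exp(-a1*t) + c2*exp(-a2*t)
# The linear coefficients c are "projected out" for each candidate a.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 200)
a_true, c_true = np.array([0.5, 2.0]), np.array([1.0, 3.0])
y = np.exp(-np.outer(t, a_true)) @ c_true + 0.01 * rng.standard_normal(t.size)

def residual(a):
    Phi = np.exp(-np.outer(t, a))                 # basis depends on nonlinear a
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # optimal linear c for this a
    return Phi @ c - y                            # reduced residual in a only

sol = least_squares(residual, x0=[0.1, 1.0])      # optimize over a alone
a_hat = np.sort(sol.x)
```

    The outer problem has only two unknowns instead of four, which is the efficiency gain the abstract refers to; the paper extends this elimination from linear coefficients to statistical nuisance parameters.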

    Energy preserving model order reduction of the nonlinear Schr\"odinger equation

    An energy preserving reduced order model is developed for the two-dimensional nonlinear Schr\"odinger equation (NLSE) with plane wave solutions and with an external potential. The NLSE is discretized in space by the symmetric interior penalty discontinuous Galerkin (SIPG) method. The resulting system of Hamiltonian ordinary differential equations is integrated in time by the energy preserving average vector field (AVF) method. The mass and energy preserving reduced order model (ROM) is constructed by proper orthogonal decomposition (POD) Galerkin projection. The nonlinearities in the ROM are computed efficiently by the discrete empirical interpolation method (DEIM) and dynamic mode decomposition (DMD). Preservation of the semi-discrete energy and mass is shown for the full order model (FOM) and for the ROM, which ensures the long-term stability of the solutions. Numerical simulations illustrate the preservation of the energy and mass in the reduced order model for the two-dimensional NLSE with and without the external potential. POD-DMD yields a remarkable improvement in computational speed-up over POD-DEIM. Both methods approximate the FOM accurately, while POD-DEIM is more accurate than POD-DMD.
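    The POD-Galerkin construction described here can be sketched in a few lines on a much simpler stand-in problem (a 1D heat equation with an explicit Euler time stepper; the paper's SIPG/AVF setup is far more elaborate): collect snapshots of the full-order model, extract dominant SVD modes, and project the operator onto them.

```python
# Minimal POD-Galerkin reduced order model sketch (illustrative, not the
# paper's method): 1D heat equation u_t = u_xx with zero Dirichlet BCs.
import numpy as np

n, dt, steps = 50, 1e-4, 500
x = np.linspace(0.0, 1.0, n + 2)[1:-1]            # interior grid points
h = x[1] - x[0]
A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2        # discrete Laplacian

u = np.sin(np.pi * x)                             # initial condition
snaps = []
for _ in range(steps):                            # full order model (FOM)
    u = u + dt * (A @ u)                          # explicit Euler step
    snaps.append(u.copy())
S = np.array(snaps).T                             # snapshot matrix (n x steps)

V = np.linalg.svd(S, full_matrices=False)[0][:, :5]  # first 5 POD modes
Ar = V.T @ A @ V                                  # Galerkin-projected operator
ur = V.T @ np.sin(np.pi * x)                      # reduced initial condition
for _ in range(steps):                            # reduced order model (ROM)
    ur = ur + dt * (Ar @ ur)

err = np.linalg.norm(V @ ur - u) / np.linalg.norm(u)  # relative ROM error
```

    The ROM time steps act on a 5-vector instead of a 50-vector; DEIM and DMD, mentioned in the abstract, address the extra difficulty of evaluating *nonlinear* terms cheaply in the reduced space, which this linear sketch does not need.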

    Optimization Methods for Inverse Problems

    Optimization plays an important role in solving many inverse problems. Indeed, the task of inversion often either involves or is fully cast as the solution of an optimization problem. In this light, the often non-linear, non-convex, and large-scale nature of many of these inversions gives rise to some very challenging optimization problems. The inverse problem community has long been developing various techniques for solving such optimization tasks. However, other, seemingly disjoint communities, such as that of machine learning, have developed, almost in parallel, interesting alternative methods which might have stayed under the radar of the inverse problem community. In this survey, we aim to change that. We first discuss current state-of-the-art optimization methods widely used in inverse problems. We then survey recent related advances in addressing similar challenges in problems faced by the machine learning community, and discuss their potential advantages for solving inverse problems. By highlighting the similarities among the optimization challenges faced by the inverse problem and machine learning communities, we hope that this survey can serve as a bridge between the two and encourage cross-fertilization of ideas.
    Comment: 13 pages
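    As a concrete instance of the kind of optimization problem this survey discusses, here is a minimal sketch (assumed toy setup, not taken from the survey) of a Tikhonov-regularized linear inverse problem solved by plain gradient descent, a baseline method common to both communities, checked against the closed-form solution.

```python
# Gradient descent sketch for min_x ||Ax - b||^2 + lam*||x||^2,
# a Tikhonov-regularized linear inverse problem (illustrative data).
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 40))                 # forward operator
x_true = rng.standard_normal(40)
b = A @ x_true + 0.05 * rng.standard_normal(80)   # noisy data
lam = 1e-2

def grad(x):
    return 2.0 * A.T @ (A @ x - b) + 2.0 * lam * x

L = 2.0 * (np.linalg.norm(A, 2)**2 + lam)         # Lipschitz constant of grad
x = np.zeros(40)
for _ in range(2000):
    x = x - grad(x) / L                           # fixed step size 1/L

x_star = np.linalg.solve(A.T @ A + lam * np.eye(40), A.T @ b)  # closed form
gap = np.linalg.norm(x - x_star)                  # distance to the minimizer
```

    For realistic inverse problems the closed-form solve is unavailable and the objective may be non-convex, which is where the more advanced first- and second-order methods surveyed here come in.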

    Guidance, flight mechanics and trajectory optimization. Volume 11 - Guidance equations for orbital operations

    Mathematical formulation of guidance equations and solutions for orbital space missions.

    Inverse Problems and Data Assimilation

    These notes are designed with the aim of providing a clear and concise introduction to the subjects of Inverse Problems and Data Assimilation, and their inter-relations, together with citations to some relevant literature in this area. The first half of the notes is dedicated to studying the Bayesian framework for inverse problems. Techniques such as importance sampling and Markov chain Monte Carlo (MCMC) methods are introduced; these methods have the desirable property that in the limit of an infinite number of samples they reproduce the full posterior distribution. Since it is often computationally intensive to implement these methods, especially in high-dimensional problems, approximate techniques such as approximating the posterior by a Dirac or a Gaussian distribution are discussed. The second half of the notes covers data assimilation. This refers to a particular class of inverse problems in which the unknown parameter is the initial condition of a dynamical system (and, in the stochastic dynamics case, the subsequent states of the system), and the data comprise partial and noisy observations of that (possibly stochastic) dynamical system. We also demonstrate that methods developed in data assimilation may be employed to study generic inverse problems, by introducing an artificial time to generate a sequence of probability measures interpolating from the prior to the posterior.
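    The MCMC sampling mentioned in this abstract can be illustrated with a minimal random-walk Metropolis sketch (assumed one-dimensional toy problem, not from the notes): a scalar parameter with a Gaussian prior and one noisy observation, where the Gaussian posterior is known in closed form and so the chain's mean can be checked against it.

```python
# Random-walk Metropolis sketch for a 1D Bayesian inverse problem:
# prior u ~ N(0, 1), datum y = u + noise with noise ~ N(0, 0.25).
import numpy as np

rng = np.random.default_rng(2)
y, s2, p2 = 1.0, 0.25, 1.0                        # datum, noise var, prior var

def log_post(u):
    # log-likelihood + log-prior, up to an additive constant
    return -0.5 * (y - u)**2 / s2 - 0.5 * u**2 / p2

u, chain = 0.0, []
for _ in range(50_000):
    prop = u + 0.8 * rng.standard_normal()        # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(u):
        u = prop                                  # Metropolis accept step
    chain.append(u)

post_mean = np.mean(chain[5_000:])                # discard burn-in
exact_mean = y * p2 / (p2 + s2)                   # conjugate Gaussian: 0.8
```

    In the limit of infinitely many samples the chain reproduces the full posterior, as the abstract states; the Gaussian and Dirac approximations it mentions trade that exactness for tractability in high dimensions.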