
    ROAM: a Radial-basis-function Optimization Approximation Method for diagnosing the three-dimensional coronal magnetic field

    The Coronal Multichannel Polarimeter (CoMP) routinely performs coronal polarimetric measurements using the Fe XIII 10747 Å and 10798 Å lines, which are sensitive to the coronal magnetic field. However, inverting such polarimetric measurements into magnetic field data is a difficult task because the corona is optically thin at these wavelengths and the observed signal is therefore the integrated emission of all the plasma along the line of sight. To overcome this difficulty, we take a new approach that combines a parameterized 3D magnetic field model with forward modeling of the polarization signal. For that purpose, we develop a new, fast and efficient optimization method for model-data fitting: the Radial-basis-functions Optimization Approximation Method (ROAM). Model-data fitting is achieved by optimizing a user-specified log-likelihood function that quantifies the differences between the observed polarization signal and its synthetic/predicted analogue. Speed and efficiency are obtained by combining sparse evaluation of the magnetic model with radial-basis-function (RBF) decomposition of the log-likelihood function. The RBF decomposition provides an analytical expression for the log-likelihood function that is used to inexpensively estimate the set of parameter values optimizing it. We test and validate ROAM on a synthetic test bed of a coronal magnetic flux rope and show that it performs well with a significantly sparse sample of the parameter space. We conclude that our optimization method is well suited for fast and efficient model-data fitting and can be exploited for converting coronal polarimetric measurements, such as the ones provided by CoMP, into coronal magnetic field data.
    (23 pages, 12 figures; accepted in Frontiers in Astronomy and Space Science)
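
    The surrogate-optimization loop at the heart of this approach fits in a few lines. The sketch below is a minimal illustration, not the authors' ROAM code: a cheap synthetic two-parameter log-likelihood stands in for the expensive forward-modeled one, an RBF surrogate is fitted to a sparse parameter sample, and the surrogate is then optimized inexpensively (the kernel choice, sample size, and bounds are arbitrary assumptions).

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def log_likelihood(theta):
    # Stand-in for the expensive forward model + polarimetric misfit;
    # here a smooth synthetic function with a known optimum.
    return -np.sum((theta - np.array([0.3, -0.7])) ** 2)

# Sparse sample of the 2-D parameter space.
samples = rng.uniform(-1.0, 1.0, size=(40, 2))
values = np.array([log_likelihood(t) for t in samples])

# Analytical RBF surrogate of the log-likelihood.
surrogate = RBFInterpolator(samples, values, kernel="thin_plate_spline")

# Optimize the cheap surrogate instead of the expensive model.
res = minimize(lambda t: -surrogate(t[None, :])[0],
               x0=np.zeros(2), method="L-BFGS-B",
               bounds=[(-1, 1), (-1, 1)])
print("estimated optimum:", res.x)
```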

    The heat modulated infinite dimensional Heston model and its numerical approximation

    The HEat modulated Infinite DImensional Heston (HEIDIH) model and its numerical approximation are introduced and analyzed. This model falls into the general framework of infinite dimensional Heston stochastic volatility models of (F.E. Benth, I.C. Simonsen '18), introduced for the pricing of forward contracts. The HEIDIH model consists of a one-dimensional stochastic advection equation coupled with a stochastic volatility process, defined as a Cholesky-type decomposition of the tensor product of a Hilbert-space-valued Ornstein-Uhlenbeck process, itself the mild solution to the stochastic heat equation on the real half-line. The advection and heat equations are driven by independent space-time Gaussian processes which are white in time and colored in space, with the latter covariance structure expressed by two different kernels. First, a class of weight-stationary kernels is given, under which regularity results for the HEIDIH model in fractional Sobolev spaces are formulated. In particular, the class includes weighted Matérn kernels. Second, numerical approximation of the model is considered. An error decomposition formula, pointwise in space and time, for a finite-difference scheme is proven. For a special case, essentially sharp convergence rates are obtained when this is combined with a fully discrete finite element approximation of the stochastic heat equation. The analysis takes into account a localization error, a pointwise-in-space finite element discretization error and an error stemming from the noise being sampled pointwise in space. The rates obtained in the analysis are higher than what would be obtained using a standard Sobolev embedding technique. Numerical simulations illustrate the results.
    (35 pages, 7 figures)
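
    As a toy illustration of the kind of finite-difference discretization analyzed here, the sketch below integrates a stochastic heat equation with noise sampled pointwise in space on a truncated half-line. It is a simplified stand-in, not the HEIDIH scheme: the noise is spatially white (the paper's coloured kernels and the coupled advection/volatility structure are omitted), and the grid sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Explicit Euler-Maruyama finite-difference scheme for du = u_xx dt + dW
# on a truncated half-line [0, L] with Dirichlet boundaries.
L, n, T, m = 10.0, 200, 1.0, 5000
dx, dt = L / n, T / m          # dt/dx^2 = 0.08 keeps the scheme stable
x = np.linspace(0.0, L, n + 1)
u = np.exp(-x)                 # arbitrary initial condition

for _ in range(m):
    lap = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2
    # Noise white in time, sampled pointwise in space; spatially
    # coloured kernels are omitted in this toy version.
    noise = rng.standard_normal(n - 1) * np.sqrt(dt / dx)
    u[1:-1] += lap * dt + noise
    u[0] = u[-1] = 0.0         # Dirichlet truncation of the half-line

print("mean, std at T:", u.mean(), u.std())
```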

    Non-linear model reduction for uncertainty quantification in large-scale inverse problems

    We present a model reduction approach to the solution of large-scale statistical inverse problems in a Bayesian inference setting. A key to the model reduction is an efficient representation of the non-linear terms in the reduced model. To achieve this, we present a formulation that employs masked projection of the discrete equations; that is, we compute an approximation of the non-linear term using a select subset of interpolation points. Further, through this formulation we show similarities among the existing techniques of gappy proper orthogonal decomposition, missing point estimation, and empirical interpolation via coefficient-function approximation. The resulting model reduction methodology is applied to a highly non-linear combustion problem governed by an advection-diffusion-reaction partial differential equation (PDE). Our reduced model is used as a surrogate for a finite element discretization of the non-linear PDE within the Markov chain Monte Carlo sampling employed by the Bayesian inference approach. In two spatial dimensions, we show that this approach yields accurate results while reducing the computational cost by several orders of magnitude. For the full three-dimensional problem, a forward solve using a reduced model that has high fidelity over the input parameter space is more than two million times faster than the full-order finite element model, making tractable the solution of the statistical inverse problem that would otherwise require many years of CPU time.
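
    The masked-projection idea shared by gappy POD, missing point estimation, and empirical interpolation can be seen in a few lines. The sketch below is an illustrative DEIM-style point selection on synthetic snapshots, not the paper's combustion setup: a non-linear term is reconstructed from only k sampled entries, with the snapshot data and dimensions made up for the example.

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM-style point selection for a POD basis U (n x k)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, U.shape[1]):
        # Residual of the j-th basis vector after interpolating at idx.
        c = np.linalg.solve(U[idx, :j], U[idx, j])
        r = U[:, j] - U[:, :j] @ c
        idx.append(int(np.argmax(np.abs(r))))
    return np.array(idx)

# Synthetic snapshots of a non-linear term over parameter samples.
rng = np.random.default_rng(2)
n, ns, k = 500, 60, 8
snaps = np.sin(np.outer(np.linspace(0, 1, n), rng.uniform(1, 9, ns)))
U, _, _ = np.linalg.svd(snaps, full_matrices=False)
U = U[:, :k]                            # POD basis of the non-linearity

P = deim_indices(U)                     # masked / interpolation points
f = snaps[:, 0]                         # a "new" non-linear evaluation
f_hat = U @ np.linalg.solve(U[P], f[P]) # reconstruct from k entries only
print("relative error:", np.linalg.norm(f - f_hat) / np.linalg.norm(f))
```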

    Distributed and Parallel Algorithms for Set Cover Problems with Small Neighborhood Covers

    In this paper, we study a class of set cover problems that satisfy a special property which we call the small neighborhood cover property. This class encompasses several well-studied problems including vertex cover, interval cover, bag interval cover and tree cover. We design unified distributed and parallel algorithms that can handle any set cover problem falling under the above framework and yield constant-factor approximations. These algorithms run in polylogarithmic communication rounds in the distributed setting and are in NC in the parallel setting.
    (Full version of the FSTTCS'13 paper)
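
    Vertex cover, the simplest problem in this class, illustrates what a constant-factor guarantee looks like. The sketch below is the classic sequential maximal-matching 2-approximation, offered only as a baseline; it is not the paper's distributed or parallel (NC) algorithm.

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of a greedily built maximal matching;
    the resulting cover is at most twice the optimum."""
    cover, matched = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))
            cover.update((u, v))
    return cover

# A 4-cycle: the matching picks two edges, giving a cover of size 4
# against an optimum of 2.
print(vertex_cover_2approx([(1, 2), (2, 3), (3, 4), (4, 1)]))
```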

    Inference via low-dimensional couplings

    We investigate the low-dimensional structure of deterministic transformations between random variables, i.e., transport maps between probability measures. In the context of statistics and machine learning, these transformations can be used to couple a tractable "reference" measure (e.g., a standard Gaussian) with a target measure of interest. Direct simulation from the desired measure can then be achieved by pushing forward reference samples through the map. Yet characterizing such a map (e.g., representing and evaluating it) grows challenging in high dimensions. The central contribution of this paper is to establish a link between the Markov properties of the target measure and the existence of low-dimensional couplings, induced by transport maps that are sparse and/or decomposable. Our analysis not only facilitates the construction of transformations in high-dimensional settings, but also suggests new inference methodologies for continuous non-Gaussian graphical models. For instance, in the context of nonlinear state-space models, we describe new variational algorithms for filtering, smoothing, and sequential parameter inference. These algorithms can be understood as the natural generalization, to the non-Gaussian case, of the square-root Rauch-Tung-Striebel Gaussian smoother.
    (78 pages, 25 figures)
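
    The pushforward mechanism is easy to picture with a toy triangular map. The sketch below is a minimal illustration, not from the paper: a lower-triangular (Knothe-Rosenblatt-style) map takes standard Gaussian reference samples to a banana-shaped target, and each component depending only on earlier variables is a toy instance of the sparsity the paper connects to Markov structure (the map coefficients are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)

def T(z):
    # Lower-triangular map: component 1 depends on z1 only,
    # component 2 on (z1, z2) and is monotone in z2.
    x1 = z[:, 0]
    x2 = 0.5 * z[:, 1] + x1**2 - 1.0
    return np.column_stack([x1, x2])

z = rng.standard_normal((10_000, 2))   # reference samples
x = T(z)                               # pushed-forward target samples
print("target sample mean:", x.mean(axis=0))
```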

    A unified wavelet-based modelling framework for non-linear system identification: the WANARX model structure

    A new unified modelling framework based on the superposition of additive submodels, functional components, and wavelet decompositions is proposed for non-linear system identification. A non-linear model, which is often represented using a multivariate non-linear function, is initially decomposed into a number of functional components via the well-known analysis of variance (ANOVA) expansion, which can be viewed as a special form of the NARX (non-linear autoregressive with exogenous inputs) model for representing dynamic input-output systems. By expanding each functional component using wavelet decompositions, including the regular lattice frame decomposition, wavelet series and multiresolution wavelet decompositions, the multivariate non-linear model can then be converted into a linear-in-the-parameters problem, which can be solved using least-squares type methods. An efficient model structure determination approach based upon a forward orthogonal least squares (OLS) algorithm, which involves a stepwise orthogonalization of the regressors and a forward selection of the relevant model terms based on the error reduction ratio (ERR), is employed to solve the linear-in-the-parameters problem in the present study. The new modelling structure is referred to as a wavelet-based ANOVA decomposition of the NARX model, or simply the WANARX model, and can be applied to represent high-order and high-dimensional non-linear systems.
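
    The structure-determination step is concrete enough to sketch. Below is a minimal forward OLS with ERR ranking on toy data, assuming the wavelet regressors have already been assembled into a candidate matrix; it illustrates the selection mechanism only and is not the authors' implementation.

```python
import numpy as np

def forward_ols_err(P, y, n_terms):
    """Forward orthogonal least squares with error-reduction-ratio
    (ERR) ranking over candidate regressor columns of P."""
    Q = P.astype(float).copy()
    yy = y @ y
    selected, errs = [], []
    for _ in range(n_terms):
        num = (Q.T @ y) ** 2
        den = np.einsum("ij,ij->j", Q, Q)     # squared column norms
        err = num / (np.maximum(den, 1e-12) * yy)
        err[den < 1e-12] = 0.0
        err[selected] = 0.0                   # never reselect a term
        j = int(np.argmax(err))
        selected.append(j); errs.append(err[j])
        q = Q[:, j] / np.linalg.norm(Q[:, j])
        Q -= np.outer(q, q @ Q)               # orthogonalize the rest
        Q[:, j] = 0.0
    return selected, errs

# Toy identification: y depends on two of five candidate terms.
rng = np.random.default_rng(4)
X = rng.standard_normal((300, 5))
y = 2.0 * X[:, 1] - 0.5 * X[:, 3] + 0.01 * rng.standard_normal(300)
print(forward_ols_err(X, y, 2))   # expects columns 1 and 3
```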