
    Boundary value processes: estimation and identification

    Recent results obtained for boundary value processes and the associated smoothing and identification problems are presented in this paper. Both lumped and distributed parameter models are considered. Some open problems are discussed, and the fundamental mathematical difficulties that arise in studying nonlinear extensions of the proposed models are mentioned.

    Gradient-Based Estimation of Uncertain Parameters for Elliptic Partial Differential Equations

    This paper addresses the estimation of uncertain distributed diffusion coefficients in elliptic systems based on noisy measurements of the model output. We formulate the parameter identification problem as an infinite dimensional constrained optimization problem, for which we establish the existence of minimizers as well as first order necessary conditions. A spectral approximation of the uncertain observations allows us to approximate the infinite dimensional problem by a smooth, albeit high dimensional, deterministic optimization problem, the so-called finite noise problem, in the space of functions with bounded mixed derivatives. We prove convergence of finite noise minimizers to the appropriate infinite dimensional ones and devise a stochastic augmented Lagrangian method for locating them numerically. Lastly, we illustrate our method with three numerical examples.
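
    The following is a minimal sketch, not the paper's implementation, of the adjoint-based gradient computation that underlies this kind of diffusion-coefficient identification: a 1D elliptic problem -(a(x) u')' = f with homogeneous Dirichlet conditions is discretized by finite differences, and the gradient of a regularized least-squares misfit with respect to a cell-wise coefficient is obtained from one adjoint solve. The paper's spectral (finite noise) approximation and stochastic augmented Lagrangian method are not reproduced here; the synthetic data, grid sizes, and regularization weight are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

n = 100                          # interior grid points
h = 1.0 / (n + 1)
f = np.ones(n)                   # constant source term
alpha = 1e-6                     # small Tikhonov weight (assumed, for well-posedness)

def assemble(a):
    """Stiffness matrix of -(a u')' with one coefficient value per cell (n + 1 cells)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (a[i] + a[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -a[i + 1] / h**2
    return A

# Synthetic noisy data from a "true" coefficient (purely illustrative setup).
x_mid = (np.arange(n + 1) + 0.5) * h            # cell midpoints
a_true = 1.0 + 0.5 * np.sin(np.pi * x_mid)
rng = np.random.default_rng(0)
d = np.linalg.solve(assemble(a_true), f) + 1e-4 * rng.standard_normal(n)

def misfit_and_gradient(a):
    """J(a) = 0.5||u(a) - d||^2 + 0.5*alpha*||a - 1||^2 and its gradient via one adjoint solve."""
    A = assemble(a)
    u = np.linalg.solve(A, f)
    r = u - d
    # Adjoint equation: A^T p = -(u - d).
    p = np.linalg.solve(A.T, -r)
    # dJ/da_k = p^T (dA/da_k) u; with this stencil it reduces to a product of
    # finite-difference slopes of u and p across cell k (boundary values are zero).
    u_ext = np.concatenate(([0.0], u, [0.0]))
    p_ext = np.concatenate(([0.0], p, [0.0]))
    g = np.diff(p_ext) * np.diff(u_ext) / h**2 + alpha * (a - 1.0)
    J = 0.5 * r.dot(r) + 0.5 * alpha * np.sum((a - 1.0) ** 2)
    return J, g

# Recover the coefficient with a bound-constrained quasi-Newton method.
a0 = np.ones(n + 1)
res = minimize(misfit_and_gradient, a0, jac=True, method="L-BFGS-B",
               bounds=[(1e-2, None)] * (n + 1))
print("final misfit:", res.fun)
```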

    Robust Optimization of PDEs with Random Coefficients Using a Multilevel Monte Carlo Method

    This paper addresses optimization problems constrained by partial differential equations with uncertain coefficients. In particular, the robust control problem and the average control problem are considered for a tracking-type cost functional with an additional penalty on the variance of the state. The expressions for the gradient and Hessian corresponding to either problem contain expected value operators. Due to the large number of uncertainties considered in our model, we propose to evaluate these expectations using a multilevel Monte Carlo (MLMC) method. Under mild assumptions, it is shown that this results in the gradient and Hessian corresponding to the MLMC estimator of the original cost functional. Furthermore, we show that the use of certain correlated samples yields a reduction in the total number of samples required. Two optimization methods are investigated: the nonlinear conjugate gradient method and the Newton method. For both, a specific algorithm is provided that dynamically decides which and how many samples should be taken in each iteration. The cost of the optimization up to some specified tolerance τ is shown to be proportional to the cost of a gradient evaluation with requested root mean square error τ. The algorithms are tested on a model elliptic diffusion problem with a lognormal diffusion coefficient. An additional nonlinear term is also considered.
    Comment: This work was presented at the IMG 2016 conference (Dec 5 - Dec 9, 2016), at the Copper Mountain conference (Mar 26 - Mar 30, 2017), and at the FrontUQ conference (Sept 5 - Sept 8, 2017).
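
    As a hedged illustration of the multilevel Monte Carlo idea used here (not the authors' algorithm, which applies MLMC to gradient and Hessian evaluations inside an optimization loop), the sketch below estimates the expected value of a scalar quantity of interest for a 1D elliptic problem with a lognormal diffusion coefficient. The essential point is the telescoping sum over levels, with each correction Q_l - Q_{l-1} evaluated on the same random sample of the coefficient so that its variance is small; the random field, mesh hierarchy, and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def coefficient(x, xi):
    """Smooth lognormal coefficient a(x) = exp(z(x)) built from a few sine modes (illustrative)."""
    z = sum(xi[k] * np.sin((k + 1) * np.pi * x) / (k + 1) for k in range(len(xi)))
    return np.exp(0.3 * z)

def solve_qoi(level, xi):
    """Solve -(a u')' = 1 on (0,1), u(0) = u(1) = 0, on 2**(level+3) cells; return the integral of u."""
    m = 2 ** (level + 3)
    h = 1.0 / m
    a = coefficient((np.arange(m) + 0.5) * h, xi)   # coefficient at cell midpoints
    n = m - 1                                       # interior nodes
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (a[i] + a[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a[i] / h**2
        if i < n - 1:
            A[i, i + 1] = -a[i + 1] / h**2
    u = np.linalg.solve(A, np.ones(n))
    return h * u.sum()                              # simple quadrature; boundary values are zero

def mlmc_estimate(num_samples):
    """Telescoping MLMC estimate of E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]."""
    estimate = 0.0
    for level, n_l in enumerate(num_samples):
        corrections = []
        for _ in range(n_l):
            xi = rng.standard_normal(4)             # one sample of the random field parameters
            q_fine = solve_qoi(level, xi)
            q_coarse = solve_qoi(level - 1, xi) if level > 0 else 0.0
            corrections.append(q_fine - q_coarse)   # same xi on both meshes -> correlated pair
        estimate += np.mean(corrections)
    return estimate

# Many cheap coarse samples, few expensive fine samples.
print("MLMC estimate of E[Q]:", mlmc_estimate([400, 100, 25]))
```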