
    Parameter Estimation via Conditional Expectation --- A Bayesian Inversion

    When a mathematical or computational model is used to analyse some system, it is usual that some parameters, or more generally functions or fields, in the model are not known, and hence uncertain. These parametric quantities are then identified from actual observations of the response of the real system. In a probabilistic setting, Bayes's theory is the proper mathematical background for this identification process. The possibility of being able to compute a conditional expectation turns out to be crucial for this purpose. We show how this theoretical background can be used in an actual numerical procedure, and briefly discuss various numerical approximations.
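    The abstract keeps the numerics at a high level; the following is a minimal, self-contained sketch of one common way a conditional-expectation update is realised in practice, namely a sample-based linear (Kalman-type) approximation. The forward model G, the prior, the noise level, and the assumed true parameter are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the authors' code): a sample-based linear conditional-expectation
# update for an uncertain scalar parameter q observed through a forward model G.
import numpy as np

rng = np.random.default_rng(0)

def G(q):
    # hypothetical forward model: system response as a function of the parameter q
    return q**3 + q

# prior samples of the uncertain parameter
q_prior = rng.normal(loc=1.0, scale=0.5, size=5000)

# predicted observations: model response plus additive measurement noise
sigma_eps = 0.1
y_pred = G(q_prior) + rng.normal(scale=sigma_eps, size=q_prior.size)

# observed response of the "real" system (assumed truth q = 1.3)
y_obs = G(1.3)

# linear approximation of the conditional expectation E[q | y]:
# q_post = q_prior + K (y_obs - y_pred), with the Kalman-type gain
# K = Cov(q, y) / Var(y) estimated from the samples
cov_qy = np.cov(q_prior, y_pred)[0, 1]
K = cov_qy / np.var(y_pred)

q_post = q_prior + K * (y_obs - y_pred)
print("prior mean %.3f -> posterior mean %.3f" % (q_prior.mean(), q_post.mean()))
```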

    Nonlinear Attitude Filtering: A Comparison Study

    This paper contains a concise comparison of a number of nonlinear attitude filtering methods that have attracted attention in the robotics and aviation literature. With the help of previously published surveys and comparison studies, the vast literature on the subject is narrowed down to a small pool of competitive attitude filters. Amongst these filters is a second-order optimal minimum-energy filter recently proposed by the authors. Easily comparable discretized unit quaternion implementations of the selected filters are provided. We conduct a simulation study and compare the transient behaviour and asymptotic convergence of these filters in two scenarios with different initialization and measurement errors inspired by applications in unmanned aerial robotics and space flight. The second-order optimal minimum-energy filter is shown to have the best performance of all filters, including the industry standard multiplicative extended Kalman filter (MEKF).
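    As a point of reference for the discretized unit-quaternion implementations mentioned above, here is a minimal sketch of the propagation step such filters share: integrating the quaternion kinematics from a body-frame angular-rate signal. It is not any of the compared filters; the rate value, step size, and initial attitude are illustrative assumptions.

```python
# Minimal sketch (illustrative only): discretized unit-quaternion propagation
# from a constant body-frame angular rate, a kinematic building block shared
# by quaternion-based attitude filters.
import numpy as np

def quat_mult(p, q):
    # Hamilton product of quaternions stored as [w, x, y, z]
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def propagate(q, omega, dt):
    # exact integration of dq/dt = 0.5 * q * [0, omega] for constant omega over dt
    angle = np.linalg.norm(omega) * dt
    if angle < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)   # renormalize to stay on the unit sphere

q = np.array([1.0, 0.0, 0.0, 0.0])         # identity attitude
omega = np.array([0.0, 0.0, np.pi / 2])    # 90 deg/s yaw rate
for _ in range(100):
    q = propagate(q, omega, 0.01)          # integrate for 1 s
print("attitude after 1 s (90 deg about z):", np.round(q, 4))
```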

    Asymptotic forecast uncertainty and the unstable subspace in the presence of additive model error

    It is well understood that dynamic instability is among the primary drivers of forecast uncertainty in chaotic, physical systems. Data assimilation techniques have been designed to exploit this phenomenon, reducing the effective dimension of the data assimilation problem to the directions of rapidly growing errors. Recent mathematical work has, moreover, provided formal proofs of the central hypothesis of the assimilation in the unstable subspace methodology of Anna Trevisan and her collaborators: for filters and smoothers in perfect, linear, Gaussian models, the distribution of forecast errors asymptotically conforms to the unstable-neutral subspace. Specifically, the column span of the forecast and posterior error covariances asymptotically aligns with the span of backward Lyapunov vectors with nonnegative exponents. Earlier mathematical studies have focused on perfect models, and this current work now explores the relationship between dynamical instability, the precision of observations, and the evolution of forecast error in linear models with additive model error. We prove bounds for the asymptotic uncertainty, explicitly relating the rate of dynamical expansion, model precision, and observational accuracy. Formalizing this relationship, we provide a novel, necessary criterion for the boundedness of forecast errors. Furthermore, we numerically explore the relationship between observational design, dynamical instability, and filter boundedness. Additionally, we include a detailed introduction to the multiplicative ergodic theorem and to the theory and construction of Lyapunov vectors. While forecast error in the stable subspace may not generically vanish, we show that even without filtering, uncertainty remains uniformly bounded due to its dynamical dissipation. However, the continuous reinjection of uncertainty from model errors may be excited by transient instabilities in the stable modes of high variance, rendering forecast uncertainty impractically large. In the context of ensemble data assimilation, this requires rectifying the rank of the ensemble-based gain to account for the growth of uncertainty beyond the unstable and neutral subspace, additionally correcting stable modes with frequent occurrences of positive local Lyapunov exponents that excite model errors.
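    The multiplicative ergodic theorem and the Lyapunov vectors introduced in the abstract rest on products of linear propagators, and a standard way to estimate the exponents numerically is repeated QR factorization of those products. The sketch below applies this construction to an assumed three-dimensional toy propagator, not the paper's experiments, with one unstable, one neutral, and one stable direction.

```python
# Minimal sketch (toy setup, not the paper's experiments): estimating Lyapunov
# exponents of a linear model x_{k+1} = M x_k by repeated QR factorization.
import numpy as np

rng = np.random.default_rng(1)

# hypothetical propagator with eigenvalues 1.05 (unstable), 1.0 (neutral), 0.7 (stable)
A = np.diag([1.05, 1.0, 0.7])
V = rng.normal(size=(3, 3))
M = V @ A @ np.linalg.inv(V)

Q = np.linalg.qr(rng.normal(size=(3, 3)))[0]   # random orthonormal initial frame
log_growth = np.zeros(3)
n_steps = 2000
for _ in range(n_steps):
    Q, R = np.linalg.qr(M @ Q)                 # re-orthonormalize the propagated frame
    log_growth += np.log(np.abs(np.diag(R)))   # accumulate per-direction growth rates

lyap = log_growth / n_steps
print("estimated exponents:   ", np.round(lyap, 4))
print("reference log-moduli:  ", np.round(np.log([1.05, 1.0, 0.7]), 4))
```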

    Rational minimax approximation via adaptive barycentric representations

    Computing rational minimax approximations can be very challenging when there are singularities on or near the interval of approximation - precisely the case where rational functions outperform polynomials by a landslide. We show that far more robust algorithms than previously available can be developed by making use of rational barycentric representations whose support points are chosen in an adaptive fashion as the approximant is computed. Three variants of this barycentric strategy are all shown to be powerful: (1) a classical Remez algorithm, (2) an "AAA-Lawson" method of iteratively reweighted least-squares, and (3) a differential correction algorithm. Our preferred combination, implemented in the Chebfun MINIMAX code, is to use (2) in an initial phase and then switch to (1) for generically quadratic convergence. By such methods we can calculate approximations up to type (80, 80) of |x| on [-1, 1] in standard 16-digit floating point arithmetic, a problem for which Varga, Ruttan, and Carpenter required 200-digit extended precision.
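    For readers unfamiliar with the representation, a rational function in barycentric form is r(x) = (sum_j w_j f_j / (x - t_j)) / (sum_j w_j / (x - t_j)), where the support points t_j are exactly what the adaptive algorithms above choose on the fly. The sketch below only evaluates such a representation; the support points, data, and weights are illustrative assumptions, not output of the Chebfun MINIMAX code.

```python
# Minimal sketch (illustrative, not the Chebfun MINIMAX code): evaluating a
# rational function given in barycentric form with support points t, data f,
# and weights w.
import numpy as np

def bary_eval(x, t, f, w):
    # barycentric evaluation; exact at the support points t_j by construction
    x = np.asarray(x, dtype=float)
    r = np.empty_like(x)
    for i, xi in enumerate(x):
        d = xi - t
        hit = np.isclose(d, 0.0)
        if hit.any():
            r[i] = f[hit][0]           # x coincides with a support point
        else:
            c = w / d
            r[i] = (c @ f) / c.sum()
    return r

# hypothetical support points, data, and weights interpolating |x| on [-1, 1]
t = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
f = np.abs(t)
w = np.array([1.0, -2.0, 2.0, -2.0, 1.0])  # illustrative weights only

xs = np.linspace(-1, 1, 9)
print(np.round(bary_eval(xs, t, f, w), 4))
```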