    Strong convergence rates of probabilistic integrators for ordinary differential equations

    Probabilistic integration of a continuous dynamical system is a way of systematically introducing model error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (Stat. Comput., 2017), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially-bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.

    Comment: 25 pages
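
    As a concrete illustration of the class of schemes analysed here, the following is a minimal sketch, not the authors' code, of a randomised Euler integrator in the style of Conrad et al.: each deterministic Euler step is followed by a centred Gaussian perturbation whose per-step variance is O(h^3), a scaling under which a first-order method retains its deterministic mean-square rate. The function name, the noise scale sigma, and the test problem are illustrative assumptions.

```python
import numpy as np

def randomised_euler(f, x0, t0, t1, h, sigma=1.0, rng=None):
    """Probabilistic (randomised) Euler integrator: a minimal sketch.

    After each deterministic Euler step, a centred Gaussian perturbation
    with standard deviation sigma * h**1.5 (per-step variance O(h^3)) is
    added, so the noise is not of higher order than the deterministic
    Euler error and the order-1 mean-square rate is preserved.
    """
    rng = np.random.default_rng() if rng is None else rng
    ts = np.arange(t0, t1 + h, h)
    xs = np.empty((len(ts), np.size(x0)))
    xs[0] = x0
    for n in range(len(ts) - 1):
        drift = xs[n] + h * np.asarray(f(ts[n], xs[n]))
        noise = sigma * h**1.5 * rng.standard_normal(xs[n].shape)
        xs[n + 1] = drift + noise
    return ts, xs

# An ensemble of randomised solutions of x' = -x, x(0) = 1: the spread
# of the ensemble is the model-error signal used for UQ.
if __name__ == "__main__":
    sols = [randomised_euler(lambda t, x: -x, [1.0], 0.0, 2.0, 0.01)[1]
            for _ in range(100)]
    print("ensemble spread at t = 2:", np.std([s[-1] for s in sols]))
```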

    The Optimal Uncertainty Algorithm in the Mystic Framework

    We have recently proposed a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront, providing a framework for the communication and comparison of UQ results. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information, there exist bounds on uncertainties obtained as values of optimization problems, and that these bounds are optimal. It provides a uniform environment for the optimal solution of the problems of validation, certification, experimental design, reduced-order modeling, prediction, and extrapolation, all under aleatoric and epistemic uncertainties. OUQ optimization problems are extremely large, and even though under general conditions they have finite-dimensional reductions, they must often be solved numerically. This general algorithmic framework for OUQ has been implemented in the mystic optimization framework. We describe this implementation and demonstrate its use in the context of the Caltech surrogate model for hypervelocity impact.
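
    The abstract does not reproduce mystic's API, so the following sketch substitutes scipy's differential evolution solver as a stand-in to show the shape of an OUQ computation: an optimal upper bound on a failure probability is obtained by maximising over the discrete measures supplied by the finite-dimensional reduction. The response function g, threshold t, mean constraint m, and penalty weight are all hypothetical.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy OUQ problem: maximise P[g(X) >= t] over probability measures on
# [0, 1] with E[X] = m.  By the finite-dimensional reduction it suffices
# to search over measures with two support points (one more atom than
# the single moment constraint).

g = lambda x: x**2          # hypothetical response function
t, m = 0.5, 0.4             # failure threshold and prescribed mean

def neg_failure_prob(z):
    x1, x2, w = z                        # two atoms and the first weight
    mean_penalty = 1e3 * abs(w * x1 + (1 - w) * x2 - m)
    p_fail = w * (g(x1) >= t) + (1 - w) * (g(x2) >= t)
    return -p_fail + mean_penalty        # minimise the negative

res = differential_evolution(neg_failure_prob,
                             bounds=[(0, 1), (0, 1), (0, 1)],
                             seed=0, tol=1e-10)
# The exact optimum puts mass w = m / sqrt(t) ~ 0.566 at x = sqrt(t)
# and the remaining mass at 0, so the bound is ~ 0.566.
print("optimal upper bound on failure probability:", -res.fun)
```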

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as extreme values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions, they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, contrary to the classical sensitivity analysis paradigm, these results show that uncertainties in input parameters do not necessarily propagate to output uncertainties. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact, suggesting the feasibility of the framework for important complex systems.
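
    For reference, the classical McDiarmid inequality that these Optimal Concentration Inequalities sharpen is easy to evaluate directly; the following minimal sketch, with hypothetical subdiameter values, shows the baseline bound being improved upon.

```python
import math

def mcdiarmid_bound(t, subdiameters):
    """Classical McDiarmid bound: P[f(X) - E f(X) >= t] <= exp(-2 t^2 / sum d_i^2),

    where d_i is the i-th McDiarmid subdiameter, i.e. the maximum change
    in f when only coordinate i is varied.  The paper's OCI sharpen this
    bound; this is only the classical baseline.
    """
    d2 = sum(d * d for d in subdiameters)
    return math.exp(-2.0 * t * t / d2)

# Hypothetical inputs: three coordinates with subdiameters 1.0, 0.5,
# 0.25 and deviation t = 1.0.
print(mcdiarmid_bound(1.0, [1.0, 0.5, 0.25]))   # ~ 0.218
```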

    Optimal uncertainty quantification for legacy data observations of Lipschitz functions

    We consider the problem of providing optimal uncertainty quantification (UQ), and hence rigorous certification, for partially observed functions. We present a UQ framework within which the observations may be small or large in number, and need not carry information about the probability distribution of the system in operation. The UQ objectives are posed as optimization problems, the solutions of which are optimal bounds on the quantities of interest; we consider two typical settings, namely parameter sensitivities (McDiarmid diameters) and output deviation (or failure) probabilities. The solutions of these optimization problems depend non-trivially (even non-monotonically and discontinuously) upon the specified legacy data. Furthermore, the extreme values are often determined by only a few members of the data set; in our principal physically-motivated example, the bounds are determined by just 2 out of 32 data points, and the remainder carry no information and could be neglected without changing the final answer. We propose an analogue of the simplex algorithm from linear programming that uses these observations to offer efficient and rigorous UQ for high-dimensional systems with high-cardinality legacy data. These findings suggest natural methods for selecting optimal (maximally informative) next experiments.

    Comment: 38 pages
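
    A one-dimensional illustration of how such bounds hinge on a few observations: for an L-Lipschitz function observed at legacy data points, the tightest pointwise envelope is given by simple max/min formulas, and only the data points attaining the extremum are active. This is not the paper's simplex-type algorithm, just a sketch of the underlying phenomenon; the data and the Lipschitz constant are hypothetical.

```python
import numpy as np

def lipschitz_envelope(x, xs, ys, L):
    """Optimal pointwise bounds on any L-Lipschitz f consistent with data.

    Every L-Lipschitz function through the observations (xs[i], ys[i])
    satisfies
        max_i (ys[i] - L*|x - xs[i]|) <= f(x) <= min_i (ys[i] + L*|x - xs[i]|),
    and these envelopes are attained, hence optimal.  Only the indices
    achieving the max/min determine the bound; the other data points
    could be discarded without changing the answer.
    """
    d = L * np.abs(x - np.asarray(xs))
    return np.max(ys - d), np.min(ys + d)

# Hypothetical legacy data: five observations of a 1-Lipschitz function.
lo, hi = lipschitz_envelope(0.5, [0.0, 0.2, 0.4, 0.7, 1.0],
                            np.array([0.0, 0.1, 0.3, 0.2, 0.4]), L=1.0)
print(lo, hi)   # bounds at x = 0.5, each set by a single data point
```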

    Criticality for the Gehring link problem

    In 1974, Gehring posed the problem of minimizing the length of two linked curves separated by unit distance. This constraint can be viewed as a measure of thickness for links, and the ratio of length over thickness as the ropelength. In this paper we refine Gehring's problem to deal with links in a fixed link-homotopy class: we prove ropelength minimizers exist and introduce a theory of ropelength criticality. Our balance criterion is a set of necessary and sufficient conditions for criticality, based on a strengthened, infinite-dimensional version of the Kuhn-Tucker theorem. We use this to prove that every critical link is C^1 with finite total curvature. The balance criterion also allows us to explicitly describe critical configurations (and presumed minimizers) for many links, including the Borromean rings. We also exhibit a surprising critical configuration for two clasped ropes: near their tips the curvature is unbounded and a small gap appears between the two components. These examples reveal the depth and richness hidden in Gehring's problem and our natural extension.

    Comment: This is the version published by Geometry & Topology on 14 November 2006
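
    A numerical sketch of the quantity being minimized, the Gehring ratio of total length to separation for a two-component polygonal link, may help fix ideas. The configuration below, two unit circles in perpendicular planes, each passing through the other's center, is the known minimizer of Gehring's original problem, with total length 4*pi at unit separation; the vertex-to-vertex separation estimate is a simplifying assumption (a faithful computation would use segment-to-segment distances).

```python
import numpy as np

def closed_length(P):
    """Length of a closed polygonal curve given as an (n, 3) array."""
    return np.linalg.norm(np.diff(np.vstack([P, P[:1]]), axis=0), axis=1).sum()

def gehring_ratio(P, Q):
    """Total length divided by the separation of the two components.

    Gehring's constraint keeps the components at distance >= 1; here the
    separation is estimated as the minimum vertex-to-vertex distance.
    """
    sep = np.min(np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1))
    return (closed_length(P) + closed_length(Q)) / sep

# Two unit circles in perpendicular planes, each through the other's
# center: the minimizing configuration for Gehring's problem.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
P = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
Q = np.stack([1 + np.cos(t), np.zeros_like(t), np.sin(t)], axis=1)
print(gehring_ratio(P, Q))   # ~ 4*pi ~ 12.57
```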

    Aerothermal modeling program. Phase 2, element B: Flow interaction experiment

    NASA has instituted an extensive effort to improve the design process and database for the hot-section components of gas turbine engines. The purpose of element B is to establish a benchmark-quality data set consisting of measurements of the interaction of circular jets with swirling flow. Such flows are typical of those that occur in the primary zone of modern annular combustion liners. Extensive computations of the swirling flows are to be compared with the measurements in order to assess the accuracy of current physical models used to predict such flows.

    Human immunodeficiency virus infection of the human thymus and disruption of the thymic microenvironment in the SCID-hu mouse.

    Infection with the human immunodeficiency virus (HIV) results in immunosuppression and depletion of circulating CD4+ T cells. Since the thymus is the primary organ in which T cells mature, it is of interest to examine the effects of HIV infection in this tissue. HIV infection has been demonstrated in the thymuses of infected individuals, and thymocytes have previously been shown to be susceptible to HIV infection both in vivo, using the SCID-hu mouse, and in vitro. The present study sought to determine which subsets of thymocytes were infected in the SCID-hu mouse model and to evaluate HIV-related alterations in the thymic microenvironment. Using two different primary HIV isolates, infection was found in CD4+/CD8+ double-positive thymocytes as well as in both the CD4+ and CD8+ single-positive subsets. The kinetics of infection and the resulting viral burden differed among the three thymocyte subsets and depended on which HIV isolate was used for infection. Thymic epithelial (TE) cells were also shown to endocytose virus and often to contain copious amounts of viral RNA in the cytoplasm by in situ hybridization, although productive infection of these cells could not be definitively shown. Furthermore, degenerating TE cells were observed even without detection of HIV in the degenerating cells. Two striking morphologic patterns of infection were seen, involving either predominantly thymocyte infection and depletion, or TE cell involvement with detectable cytoplasmic viral RNA and/or TE cell toxicity. Thus, a variety of cells in the human thymus are susceptible to HIV infection, and infection with HIV results in a marked disruption of the thymic microenvironment, leading to depletion of thymocytes and degeneration of TE cells.