
    Quantifying dependencies for sensitivity analysis with multivariate input sample data

    We present a novel method for quantifying dependencies in multivariate datasets, based on estimating the Rényi entropy by minimum spanning trees (MSTs). The length of the MSTs can be used to order pairs of variables from strongly to weakly dependent, making it a useful tool for sensitivity analysis with dependent input variables. It is well-suited for cases where the input distribution is unknown and only a sample of the inputs is available. We introduce an estimator to quantify dependency based on the MST length, and investigate its properties with several numerical examples. To reduce the computational cost of constructing the exact MST for large datasets, we explore methods to compute approximations to the exact MST, and find the multilevel approach introduced recently by Zhong et al. (2015) to be the most accurate. We apply our proposed method to an artificial test case based on the Ishigami function, as well as to a real-world test case involving sediment transport in the North Sea. The results are consistent with prior knowledge and heuristic understanding, as well as with variance-based analysis using Sobol indices in the case where these indices can be computed.
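
    The core idea can be illustrated with a short sketch: rank-transform the samples of a pair of variables, build the Euclidean MST of the resulting points, and use the (normalized) MST length as a dependence score, with shorter trees indicating stronger dependence. This is only an illustrative approximation of the MST-based Rényi-entropy estimator described in the abstract, not the authors' implementation; the sqrt(n) normalization is an assumption.

```python
# Illustrative sketch: rank MST length as a dependence score for a pair of variables.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.stats import rankdata

def mst_dependence_score(x, y):
    """Smaller scores indicate stronger dependence between x and y."""
    n = len(x)
    # Rank-transform the marginals to [0, 1] so the score reflects only the joint structure.
    pts = np.column_stack([rankdata(x) / n, rankdata(y) / n])
    dist = squareform(pdist(pts))          # dense matrix of pairwise Euclidean distances
    mst = minimum_spanning_tree(dist)      # sparse matrix holding the MST edge weights
    return mst.sum() / np.sqrt(n)          # MST length grows like sqrt(n) in 2-D

rng = np.random.default_rng(0)
u = rng.normal(size=1000)
print(mst_dependence_score(u, u + 0.1 * rng.normal(size=1000)))  # strong dependence: small
print(mst_dependence_score(u, rng.normal(size=1000)))            # independence: larger
```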

    Characterizing downwards closed, strongly first order, relativizable dependencies

    In Team Semantics, a dependency notion is strongly first order if every sentence of the logic obtained by adding the corresponding atoms to First Order Logic is equivalent to some first order sentence. In this work it is shown that all nontrivial dependency atoms that are strongly first order, downwards closed, and relativizable (in the sense that the relativizations of the corresponding atoms with respect to some unary predicate are expressible in terms of them) are definable in terms of constancy atoms. Additionally, it is shown that any strongly first order dependency is safe for any family of downwards closed dependencies, in the sense that every sentence of the logic obtained by adding to First Order Logic both the strongly first order dependency and the downwards closed dependencies is equivalent to some sentence of the logic obtained by adding only the downwards closed dependencies.
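
    For readers unfamiliar with the notation, the constancy atom referred to above has the following standard team-semantics definition (a reference sketch in the usual notation, not quoted from the paper): a team X, i.e. a set of assignments, satisfies =(x) precisely when all of its assignments agree on the value of x.

```latex
% Team semantics of the constancy atom =(x): a team X (a set of assignments over a
% model M) satisfies =(x) iff every pair of assignments in X gives x the same value.
\[
  M \models_X {=}\!(x)
  \quad\Longleftrightarrow\quad
  \forall s, s' \in X :\ s(x) = s'(x).
\]
```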

    Optimal Uncertainty Quantification

    We propose a rigorous framework for Uncertainty Quantification (UQ) in which the UQ objectives and the assumptions/information set are brought to the forefront. This framework, which we call Optimal Uncertainty Quantification (OUQ), is based on the observation that, given a set of assumptions and information about the problem, there exist optimal bounds on uncertainties: these are obtained as values of well-defined optimization problems corresponding to extremizing probabilities of failure, or of deviations, subject to the constraints imposed by the scenarios compatible with the assumptions and information. In particular, this framework does not implicitly impose inappropriate assumptions, nor does it repudiate relevant information. Although OUQ optimization problems are extremely large, we show that under general conditions they have finite-dimensional reductions. As an application, we develop Optimal Concentration Inequalities (OCI) of Hoeffding and McDiarmid type. Surprisingly, these results show that uncertainties in input parameters, which propagate to output uncertainties in the classical sensitivity analysis paradigm, may fail to do so if the transfer functions (or probability distributions) are imperfectly known. We show how, for hierarchical structures, this phenomenon may lead to the non-propagation of uncertainties or information across scales. In addition, a general algorithmic framework is developed for OUQ and is tested on the Caltech surrogate model for hypervelocity impact and on the seismic safety assessment of truss structures, suggesting the feasibility of the framework for important complex systems. The introduction of this paper provides both an overview of the paper and a self-contained mini-tutorial about basic concepts and issues of UQ. (Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers); see SIAM Review for higher-quality figures.)
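
    A toy numerical illustration of the finite-dimensional reduction mentioned above (not the paper's algorithm): the worst-case probability of exceeding a threshold, over all distributions on [0, 1] with a prescribed mean bound, is attained by a two-point distribution, so the infinite-dimensional optimization over measures collapses to an optimization over three scalars and recovers Markov's inequality. The threshold and mean bound below are arbitrary assumptions.

```python
# Worst-case P(X >= threshold) over distributions on [0, 1] with E[X] <= mean_bound,
# optimized over two-point distributions (two support points and one weight).
from scipy.optimize import minimize

mean_bound, threshold = 0.2, 0.5    # assumed information set and failure threshold

def neg_failure_prob(z):
    x0, x1, p = z                   # support points x0, x1 and the weight p placed on x1
    return -p                       # maximize P(X >= threshold) = p

constraints = [
    {"type": "ineq", "fun": lambda z: mean_bound - ((1 - z[2]) * z[0] + z[2] * z[1])},  # mean bound
    {"type": "ineq", "fun": lambda z: z[1] - threshold},   # x1 lies in the failure region
    {"type": "ineq", "fun": lambda z: threshold - z[0]},   # x0 lies outside it
]
res = minimize(neg_failure_prob, x0=[0.0, 0.6, 0.1],
               bounds=[(0, 1), (0, 1), (0, 1)], constraints=constraints)
print(-res.fun)   # approximately 0.4 = mean_bound / threshold, i.e. Markov's inequality
```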

    Logic Programming as Constructivism

    The features of logic programming that seem unconventional from the viewpoint of classical logic can be explained in terms of constructivistic logic. We motivate and propose a constructivistic proof theory of non-Horn logic programming. Then, we apply this formalization to establish results of practical interest. First, we show that 'stratification' can be motivated in a simple and intuitive way. Relying on similar motivations, we introduce the larger classes of 'loosely stratified' and 'constructively consistent' programs. Second, we give a formal basis for introducing quantifiers into queries and logic programs by defining 'constructively domain independent' formulas. Third, we extend the Generalized Magic Sets procedure to loosely stratified and constructively consistent programs, by relying on a 'conditional fixpoint' procedure.
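
    For context, the classical syntactic stratification condition that the abstract generalizes can be checked mechanically: build the predicate dependency graph of a normal logic program and require that no cycle pass through a negated body literal. The sketch below is a standard check written for illustration, not code from the paper.

```python
# Classical stratifiability test on the predicate dependency graph.
import networkx as nx

def is_stratifiable(rules):
    """rules: list of (head_predicate, [(body_predicate, is_negated), ...])."""
    g = nx.DiGraph()
    for head, body in rules:
        for pred, negated in body:
            # edge pred -> head, marked negative if any occurrence of pred is negated
            was_negative = g.get_edge_data(pred, head, default={"neg": False})["neg"]
            g.add_edge(pred, head, neg=negated or was_negative)
    # stratifiable iff no strongly connected component contains a negative edge
    for scc in nx.strongly_connected_components(g):
        if any(g.has_edge(u, v) and g[u][v]["neg"] for u in scc for v in scc):
            return False
    return True

# p(X) <- q(X), not p(X): recursion through negation, hence not stratifiable.
print(is_stratifiable([("p", [("q", False), ("p", True)])]))                   # False
# reach/unreach: negation only applies to an already-computed stratum, so this is fine.
print(is_stratifiable([("reach",   [("edge", False), ("reach", False)]),
                       ("unreach", [("node", False), ("reach", True)])]))      # True
```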

    A correct, precise and efficient integration of set-sharing, freeness and linearity for the analysis of finite and rational tree languages

    It is well known that freeness and linearity information positively interact with aliasing information, allowing both the precision and the efficiency of the sharing analysis of logic programs to be improved. In this paper, we present a novel combination of set-sharing with freeness and linearity information, which is characterized by an improved abstract unification operator. We provide a new abstraction function and prove the correctness of the analysis for both the finite tree and the rational tree cases. Moreover, we show that the same notion of redundant information as identified in Bagnara et al. (2000) and Zaffanella et al. (2002) also applies to this abstract domain combination: this allows for the implementation of an abstract unification operator running in polynomial time and achieving the same precision on all the considered observable properties.
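
    As a rough intuition for the abstraction underlying set-sharing (an illustrative sketch only, not the paper's improved operator): each run-time variable induces a "sharing group", the set of program variables whose current bindings contain it, while freeness and linearity are tracked as separate sets of variables alongside these groups.

```python
def sharing_groups(bindings):
    """bindings: dict mapping each program variable to the set of variables
    occurring in the term it is currently bound to (a free variable maps to itself)."""
    occurring = set().union(*bindings.values())
    groups = set()
    for v in occurring:
        # all program variables whose bindings mention the run-time variable v share through v
        groups.add(frozenset(x for x, vars_of_x in bindings.items() if v in vars_of_x))
    return groups

# x = f(u, u), y = g(u, w), z free: u makes x and y share, w occurs only in y's binding.
bindings = {"x": {"u"}, "y": {"u", "w"}, "z": {"z"}}
print(sharing_groups(bindings))
# {frozenset({'x', 'y'}), frozenset({'y'}), frozenset({'z'})}
# Freeness: {z} (still a variable); linearity: {y, z} (x is bound to f(u, u), a repeated variable).
```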

    OpenTURNS: An industrial software for uncertainty quantification in simulation

    The need to assess robust performance for complex systems and to comply with tighter regulatory processes (security, safety, environmental control, health impacts, etc.) has led to the emergence of a new industrial simulation challenge: taking uncertainties into account when dealing with complex numerical simulation frameworks. A generic methodology has therefore emerged from the joint effort of several industrial companies and academic institutions. EDF R&D, Airbus Group and Phimeca Engineering started a collaboration at the beginning of 2005, joined by IMACS in 2014, for the development of an Open Source software platform dedicated to uncertainty propagation by probabilistic methods, named OpenTURNS for Open source Treatment of Uncertainty, Risk 'N Statistics. OpenTURNS addresses the specific industrial challenges attached to uncertainties, which are transparency, genericity, modularity and multi-accessibility. This paper focuses on OpenTURNS and presents its main features: OpenTURNS is open source software under the LGPL license, presented as a C++ library with a Python TUI, and it works under Linux and Windows environments. All the methodological tools are described in the different sections of this paper: uncertainty quantification, uncertainty propagation, sensitivity analysis and metamodeling. A section also explains the generic wrapper mechanism used to link OpenTURNS to any external code. The paper illustrates the methodological tools as much as possible on an educational example that simulates the height of a river and compares it to the height of a dyke that protects industrial facilities. Finally, it gives an overview of the main developments planned for the next few years.
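
    A minimal sketch of a Monte Carlo uncertainty-propagation study in the OpenTURNS Python interface, in the spirit of the river/dyke example mentioned in the abstract. The simplified hydraulic model, all distribution parameters and the dyke height below are assumptions chosen for illustration, not the values used in the paper.

```python
import numpy as np
import openturns as ot

# Input distributions: flow rate Q, Strickler coefficient Ks, downstream/upstream bed levels.
Q  = ot.Normal(1013.0, 558.0)   # hypothetical parameters
Ks = ot.Normal(30.0, 7.5)
Zv = ot.Uniform(49.0, 51.0)
Zm = ot.Uniform(54.0, 56.0)
inputs = ot.ComposedDistribution([Q, Ks, Zv, Zm])

def height(x):
    q, ks, zv, zm = x
    # simplified Manning-Strickler water depth over a 5000 m reach of width 300 m
    return [(max(q, 0.0) / (max(ks, 1.0) * 300.0 * np.sqrt((zm - zv) / 5000.0))) ** 0.6]

model = ot.PythonFunction(4, 1, height)
Y = ot.CompositeRandomVector(model, ot.RandomVector(inputs))

sample = np.array(Y.getSample(10000)).ravel()
dyke_height = 3.0   # hypothetical dyke height in metres above the river bed
print("mean water depth:", sample.mean())
print("P(depth > dyke):", (sample > dyke_height).mean())
```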

    Hiding variables when decomposing specifications into GR(1) contracts

    We propose a method for eliminating variables from component specifications during the decomposition of GR(1) properties into contracts. The variables that can be eliminated are identified by parameterizing the communication architecture to investigate the dependence of realizability on the availability of information. We prove that the selected variables can be hidden from other components, while still expressing the resulting specification as a game with full information with respect to the remaining variables. The values of other variables need not be known all the time, so we hide them for part of the time, thus reducing the amount of information that needs to be communicated between components. We improve on our previous results on algorithmic decomposition of GR(1) properties, and prove existence of decompositions in the full information case. We use semantic methods of computation based on binary decision diagrams. To recover the constructed specifications so that humans can read them, we implement exact symbolic minimal covering over the lattice of integer orthotopes, thus deriving minimal formulae in disjunctive normal form over integer variable intervals.
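
    A small illustration (not the paper's symbolic minimal-covering algorithm) of the final step described above: each integer orthotope in a computed cover becomes one conjunctive clause of interval constraints, and the cover as a whole becomes a formula in disjunctive normal form over the integer variables. The orthotope values below are hypothetical.

```python
def orthotope_to_clause(box):
    """box: dict mapping a variable name to an inclusive integer interval (lo, hi)."""
    return " /\\ ".join(f"{lo} <= {var} <= {hi}" for var, (lo, hi) in sorted(box.items()))

def cover_to_dnf(cover):
    # one disjunct per orthotope in the cover
    return " \\/ ".join(f"({orthotope_to_clause(box)})" for box in cover)

cover = [{"x": (0, 3), "y": (2, 5)}, {"x": (4, 7), "y": (0, 1)}]
print(cover_to_dnf(cover))
# (0 <= x <= 3 /\ 2 <= y <= 5) \/ (4 <= x <= 7 /\ 0 <= y <= 1)
```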