
    Generating Property-Directed Potential Invariants By Backward Analysis

    This paper addresses the issue of lemma generation in a k-induction-based formal analysis of transition systems, in the linear real/integer arithmetic fragment. A backward analysis, powered by quantifier elimination, is used to output preimages of the negation of the proof objective, viewed as unauthorized states, or gray states. Two heuristics are proposed to take advantage of this source of information. First, a thorough exploration of the possible partitionings of the gray state space discovers new relations between state variables, representing potential invariants. Second, an inexact exploration regroups and over-approximates disjoint areas of the gray state space, also to discover new relations between state variables. k-induction is used to isolate the invariants and check whether they strengthen the proof objective. These heuristics can be applied to the first preimage of the backward exploration, and again each time a new preimage is output, refining the information on the gray states. In our context of critical avionics embedded systems, we show that our approach outperforms other academic and commercial tools on examples of interest in our application field. The method is introduced and motivated through two main examples, one of which was provided by Rockwell Collins, in a collaborative formal verification framework.
    Comment: In Proceedings FTSCS 2012, arXiv:1212.657
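The interplay between k-induction and backward preimage computation described above can be illustrated on a toy finite, deterministic transition system; the example system, the property, and all names below are our own illustration, not the paper's method or benchmarks:

```python
# Toy illustration of k-induction and backward preimage analysis on a
# finite, deterministic transition system (illustrative only).

STATES = range(8)
INIT = {0}

def step(x):            # transition relation: x -> (x + 2) mod 8
    return (x + 2) % 8

def prop(x):            # proof objective P: never reach state 5
    return x != 5

def k_induction(prop, k):
    """Return True if prop is provable by k-induction."""
    # Base case: P holds on the first k states of every execution.
    for x0 in INIT:
        x = x0
        for _ in range(k):
            if not prop(x):
                return False
            x = step(x)
    # Inductive step: k consecutive P-states force P on the next state,
    # starting from ANY state, not only reachable ones.
    for x0 in STATES:
        xs = [x0]
        for _ in range(k - 1):
            xs.append(step(xs[-1]))
        if all(prop(x) for x in xs) and not prop(step(xs[-1])):
            return False
    return True

def preimage(bad):
    """Backward step: states whose successor lies in `bad` (gray states)."""
    return {x for x in STATES if step(x) in bad}

# P holds on the reachable states {0, 2, 4, 6}, yet it is not k-inductive
# for any k: unreachable odd states satisfy P but march toward 5.
assert not k_induction(prop, 1)
assert not k_induction(prop, 3)

# Backward analysis exposes the gray states and suggests a strengthening
# lemma, "x is even": the strengthened objective is 1-inductive.
assert preimage({5}) == {3}
strengthened = lambda x: prop(x) and x % 2 == 0
assert k_induction(strengthened, 1)
```

This is exactly the situation the paper targets: the objective is invariant but not inductive, and information mined from the gray states yields a lemma that makes the strengthened objective provable.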

    Modeling of Phenomena and Dynamic Logic of Phenomena

    Modeling of complex phenomena such as the mind presents tremendous computational complexity challenges. Modeling field theory (MFT) addresses these challenges in a non-traditional way. The main idea behind MFT is to match levels of uncertainty of the model (also, problem or theory) with levels of uncertainty of the evaluation criterion used to identify that model. When a model becomes more certain, the evaluation criterion is adjusted dynamically to match that change to the model. This process is called the Dynamic Logic of Phenomena (DLP) for model construction, and it mimics processes of the mind and natural evolution. This paper provides a formal description of DLP by specifying its syntax, semantics, and reasoning system. We also outline links between DLP and other logical approaches. Computational complexity issues that motivate this work are presented using an example of polynomial models.
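The matching of model uncertainty to criterion uncertainty can be sketched numerically: start with a vague (wide) evaluation criterion, and tighten it as the models sharpen. The data, the constant models, and the annealing schedule below are our illustrative choices, not from the paper:

```python
# A minimal numerical sketch of the dynamic-logic idea behind MFT:
# the uncertainty level sigma starts large and is annealed down as the
# models become more certain (illustrative only).
import math

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]   # two clusters of observations
models = [2.0, 4.0]                       # initial, highly uncertain models
sigma = 3.0                               # initial uncertainty level

for _ in range(50):
    # Soft association: each point supports each model with a weight
    # reflecting the CURRENT uncertainty of the evaluation criterion.
    weights = []
    for x in data:
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in models]
        s = sum(w)
        weights.append([wi / s for wi in w])
    # Re-estimate each model from its weighted data.
    models = [
        sum(w[j] * x for w, x in zip(weights, data)) /
        sum(w[j] for w in weights)
        for j in range(len(models))
    ]
    # Dynamic logic: tighten the criterion as the models improve.
    sigma = max(0.3, sigma * 0.9)

# After annealing, the models lock onto the two clusters.
assert abs(models[0] - 1.0) < 0.1 and abs(models[1] - 5.0) < 0.1
```

Starting with a sharp criterion instead would commit each point to the nearest initial guess immediately, illustrating why the uncertainty levels are matched and reduced gradually.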

    From spreadsheets to relational databases and back

    This paper presents techniques and tools to transform spreadsheets into relational databases and back. A set of data refinement rules is introduced to map a tabular datatype into a relational database schema. Having expressed the transformation between the two data models as data refinements, we obtain for free the functions that migrate the data. We use well-known relational database techniques to optimize and query the data. Because data refinements define bidirectional transformations, we can map such a database back to an optimized spreadsheet. We have implemented the data refinement rules and we have constructed tools to manipulate, optimize and refactor Excel-like spreadsheets.
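The core idea, a refinement from a flat tabular datatype to a normalized schema paired with its inverse, so that both migration directions come for free, can be sketched as follows; the concrete schema and helper functions are our illustration, not the paper's rule set:

```python
# A tiny sketch of a bidirectional data refinement: a flat spreadsheet
# of (artist, album) rows is normalized into two relational tables, and
# the inverse map joins them back (illustrative only).

def to_db(sheet):
    """Refine a flat (artist, album) sheet into two normalized tables."""
    artists = {}                        # artist name -> generated key
    for artist, _ in sheet:
        artists.setdefault(artist, len(artists))
    albums = [(artists[artist], album) for artist, album in sheet]
    return artists, albums

def to_sheet(artists, albums):
    """Inverse refinement: join the tables back into the flat sheet."""
    by_key = {k: name for name, k in artists.items()}
    return [(by_key[k], album) for k, album in albums]

sheet = [("Holst", "The Planets"),
         ("Bartok", "Concerto for Orchestra"),
         ("Holst", "St Paul's Suite")]

artists, albums = to_db(sheet)
assert artists == {"Holst": 0, "Bartok": 1}
# Because the refinement is bidirectional, the round trip is the identity.
assert to_sheet(artists, albums) == sheet
```

The normalized form removes the redundancy of repeated artist names, which is what makes standard relational optimization and querying applicable before mapping back to the spreadsheet view.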

    Whittaker limits of difference spherical functions

    We introduce the (global) q-Whittaker function as the limit at t=0 of the q,t-spherical function extending the symmetric Macdonald polynomials to arbitrary eigenvalues. The construction depends heavily on the technique of q-Gaussians developed by the author (and Stokman in the non-reduced case). In this approach, the q-Whittaker function is given by a series convergent everywhere, a kind of generating function for multi-dimensional q-Hermite polynomials (closely related to the level 1 Demazure characters). One of the applications is a q-version of the Shintani-Casselman-Shalika formula, which turns out to be directly connected with the q-Mehta-Macdonald identities in terms of Jackson integrals. This formula generalizes that of type A due to Gerasimov et al. to arbitrary reduced root systems. At the end of the paper, we obtain a q,t-counterpart of the Harish-Chandra asymptotic formula for spherical functions, including the Whittaker limit.
    Comment: V2: a discussion of the one-dimensional case was added. V3: Jackson integration and growth estimates were added. V4: a q-variant of the Harish-Chandra asymptotic formula for spherical functions was added. V5: editing, some improvements, added references. V6: general editing

    Fock factorizations, and decompositions of the L^2 spaces over general Levy processes

    We explicitly construct and study an isometry between the spaces of square integrable functionals of an arbitrary Levy process and a vector-valued Gaussian white noise. In particular, we obtain explicit formulas for this isometry at the level of multiplicative functionals and at the level of orthogonal decompositions, as well as find its kernel. We consider in detail the central special case: the isometry between the L^2 spaces over a Poisson process and the corresponding white noise. The key role in our considerations is played by the notion of measure and Hilbert factorizations and the related notions of multiplicative and additive functionals and logarithm. The obtained results allow us to introduce a canonical Fock structure (an analogue of the Wiener--Ito decomposition) in the L^2 space over an arbitrary Levy process. An application to the representation theory of current groups is considered. An example of a non-Fock factorization is given.
    Comment: 35 pages; LaTeX; to appear in Russian Math. Surveys
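For the Gaussian prototype (Brownian motion), the canonical Fock structure mentioned above is the classical Wiener--Ito chaos decomposition; in our notation (not the paper's), it reads

```latex
% Classical Wiener--Ito decomposition for Brownian motion on [0,1];
% the Gaussian prototype of the Fock structure in the abstract.
L^2(\Omega,\mathcal{F}^W,\mathbb{P}) \;=\; \bigoplus_{n\ge 0} H_n,
\qquad
F \;=\; \sum_{n\ge 0} I_n(f_n),
\qquad
\|F\|_{L^2}^2 \;=\; \sum_{n\ge 0} n!\,\|f_n\|_{L^2([0,1]^n)}^2,
```

where I_n denotes the n-fold multiple Wiener integral of a symmetric kernel f_n, and H_n is the closed span of such integrals. The paper's contribution is an analogue of this orthogonal structure over an arbitrary Levy process, via the isometry with vector-valued Gaussian white noise.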

    Purposive discovery of operations

    The Generate, Prune & Prove (GPP) methodology for discovering definitions of mathematical operators is introduced. GPP is a task within the IL exploration discovery system. We developed GPP for use in the discovery of mathematical operators with a wider class of representations than was possible with the previous methods by Lenat and by Shen. GPP utilizes the purpose for which an operator is created to prune the possible definitions. The relevant search spaces are immense and there exists insufficient information for a complete evaluation of the purpose constraint, so it is necessary to perform a partial evaluation of the purpose (i.e., pruning) constraint. The constraint is first transformed so that it is operational with respect to the partial information, and then it is applied to examples in order to test the generated candidates for an operator's definition. In the GPP process, once a candidate definition survives this empirical prune, it is passed on to a theorem prover for formal verification. We describe the application of this methodology to the (re)discovery of the definition of multiplication for Conway numbers, a discovery which is difficult for human mathematicians. We successfully model this discovery process utilizing information which was reasonably available at the time of Conway's original discovery. As part of this discovery process, we reduce the size of the search space from a computationally intractable size to 3468 elements.
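The generate/prune/prove pipeline can be sketched on an elementary stand-in task: (re)discovering ordinary multiplication on small naturals from a candidate space of definitions, pruned by the purpose the operator must serve. The candidate grammar, the purpose constraint, and the exhaustive-check "prover" below are our drastic simplifications, not the IL system or the Conway-number case study:

```python
# A schematic Generate, Prune & Prove loop (illustrative only).
from itertools import product

DOMAIN = range(5)
candidates = {                  # Generate: a tiny space of definitions
    "x + y":     lambda x, y: x + y,
    "x * y":     lambda x, y: x * y,
    "x + y + 1": lambda x, y: x + y + 1,
    "max(x, y)": lambda x, y: max(x, y),
}

def purpose(op, triples):
    """Operationalized purpose constraint, evaluated only on the given
    examples: the operator should distribute over addition and have 1
    as a left identity."""
    return all(op(x, y + z) == op(x, y) + op(x, z) and op(1, x) == x
               for x, y, z in triples)

# Prune: a partial, example-based evaluation of the purpose constraint.
examples = [(2, 1, 1), (3, 0, 2), (1, 2, 2)]
survivors = [n for n, op in candidates.items() if purpose(op, examples)]

# Prove: stand-in for the theorem prover -- exhaustive check on DOMAIN.
proved = [n for n in survivors
          if purpose(candidates[n], list(product(DOMAIN, repeat=3)))]
assert proved == ["x * y"]
```

The division of labor mirrors the abstract: the cheap empirical prune eliminates most candidates before the expensive verification step is ever invoked.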