
    Relatively Complete Verification of Probabilistic Programs: An Expressive Language for Expectation-Based Reasoning

    We study a syntax for specifying quantitative “assertions” - functions mapping program states to numbers - for probabilistic program verification. We prove that our syntax is expressive in the following sense: given any probabilistic program C, if a function f is expressible in our syntax, then the function mapping each initial state σ to the expected value of f evaluated in the final states reached after termination of C on σ (also called the weakest preexpectation wp[C](f)) is also expressible in our syntax. As a consequence, we obtain a relatively complete verification system for verifying expected values and probabilities in the sense of Cook: apart from a single reasoning step about the inequality of two functions given as syntactic expressions in our language, given f, g, and C, we can check whether g ≤ wp[C](f).
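    As a quick illustration of expectation-based reasoning (a standard weakest-preexpectation calculation, not an example taken from the paper), consider a fair probabilistic choice written in pGCL-style notation: wp averages the postexpectation f over the two branches.

        % Illustrative sketch: wp of a fair probabilistic choice between c := 0 and c := 1.
        % f[c/v] denotes f with v substituted for c.
        \[
          \mathrm{wp}\big[\{c := 0\}\,[1/2]\,\{c := 1\}\big](f)
            \;=\; \tfrac{1}{2}\, f[c/0] + \tfrac{1}{2}\, f[c/1]
        \]
        % Taking f = c yields 1/2: the probability that c = 1 after termination.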

    A Pre-expectation Calculus for Probabilistic Sensitivity

    Sensitivity properties describe how changes to the input of a program affect the output, typically by upper bounding the distance between the outputs of two runs by a monotone function of the distance between the corresponding inputs. When programs are probabilistic, the distance between outputs is a distance between distributions. The Kantorovich lifting provides a general way of defining a distance between distributions by lifting the distance of the underlying sample space; by choosing an appropriate distance on the base space, one can recover other usual probabilistic distances, such as the Total Variation distance. We develop a relational pre-expectation calculus to upper bound the Kantorovich distance between two executions of a probabilistic program. We illustrate our methods by proving algorithmic stability of a machine learning algorithm, convergence of a reinforcement learning algorithm, and fast mixing for card shuffling algorithms. We also consider some extensions: using our calculus to show convergence of Markov chains to the uniform distribution over states, and an asynchronous extension to reason about pairs of program executions with different control flow.
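    For reference, the Kantorovich lifting mentioned above is the standard optimal-transport construction (this sketch states the textbook definition, not the paper's relational calculus): the distance between two distributions is the smallest expected base-space distance achievable by any coupling.

        % Kantorovich distance between distributions \mu and \nu over a space X with base distance d;
        % \Gamma(\mu,\nu) is the set of couplings: joint distributions whose marginals are \mu and \nu.
        \[
          \mathcal{K}_d(\mu,\nu) \;=\; \inf_{\gamma \in \Gamma(\mu,\nu)} \int_{X \times X} d(x,y)\,\mathrm{d}\gamma(x,y)
        \]
        % With the discrete distance d(x,y) = [x \neq y], this recovers the Total Variation distance.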

    'MRI-negative PET-positive' temporal lobe epilepsy (TLE) and mesial TLE differ with quantitative MRI and PET: a case control study

    Background: 'MRI negative PET positive temporal lobe epilepsy' represents a substantial minority of temporal lobe epilepsy (TLE). Clinicopathological and qualitative imaging differences from mesial temporal lobe epilepsy are reported. We aimed to compare TLE with hippocampal sclerosis (HS+ve) and non-lesional TLE without HS (HS-ve) on MRI, with respect to quantitative FDG-PET and MRI measures. Methods: 30 consecutive HS-ve patients with well-lateralised EEG were compared with 30 age- and sex-matched HS+ve patients with well-lateralised EEG. Cerebral, cortical lobar and hippocampal volumetric and co-registered FDG-PET metabolic analyses were performed. Results: There was no difference in whole brain, cerebral or cerebral cortical volumes. Both groups showed marginally smaller cerebral volumes ipsilateral to the epileptogenic side (HS-ve 0.99, p = 0.02; HS+ve 0.98, p < 0.001). In HS+ve, the ratio of epileptogenic cerebrum to whole brain volume was lower (p = 0.02); the ratio of epileptogenic cerebral cortex to whole brain in the HS+ve group approached significance (p = 0.06). Relative volume deficits were seen in HS+ve in the insular and temporal lobes. Both groups showed marked ipsilateral hypometabolism (p < 0.001), most marked in the temporal cortex. Mean hypointensity of the epileptogenic relative to the contralateral hippocampus was more marked in HS+ve (ratio: 0.86 vs 0.95, p < 0.001). The mean FDG-PET ratio of ipsilateral to contralateral cerebral cortex, however, was low in both groups (ratio: HS-ve 0.97, p < 0.0001; HS+ve 0.98, p = 0.003), and more marked in HS-ve across all lobes except the insula. Conclusion: Overall, HS+ve patients showed more hippocampal, but also marginally more ipsilateral cerebral and cerebrocortical atrophy, and greater ipsilateral hippocampal hypometabolism but similar ipsilateral cerebral cortical hypometabolism, confirming structural and functional differences between these groups.

    BMBF funding numbers: 03KIS036A, 03KIS036B, 03KIS036C, 03KIS036D


    Über Röntgenlumineszenz von Quarz (On the X-ray luminescence of quartz)


    Modular specification and verification of closures in Rust

    Closures are a language feature supported by many mainstream languages, combining the ability to package up references to code blocks with the possibility of capturing state from the environment of the closure's declaration. Closures are powerful, but complicate understanding and formal reasoning, especially when closure invocations may mutate objects reachable from the captured state or from closure arguments. This paper presents a novel technique for the modular specification and verification of closure-manipulating code in Rust. Our technique combines Rust's type system guarantees and novel specification features to enable formal verification of rich functional properties. It encodes higher-order concerns into a first-order logic, which enables automation via SMT solvers. Our technique is implemented as an extension of the deductive verifier Prusti, with which we have successfully verified many common idioms of closure usage.
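    To make the verification challenge concrete, here is a minimal Rust sketch (our own illustration; it does not show Prusti's actual specification syntax, and all names are hypothetical) of a closure whose invocations mutate state captured from the enclosing scope:

        // Hypothetical example: a higher-order function invoking an FnMut closure twice.
        // A modular specification must describe how each call changes the captured state.
        fn call_twice<F: FnMut() -> u32>(mut f: F) -> (u32, u32) {
            (f(), f()) // tuple fields are evaluated left to right
        }

        fn main() {
            let mut counter = 0;
            // The closure captures `counter` mutably; the property one would like to
            // verify is that each call returns the previous value plus one.
            let result = call_twice(|| {
                counter += 1;
                counter
            });
            assert_eq!(result, (1, 2));
            assert_eq!(counter, 2); // the mutation is visible once the borrow ends
        }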

    How do programmers use unsafe rust?

    Rust’s ownership type system enforces a strict discipline on how memory locations are accessed and shared. This discipline allows the compiler to statically prevent memory errors, data races, inadvertent side effects through aliasing, and other errors that frequently occur in conventional imperative programs. However, the restrictions imposed by Rust’s type system make it difficult or impossible to implement certain designs, such as data structures that require aliasing (e.g. doubly-linked lists and shared caches). To work around this limitation, Rust allows code blocks to be declared as unsafe and thereby exempted from certain restrictions of the type system, for instance, to manipulate C-style raw pointers. Ensuring the safety of unsafe code is the responsibility of the programmer. However, an important assumption of the Rust language, which we dub the Rust hypothesis, is that programmers use Rust by following three main principles: use unsafe code sparingly, make it easy to review, and hide it behind a safe abstraction such that client code can be written in safe Rust. Understanding how Rust programmers use unsafe code and, in particular, whether the Rust hypothesis holds is essential for Rust developers and testers, language and library designers, as well as tool developers. This paper studies empirically how unsafe code is used in practice by analysing a large corpus of Rust projects to assess the validity of the Rust hypothesis and to classify the purpose of unsafe code. We identify queries that can be answered by automatically inspecting the program’s source code, its intermediate representation MIR, as well as type information provided by the Rust compiler; we complement the results by manual code inspection. Our study supports the Rust hypothesis partially: while most unsafe code is simple and well-encapsulated, unsafe features are used extensively, especially for interoperability with other languages.
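    The "safe abstraction" part of the Rust hypothesis is easiest to see in code. The sketch below (simplified from the standard library's slice::split_at_mut; not an example from the paper) keeps one small, documented unsafe block behind a safe API, so client code never writes unsafe itself:

        // A safe wrapper around unsafe code: callers get two disjoint mutable halves
        // of a slice, something the borrow checker cannot grant from safe code alone.
        fn split_at_mut(values: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
            let len = values.len();
            let ptr = values.as_mut_ptr();
            assert!(mid <= len); // runtime check that upholds the safety argument below

            // SAFETY: the ranges [0, mid) and [mid, len) are disjoint, so the two
            // mutable slices never alias.
            unsafe {
                (
                    std::slice::from_raw_parts_mut(ptr, mid),
                    std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
                )
            }
        }

        fn main() {
            let mut data = [1, 2, 3, 4, 5];
            let (left, right) = split_at_mut(&mut data, 2);
            left[0] = 10;
            right[0] = 30;
            assert_eq!(data, [10, 2, 30, 4, 5]);
        }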