
    Gradual Liquid Type Inference

    Liquid typing provides a decidable refinement inference mechanism that is convenient but subject to two major issues: (1) inference is global and requires top-level annotations, making it unsuitable for inference of modular code components and prohibiting its applicability to library code, and (2) inference failure results in obscure error messages. These difficulties seriously hamper the migration of existing code to use refinements. This paper shows that gradual liquid type inference---a novel combination of liquid inference and gradual refinement types---addresses both issues. Gradual refinement types, which support imprecise predicates that are optimistically interpreted, can be used in argument positions to constrain liquid inference so that the global inference process effectively infers modular specifications usable for library components. Dually, when gradual refinements appear as the result of inference, they signal an inconsistency in the use of static refinements. Because liquid refinements are drawn from a finite set of predicates, in gradual liquid type inference we can enumerate the safe concretizations of each imprecise refinement, i.e., the static refinements that justify why a program is gradually well-typed. This enumeration is useful for static liquid type error explanation, since the safe concretizations exhibit all the potential inconsistencies that lead to static type errors. We develop the theory of gradual liquid type inference and explore its pragmatics in the setting of Liquid Haskell.
    Comment: To appear at OOPSLA 2018
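    As an illustration only (not code from the paper), a Liquid Haskell refinement annotation restricts a function's inputs with a logical predicate; a gradual variant could mark a predicate as imprecise, here written with a hypothetical ? following the gradual-refinement literature:

        module Div where

        -- A precise liquid refinement: callers must prove the divisor non-zero.
        {-@ safeDiv :: Int -> {v:Int | v /= 0} -> Int @-}
        safeDiv :: Int -> Int -> Int
        safeDiv x y = x `div` y

        -- Under gradual liquid typing, a library author could instead write an
        -- imprecise refinement such as {v:Int | ?} and let inference enumerate
        -- the safe concretizations (e.g. v /= 0) that keep clients of safeDiv
        -- well-typed.  (The ? syntax here is illustrative, not checked code.)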

    Training methods for facial image comparison: a literature review

    This literature review was commissioned to explore the psychological literature relating to facial image comparison, with a particular emphasis on whether individuals can be trained to improve performance on this task. Surprisingly few studies have addressed this question directly. As a consequence, this review has been extended to cover training of face recognition and training of different kinds of perceptual comparisons where we are of the opinion that the methodologies or findings of such studies are informative.
    The majority of studies of face processing have examined face recognition, which relies heavily on memory. This may be memory for a face that was learned recently (e.g. minutes or hours previously) or for a face learned longer ago, perhaps after many exposures (e.g. friends, family members, celebrities). Successful face recognition, irrespective of the type of face, relies on the ability to retrieve the to-be-recognised face from long-term memory. This memory is then compared to the physically present image to reach a recognition decision. In contrast, in a face-matching task two physical representations of a face (live, photographs, movies) are compared, and so long-term memory is not involved. Because the comparison is between two present stimuli rather than between a present stimulus and a memory, one might expect that face matching, even if not an easy task, would be easier to do and easier to learn than face recognition. In support of this, there is evidence that judgment tasks where a presented stimulus must be judged against a remembered standard are generally more cognitively demanding than judgments that require comparing two presented stimuli (Davies & Parasuraman, 1982; Parasuraman & Davies, 1977; Warm & Dember, 1998).
    Is there enough overlap between face recognition and face matching that it is useful to look at the recognition literature? No study has directly compared face recognition and face matching, so we turn to research in which people decided whether two non-face stimuli were the same or different. In these studies, accuracy of comparison is not always better when the comparator is present than when it is remembered. Further, all perceptual factors that were found to affect comparisons of simultaneously presented objects also affected comparisons of successively presented objects in qualitatively the same way. Those studies involved judgments about colour (Newhall, Burnham & Clark, 1957; Romero, Hita & Del Barco, 1986) and shape (Larsen, McIlhagga & Bundesen, 1999; Lawson, Bülthoff & Dumbell, 2003; Quinlan, 1995). Although one must be cautious in generalising from studies of object processing to studies of face processing (see, e.g., the section comparing face processing to object processing), from these kinds of studies there is no evidence to suggest that there are qualitative differences in the perceptual aspects of how recognition and matching are done. As a result, this review will include studies of face recognition skill as well as face matching skill.
    The distinction between face recognition involving memory and face matching not involving memory is clouded in many recognition studies which require observers to decide which of many presented faces matches a remembered face (e.g., eyewitness studies). And of course there are other forensic face-matching tasks that will require comparison to both presented and remembered comparators (e.g., deciding whether any person in a video showing a crowd is the target person). For this reason, too, we choose to include studies of face recognition as well as face matching in our review.

    Who watches the watchers: Validating the ProB Validation Tool

    Over the years, ProB has moved from a tool that complemented proving to a development environment that is now sometimes used instead of proving for applications such as exhaustive model checking or data validation. This has led to much more stringent requirements on the integrity of ProB. In this paper we present a summary of our validation efforts for ProB, in particular within the context of the norm EN 50128 and safety-critical applications in the railway domain.
    Comment: In Proceedings F-IDE 2014, arXiv:1404.578

    Rodin: an open toolset for modelling and reasoning in Event-B

    Event-B is a formal method for system-level modelling and analysis. Key features of Event-B are the use of set theory as a modelling notation, the use of refinement to represent systems at different abstraction levels, and the use of mathematical proof to verify consistency between refinement levels. In this article we present the Rodin modelling tool that seamlessly integrates modelling and proving. We outline how the Event-B language was designed to facilitate proof and how the tool has been designed to support changes to models while minimising the impact of changes on existing proofs. We outline the important features of the prover architecture and explain how well-definedness is treated. The tool is extensible and configurable so that it can be adapted more easily to different application domains and development methods.
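    To make the refinement-proof idea concrete, a typical Event-B guard-strengthening obligation (a standard form, sketched here for illustration rather than quoted from the article) requires that whenever a concrete event is enabled, its abstract counterpart is enabled too, under the invariants linking the two levels:

        % I(v): abstract invariant over abstract variables v
        % J(v, w): gluing invariant relating v to concrete variables w
        % G(v): abstract guard, H(w): concrete guard
        \[
          I(v) \land J(v, w) \land H(w) \;\Rightarrow\; G(v)
        \]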

    Boost the Impact of Continuous Formal Verification in Industry

    Software model checking has experienced significant progress in the last two decades; however, one of its major bottlenecks for practical applications remains its scalability and adaptability. Here, we describe an approach to integrate software model checking techniques into the DevOps culture by exploiting practices such as continuous integration and regression tests. In particular, our proposed approach looks at the modifications to the software system since its last verification and submits them to a continuous formal verification process, guided by a set of regression test cases. Our vision is to focus on the developer in order to integrate formal verification techniques into the developer workflow by using their main software development methodologies and tools.
    Comment: 7 pages
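    A minimal sketch of the change-driven idea (not the authors' implementation): ask git for the files changed since the last verified revision and re-run a bounded model checker, here CBMC, on just those files. The "last-verified" tag, and the choice of checker and flags, are assumptions for illustration.

        -- Sketch: re-verify only what changed since the last verified revision.
        import System.Process (readProcess, callProcess)
        import Data.List (isSuffixOf)

        main :: IO ()
        main = do
          -- files touched between the hypothetical "last-verified" tag and HEAD
          out <- readProcess "git" ["diff", "--name-only", "last-verified", "HEAD"] ""
          let changed = filter (".c" `isSuffixOf`) (lines out)
          mapM_ verify changed

        verify :: FilePath -> IO ()
        verify f = do
          putStrLn ("re-checking " ++ f)
          -- bounded unwinding keeps each check fast enough for a CI job;
          -- callProcess raises on a non-zero exit, failing the pipeline
          callProcess "cbmc" [f, "--unwind", "8", "--bounds-check", "--pointer-check"]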

    Lattice-Based Group Signatures: Achieving Full Dynamicity (and Deniability) with Ease

    In this work, we provide the first lattice-based group signature that offers full dynamicity (i.e., users have the flexibility of joining and leaving the group), and thus resolve a prominent open problem posed by previous works. Moreover, we achieve this non-trivial feat in a relatively simple manner. Starting with Libert et al.'s fully static construction (EUROCRYPT 2016) - which is arguably the most efficient lattice-based group signature to date - we introduce simple-but-insightful tweaks that allow us to upgrade it directly into the fully dynamic setting. Somewhat surprisingly, our scheme even produces slightly shorter signatures than the former, thanks to an adaptation of a technique proposed by Ling et al. (PKC 2013), allowing us to prove inequalities in zero-knowledge without relying on any inequality check. The scheme satisfies the strong security requirements of Bootle et al.'s model (ACNS 2016), under the Short Integer Solution (SIS) and the Learning With Errors (LWE) assumptions. Furthermore, we demonstrate how to equip the obtained group signature scheme with the deniability functionality in a simple way. This attractive functionality, put forward by Ishida et al. (CANS 2016), enables the tracing authority to provide evidence that a given user is not the owner of a signature in question. In the process, we design a zero-knowledge protocol for proving that a given LWE ciphertext does not decrypt to a particular message.
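    For reference, the two underlying assumptions in their standard form (generic statements, not the paper's specific parameter choices):

        % SIS: given a uniformly random matrix A, find a short nonzero
        % integer combination of its columns that vanishes mod q.
        \[
          \mathrm{SIS}_{n,m,q,\beta}:\ \text{given } A \in \mathbb{Z}_q^{n \times m},\
          \text{find } z \in \mathbb{Z}^m \setminus \{0\} \text{ with }
          A z = 0 \bmod q \text{ and } \lVert z \rVert \le \beta.
        \]
        % LWE: noisy random linear equations in a secret s are
        % computationally indistinguishable from uniform.
        \[
          \mathrm{LWE}_{n,q,\chi}:\ (A,\ A s + e) \approx_c (A,\ u), \quad
          s \leftarrow \mathbb{Z}_q^n,\ e \leftarrow \chi^m,\ u \leftarrow \mathbb{Z}_q^m.
        \]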