
    Processes endure, whereas events occur

    In this essay, we aim to help clarify the nature of so-called 'occurrences' by attributing distinct modes of existence and persistence to processes and events. In doing so, we break with the perdurantism claimed by DOLCE’s authors and distance ourselves from mereological analyses like those recently conducted by Guarino to distinguish between 'processes' and 'episodes'. In line with the work of Stout and Galton, we first bring (physical) processes closer to objects in their way of enduring by proposing for processes a notion of dynamic presence (contrasting with the static presence of objects). Then, on the side of events, we attribute to them the status of abstract entities by identifying them with objects of thought (of individual and collective subjects), which allows us to distinguish, for events, between existence and occurrence. We therefore identify events with psychological (or even social) endurants, which may contingently occur.

    Inferring Concise Specifications of APIs

    Modern software relies on libraries and uses them via application programming interfaces (APIs). Correct API usage, as well as many software engineering tasks, is enabled when APIs have formal specifications. In this work, we analyze the implementation of each method in an API to infer a formal postcondition. Conventional wisdom is that, if one has preconditions, then one can use the strongest postcondition predicate transformer (SP) to infer postconditions. However, SP yields postconditions that are exponentially large, which makes them difficult to use, either by humans or by tools. Our key idea is an algorithm that converts such exponentially large specifications into a form that is more concise and thus more usable. This is done by leveraging the structure of the specifications that result from the use of SP. We applied our technique to infer postconditions for over 2,300 methods in seven popular Java libraries. Our technique was able to infer specifications for 75.7% of these methods, each of which was verified using an Extended Static Checker. We also found that 84.6% of the resulting specifications were less than a quarter of a page (20 lines) in length. Our technique reduced the length of the SMT proofs needed to verify implementations by 76.7% and reduced prover execution time by 26.7%.
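
    To make the shape of such output concrete, here is a small, hypothetical Java example (not taken from the paper): an invented Accumulator class annotated with the kind of concise, JML-style postconditions that an Extended Static Checker can consume. The class, the method, and the exact annotation syntax are assumptions for illustration only; the paper's inferred specifications may be formatted differently.

        // Hypothetical illustration: a simple Java method together with concise,
        // JML-style postconditions of the kind the technique aims to infer.
        // \old(e) denotes the pre-state value of e, \result the return value.
        public final class Accumulator {
            private int total;   // running sum
            private int count;   // number of values added so far

            //@ ensures total == \old(total) + value;
            //@ ensures count == \old(count) + 1;
            //@ ensures \result == total;
            public int add(int value) {
                total += value;
                count++;
                return total;
            }
        }

    The point of the example is the contrast in size: a raw strongest-postcondition formula over the method body would enumerate its intermediate states, whereas the concise form above states only the observable effect on each field and the return value.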

    Generalized Strong Preservation by Abstract Interpretation

    Standard abstract model checking relies on abstract Kripke structures, which approximate concrete models by gluing together indistinguishable states, namely by a partition of the concrete state space. Strong preservation for a specification language L encodes the equivalence of concrete and abstract model checking of formulas in L. We show how abstract interpretation can be used to design abstract models that are more general than abstract Kripke structures. Accordingly, strong preservation is generalized to abstract interpretation-based models and precisely related to the concept of completeness in abstract interpretation. The problem of minimally refining an abstract model in order to make it strongly preserving for some language L can be formulated as a minimal domain refinement in abstract interpretation that achieves completeness w.r.t. the logical/temporal operators of L. It turns out that this refined strongly preserving abstract model always exists and can be characterized as a greatest fixed point. As a consequence, some well-known behavioural equivalences, like bisimulation, simulation and stuttering equivalence, and their corresponding partition refinement algorithms can be elegantly characterized in abstract interpretation as completeness properties and refinements.
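
    As a rough illustration of the partition-based view of abstraction described above, the following minimal Java sketch glues together indistinguishable states of a finite Kripke structure by naive partition refinement, iterating to a fixed point. The class name, method names, and state encoding are invented for illustration; the paper's abstract-interpretation framework is strictly more general than this partition-based special case, and efficient partition refinement algorithms are considerably more sophisticated.

        import java.util.*;

        // Minimal sketch (not the paper's algorithm): states that cannot be told
        // apart by their labels or by the blocks their successors reach are kept
        // in the same block; the stable partition reached at the fixed point is
        // (strong) bisimulation equivalence, and each block becomes one abstract state.
        public final class PartitionRefinement {

            // states 0..n-1, succ.get(s) = successors of s, label[s] = atomic-proposition label of s
            public static List<Set<Integer>> refine(int n, List<Set<Integer>> succ, int[] label) {
                // initial partition: group states carrying the same label
                Map<Integer, Set<Integer>> byLabel = new HashMap<>();
                for (int s = 0; s < n; s++) {
                    byLabel.computeIfAbsent(label[s], k -> new HashSet<>()).add(s);
                }
                List<Set<Integer>> partition = new ArrayList<>(byLabel.values());

                boolean changed = true;
                while (changed) {                        // fixed-point iteration
                    changed = false;
                    List<Set<Integer>> next = new ArrayList<>();
                    for (Set<Integer> block : partition) {
                        // split the block: two states stay together only if their
                        // successors hit exactly the same set of current blocks
                        Map<Set<Integer>, Set<Integer>> split = new HashMap<>();
                        for (int s : block) {
                            Set<Integer> signature = new HashSet<>();
                            for (int t : succ.get(s)) signature.add(blockIndex(partition, t));
                            split.computeIfAbsent(signature, k -> new HashSet<>()).add(s);
                        }
                        if (split.size() > 1) changed = true;
                        next.addAll(split.values());
                    }
                    partition = next;
                }
                return partition;
            }

            private static int blockIndex(List<Set<Integer>> partition, int state) {
                for (int i = 0; i < partition.size(); i++)
                    if (partition.get(i).contains(state)) return i;
                throw new IllegalArgumentException("state not in partition: " + state);
            }
        }

    Read against the abstract, the refinement loop is the concrete counterpart of the domain refinement that restores completeness: blocks are split only as much as the distinguishing power of the language requires.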

    What particle verbs have to do with grammatical aspect in early child English

    The current study investigates the relation between aspect and particle verbs in the acquisition of English. Its purpose is to determine whether children associate telicity, as argued in previous studies, or rather perfectivity, which entails completion of a telic situation, with their early particle verb use. The study analyzes naturalistic CHILDES data from four monolingual children between the ages of 1;6 and 3;8 acquiring English as their first language. On the one hand, it finds that children use both -ed and irregular perfective morphology with simplex verbs before they do so with particle verbs. They further use imperfective before perfective morphology with particle verbs. These findings suggest that there is no correlation between telic particle verbs and perfective morphology, as would have been predicted by an account which claims that the lexical aspect of predicates guides the acquisition of grammatical aspect (Olsen & Weinberg 1999). On the other hand, the study finds that the children’s particle verbs denote telic situations from early on, but fewer than half of them were used to refer to situations that were also completed. This finding calls into question analyses which claim that, at an initial stage, children will only interpret predicates as telic if they refer to situations that are at the same time completed. Completion information is not necessary for children to use particle verbs correctly for telic situations, as would have been predicted by an extended account along the lines of Wagner (2001). In conclusion, it is suggested that the divergent findings result from a difference in methodology: while restrictions of perfective and imperfective morphology to particular classes of lexical aspect pertain to the production of grammatical aspect morphology, perfective and imperfective viewpoints on situations pertain to the level of interpretation of telic and atelic situations.

    Monitoring-Oriented Programming: A Tool-Supported Methodology for Higher Quality Object-Oriented Software

    This paper presents a tool-supported methodological paradigm for object-oriented software development, called monitoring-oriented programming and abbreviated MOP, in which runtime monitoring is a basic software design principle. The general idea underlying MOP is that software developers insert specifications into their code via annotations. Actual monitoring code is automatically synthesized from these annotations before compilation and integrated at appropriate places in the program, according to user-defined configuration attributes. This way, the specification is checked at runtime against the implementation. Moreover, violations and/or validations of specifications can trigger user-defined code at any point in the program, in particular recovery code, code that outputs or sends messages, or code that raises exceptions. The MOP paradigm does not promote or enforce any specific formalism for specifying requirements: it allows users to plug in their favorite or domain-specific specification formalisms via logic plug-in modules. There are two major technical challenges that MOP-supporting tools unavoidably face: monitor synthesis and monitor integration. The former is heavily dependent on the specification formalism and comes as part of the corresponding logic plug-in, while the latter is uniform across specification formalisms and depends only on the target programming language. An experimental prototype tool, called Java-MOP, is also discussed, which currently supports most but not all of the desired MOP features. MOP aims at reducing the gap between formal specification and implementation by integrating the two and allowing them together to form a system.
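
    To give a flavour of what synthesized monitoring code can look like at runtime, here is a hand-written Java sketch of a monitor for the informal property "next() may only be called after hasNext() has returned true" on an iterator. The wrapper class below is invented purely for illustration: a MOP tool such as Java-MOP would generate and weave equivalent checking code automatically from a user annotation, and its actual specification syntax and integration mechanism are not reproduced here.

        import java.util.Iterator;

        // Hand-written sketch of the kind of monitor a MOP tool could synthesize
        // from a specification like "next() only after hasNext() returned true".
        public final class MonitoredIterator<E> implements Iterator<E> {
            private final Iterator<E> delegate;
            private boolean hasNextObserved = false;   // monitor state

            public MonitoredIterator(Iterator<E> delegate) { this.delegate = delegate; }

            @Override public boolean hasNext() {
                boolean result = delegate.hasNext();
                hasNextObserved = result;              // observed event: hasNext() returned result
                return result;
            }

            @Override public E next() {
                if (!hasNextObserved) {
                    // violation handler: user-defined code; here we simply raise an exception
                    throw new IllegalStateException("specification violated: next() without prior hasNext()");
                }
                hasNextObserved = false;               // require a fresh hasNext() before the next call
                return delegate.next();
            }
        }

    In the MOP setting, the monitor state, the event observation points, and the violation handler would all be derived from the annotation and its logic plug-in rather than written by hand as above.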