
    Theorem proving support in programming language semantics

    We describe several views of the semantics of a simple programming language as formal documents in the calculus of inductive constructions that can be verified by the Coq proof system. The covered aspects are natural semantics, denotational semantics, axiomatic semantics, and abstract interpretation. Descriptions as recursive functions are also provided whenever suitable, thus yielding a verification condition generator and a static analyser that can be run inside the theorem prover for use in reflective proofs. Extraction of an interpreter from the denotational semantics is also described. All aspects are formally proved sound with respect to the natural semantics specification. Comment: Proposed for publication in the volume in memory of Gilles Kahn.
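
    As a rough illustration of the natural-semantics view (not the authors' Coq development), a big-step evaluator for a toy imperative language might look like the Python sketch below; the tuple-based AST encoding and all names are invented for this example.

        # A minimal big-step (natural semantics) evaluator for a toy imperative
        # language: skip, assignment, sequencing, and while loops.
        # Illustrative sketch only, not the paper's Coq formalization.

        def eval_aexp(e, env):
            """Evaluate an arithmetic expression against an environment (a dict)."""
            kind = e[0]
            if kind == "num":        # ("num", 3)
                return e[1]
            if kind == "var":        # ("var", "x")
                return env[e[1]]
            if kind == "plus":       # ("plus", e1, e2)
                return eval_aexp(e[1], env) + eval_aexp(e[2], env)
            if kind == "minus":      # ("minus", e1, e2)
                return eval_aexp(e[1], env) - eval_aexp(e[2], env)
            raise ValueError(f"unknown expression {e!r}")

        def exec_stmt(s, env):
            """Big-step execution: map a statement and an initial environment to
            the final environment, mirroring the judgment (s, env) => env'."""
            kind = s[0]
            if kind == "skip":
                return env
            if kind == "assign":     # ("assign", "x", e)
                return {**env, s[1]: eval_aexp(s[2], env)}
            if kind == "seq":        # ("seq", s1, s2)
                return exec_stmt(s[2], exec_stmt(s[1], env))
            if kind == "while":      # ("while", cond, body): loop while cond != 0
                while eval_aexp(s[1], env) != 0:
                    env = exec_stmt(s[2], env)
                return env
            raise ValueError(f"unknown statement {s!r}")

        # x := 0; while (2 - x) != 0 do x := x + 1
        prog = ("seq",
                ("assign", "x", ("num", 0)),
                ("while", ("minus", ("num", 2), ("var", "x")),
                          ("assign", "x", ("plus", ("var", "x"), ("num", 1)))))
        print(exec_stmt(prog, {}))   # {'x': 2}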

    The effect of negative polarity items on inference verification

    The scalar approach to negative polarity item (NPI) licensing assumes that NPIs are allowable in contexts in which the introduction of the NPI leads to proposition strengthening (e.g., Kadmon & Landman 1993, Krifka 1995, Lahiri 1997, Chierchia 2006). A straightforward processing prediction of such a theory is that NPIs facilitate inference verification from sets to subsets. Three experiments are reported that test this proposal. In each experiment, participants evaluated whether inferences from sets to subsets were valid. Crucially, we manipulated whether the premises contained an NPI. In Experiment 1, participants completed a metalinguistic reasoning task, and Experiments 2 and 3 tested reading times using a self-paced reading task. Contrary to expectations, no facilitation was observed when the NPI was present in the premise compared to when it was absent. In fact, the NPI significantly slowed down reading times in the inference region. Our results therefore favor those scalar theories that predict that the NPI is costly to process (Chierchia 2006), or other, nonscalar theories (Giannakidou 1998, Ladusaw 1992, Postal 2005, Szabolcsi 2004) that likewise predict an NPI processing cost but, unlike Chierchia (2006), expect the magnitude of the processing cost to vary with the actual pragmatics of the NPI.
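
    To make the set-to-subset inference pattern concrete (the downward-entailing contexts in which NPIs like "ever" are licensed), here is a small Python check that a universally negated claim over a set carries over to any subset; the sets and names are invented for illustration.

        # Sketch of a set-to-subset (downward-entailing) inference:
        # if "No student who has ever visited Europe passed" holds for the
        # full set, it must also hold for any subset (e.g. Paris visitors).

        visited_europe = {"ana", "ben", "chris", "dana"}
        visited_paris = {"ben", "dana"}            # a subset of visited_europe
        passed = {"eve", "frank"}                  # no Europe visitor passed

        def no_overlap(group, predicate_set):
            """True iff 'No member of group is in predicate_set'."""
            return group.isdisjoint(predicate_set)

        premise = no_overlap(visited_europe, passed)
        conclusion = no_overlap(visited_paris, passed)

        assert visited_paris <= visited_europe
        # Downward entailment: truth over the superset guarantees truth
        # over the subset, so the inference below never fails.
        assert (not premise) or conclusion
        print(premise, conclusion)   # True True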

    Abstract Interpretation-based verification/certification in the ciaoPP system

    CiaoPP is the abstract interpretation-based preprocessor of the Ciao multi-paradigm (Constraint) Logic Programming system. It uses modular, incremental abstract interpretation as a fundamental tool to obtain information about programs. In CiaoPP, the semantic approximations thus produced have been applied to perform high- and low-level optimizations during program compilation, including transformations such as multiple abstract specialization, parallelization, partial evaluation, resource usage control, and program verification. More recently, novel and promising applications of such semantic approximations have appeared in the more general context of program development. In this work, we describe our extension of the system to incorporate Abstraction-Carrying Code (ACC), a novel approach to mobile code safety. ACC follows the standard strategy of associating safety certificates to programs, originally proposed in Proof-Carrying Code. A distinguishing feature of ACC is that we use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We have implemented and benchmarked ACC within CiaoPP. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of certificates are reasonable. Moreover, the preprocessor is based on compile-time (and run-time) tools for the certification of CLP programs with resource consumption assurances.
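
    The core ACC idea, that a fixpoint computed by the producer's analyzer can be revalidated cheaply by the consumer, can be sketched generically: the consumer only confirms that the shipped abstraction is a post-fixpoint of the abstract transfer functions, in one pass and with no iteration. The lattice, program encoding, and function names below are hypothetical, not CiaoPP's actual interfaces.

        # Generic sketch of Abstraction-Carrying Code checking (not CiaoPP's API).
        # The certificate maps each program point to an abstract value; the
        # consumer verifies in a single pass that every transfer step stays
        # below the certified value, i.e. the certificate is a post-fixpoint.

        def leq(a, b):
            """Lattice order on abstract values; here: sets under inclusion."""
            return a <= b

        def check_certificate(program, certificate, transfer):
            """One-pass check: for every edge (src, dst, op), the abstract
            transfer applied to the certified value at src must be covered
            by the certificate at dst. No fixpoint iteration is needed."""
            for src, dst, op in program:
                if not leq(transfer(op, certificate[src]), certificate[dst]):
                    return False    # rejected: the shipped abstraction is too small
            return True

        # Tiny example: abstract values are sets of variables known to be bound.
        def transfer(op, bound_vars):
            kind, var = op
            if kind == "bind":
                return bound_vars | {var}
            return bound_vars

        program = [(0, 1, ("bind", "X")), (1, 2, ("bind", "Y"))]
        certificate = {0: set(), 1: {"X"}, 2: {"X", "Y"}}
        print(check_certificate(program, certificate, transfer))   # True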

    Use of metaknowledge in the verification of knowledge-based systems

    Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection with each form of incompleteness.
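
    One concrete instance of the consistency concern, flagging pairs of rules that can fire on compatible premises yet derive contradictory conclusions, can be sketched as follows; the rule encoding is invented for illustration and is not the paper's formal model.

        # Sketch: detect one simple form of inconsistency in a rule base,
        # namely two rules whose premises can hold simultaneously but whose
        # conclusions are complementary literals ("p" vs "-p").

        def complementary(a, b):
            return a == "-" + b or b == "-" + a

        def premises_compatible(p1, p2):
            """At this naive syntactic level, premise sets are jointly
            satisfiable if neither contains the complement of a premise
            in the other."""
            return not any(complementary(x, y) for x in p1 for y in p2)

        def find_inconsistencies(rules):
            """Rules are (name, premises, conclusion) triples."""
            issues = []
            for i, (n1, p1, c1) in enumerate(rules):
                for n2, p2, c2 in rules[i + 1:]:
                    if complementary(c1, c2) and premises_compatible(p1, p2):
                        issues.append((n1, n2))
            return issues

        rules = [
            ("r1", frozenset({"bird"}), "flies"),
            ("r2", frozenset({"penguin"}), "-flies"),  # bird & penguin can co-occur
        ]
        print(find_inconsistencies(rules))   # [('r1', 'r2')]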

    A Practical Type Analysis for Verification of Modular Prolog Programs

    Regular types are a powerful tool for computing very precise descriptive types for logic programs. However, in the context of real-life, modular Prolog programs, the accurate results obtained by regular types often come at the price of efficiency. In this paper we propose a combination of techniques aimed at improving analysis efficiency in this context. As a first technique, we allow optionally reducing the accuracy of inferred types by using only the types defined by the user or present in the libraries. We claim that, for the purpose of verifying type signatures given in the form of assertions, the precision obtained using this approach is sufficient, and we show that analysis times can be reduced significantly. Our second technique is aimed at situations where we would like to limit the amount of reanalysis performed, especially for library modules. Borrowing some ideas from polymorphic type systems, we show how to solve the problem by admitting parameters in type specifications. This allows us to compose new call patterns with precomputed analysis information without losing any information. We argue that together these two techniques contribute to the practical and scalable analysis and verification of types in Prolog programs.
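
    The second technique, parametric type specifications that let a precomputed library summary be reused for new call patterns, can be illustrated with a toy instantiation step; the term encoding and names below are hypothetical, not the paper's type system.

        # Toy sketch of parametric type summaries (hypothetical encoding).
        # A library predicate is analyzed once with a type parameter ?T; a new
        # call pattern is handled by substituting ?T instead of reanalyzing
        # the whole module.

        def instantiate(parametric_type, binding):
            """Substitute type parameters (strings starting with '?') in a
            type term represented as nested tuples, e.g. ('list', '?T')."""
            if isinstance(parametric_type, str):
                return binding.get(parametric_type, parametric_type)
            return tuple(instantiate(arg, binding) for arg in parametric_type)

        # Precomputed summary for append/3, parametric in the element type ?T:
        # append(list(?T), list(?T), list(?T))
        append_summary = (("list", "?T"), ("list", "?T"), ("list", "?T"))

        # New call pattern: the caller passes lists of integers.
        print(instantiate(append_summary, {"?T": "int"}))
        # (('list', 'int'), ('list', 'int'), ('list', 'int'))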

    Inferring Concise Specifications of APIs

    Modern software relies on libraries and uses them via application programming interfaces (APIs). Correct API usage, as well as many software engineering tasks, is enabled when APIs have formal specifications. In this work, we analyze the implementation of each method in an API to infer a formal postcondition. Conventional wisdom is that, if one has preconditions, then one can use the strongest postcondition predicate transformer (SP) to infer postconditions. However, SP yields postconditions that are exponentially large, which makes them difficult to use, either by humans or by tools. Our key idea is an algorithm that converts such exponentially large specifications into a form that is more concise and thus more usable. This is done by leveraging the structure of the specifications that result from the use of SP. We applied our technique to infer postconditions for over 2,300 methods in seven popular Java libraries. Our technique was able to infer specifications for 75.7% of these methods, each of which was verified using an Extended Static Checker. We also found that 84.6% of the resulting specifications were less than 1/4 page (20 lines) in length. Our technique reduced the length of SMT proofs needed for verifying implementations by 76.7% and reduced prover execution time by 26.7%.
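
    To see why raw SP output blows up, note that the strongest postcondition of a conditional duplicates the incoming precondition into both branches. The toy symbolic transformer below (an invented illustration, not the paper's algorithm) makes the exponential growth across sequential conditionals visible.

        # Toy strongest-postcondition transformer over formula strings, showing
        # the size blowup the paper addresses (illustrative only).

        def sp(stmt, pre):
            kind = stmt[0]
            if kind == "assume":                 # ("assume", "c")
                return f"({pre} && {stmt[1]})"
            if kind == "if":                     # ("if", cond, then_s, else_s)
                # SP of a conditional is a disjunction: the precondition text
                # is copied into both branches, doubling formula size.
                then_sp = sp(stmt[2], f"({pre} && {stmt[1]})")
                else_sp = sp(stmt[3], f"({pre} && !{stmt[1]})")
                return f"({then_sp} || {else_sp})"
            if kind == "seq":                    # ("seq", s1, s2)
                return sp(stmt[2], sp(stmt[1], pre))
            raise ValueError(stmt)

        skip = ("assume", "true")
        def cond(i):
            return ("if", f"b{i}", skip, skip)

        for n in range(1, 5):
            p = cond(0)
            for i in range(1, n):
                p = ("seq", p, cond(i))
            print(n, "conditionals ->", len(sp(p, "P")), "characters")
        # Formula length roughly doubles with each additional conditional.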

    Cognitive Processing of Verbal Quantifiers in the Context of Affirmative and Negative Sentences: a Croatian Study

    Studies of English and German have found differences in the processing of affirmative and negative sentences. However, little attention has been given to quantifiers that form negations. A picture-sentence verification task was used to investigate the processing of different types of quantifiers in Croatian: universal quantifiers in affirmative sentences (e.g. all), non-universal quantifiers in compositional negations (e.g. not all), null quantifiers in negative concord (e.g. none), and relative disproportionate quantifiers in both affirmative and negative sentences (e.g. some). The results showed that non-universal and null quantifiers, as well as negations, were processed significantly more slowly than affirmative sentences, in line with previous findings supporting the two-step model. The results also confirmed that more complex tasks require longer reaction times. A significant difference in the processing of same-polarity sentences with first-order quantifiers was observed: sentences with null quantifiers were processed faster and more accurately than sentences with disproportionate and non-universal quantifiers. A difference in reaction time was also found in affirmatives with different quantifiers: sentences with universal quantifiers were processed significantly faster and more accurately than sentences with relative disproportionate quantifiers. These findings indicate that the processing of quantifiers follows the processing of affirmative information. In the context of the two-step model, the processing of quantifiers occurs in the second step, along with negations.
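
    The verification task itself has a simple formal core: each quantifier denotes a relation between the set of depicted objects and the subset satisfying the predicate. A minimal sketch of such picture-sentence truth conditions, with invented scene data rather than the study's stimuli, follows.

        # Minimal truth-condition checker for the quantifier types in the
        # study, evaluated against a toy "picture" (illustrative data only;
        # "some" is treated existentially, a simplification of the study's
        # relative disproportionate quantifier).

        picture = {"circle1": "red", "circle2": "red", "circle3": "blue"}

        def verify(quantifier, predicate, scene):
            objs = set(scene)
            sat = {o for o in objs if predicate(scene[o])}
            if quantifier == "all":        # universal
                return sat == objs
            if quantifier == "not all":    # compositional negation
                return sat != objs
            if quantifier == "none":       # null quantifier / negative concord
                return not sat
            if quantifier == "some":       # simplified relative quantifier
                return bool(sat)
            raise ValueError(quantifier)

        is_red = lambda color: color == "red"
        for q in ("all", "not all", "none", "some"):
            print(f"'{q} circles are red':", verify(q, is_red, picture))
        # all: False, not all: True, none: False, some: True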

    SPEEDY: An Eclipse-based IDE for invariant inference

    SPEEDY is an Eclipse-based IDE for exploring techniques that assist users in generating correct specifications, particularly invariant inference algorithms and tools. It integrates with several back-end tools that propose invariants and will incorporate published algorithms for inferring object and loop invariants. Though the architecture is language-neutral, the current version of SPEEDY targets C programs. Building and using SPEEDY has confirmed earlier experience demonstrating the importance of showing and editing specifications in the IDEs that developers customarily use, of automating as much of the production and checking of specifications as possible, and of showing counterexample information directly in the source code editing environment. As in previous work, automation of specification checking is provided by back-end SMT solvers. However, reducing the effort demanded of software developers using formal methods also requires a GUI design that guides users in writing, reviewing, and correcting specifications and automates specification inference. Comment: In Proceedings F-IDE 2014, arXiv:1404.578
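
    One widely used inference scheme of the kind such a tool can host is Houdini-style candidate filtering: propose many candidate invariants, then repeatedly discard any candidate the checker refutes until a fixed point remains. The sketch below stands in for the checker with a toy state oracle; SPEEDY itself delegates such checks to SMT solvers, and all names here are invented.

        # Houdini-style candidate-invariant filtering (generic sketch; real
        # tools query an SMT solver for inductiveness instead of a test oracle).

        def filter_invariants(candidates, states):
            """Keep only candidates that hold in every observed program state.
            Refuted candidates are dropped and the loop repeats, mirroring the
            fixpoint structure of Houdini-style inference."""
            surviving = set(candidates)
            changed = True
            while changed:
                changed = False
                for inv in list(surviving):
                    if not all(inv[1](s) for s in states):
                        surviving.discard(inv)   # refuted: drop, re-check rest
                        changed = True
            return {name for name, _ in surviving}

        # Observed loop states for: i = 0; while i < 5: i += 1   (x stays 10)
        states = [{"i": i, "x": 10} for i in range(6)]
        candidates = {
            ("i >= 0",  lambda s: s["i"] >= 0),
            ("i < 5",   lambda s: s["i"] < 5),    # fails in the final state
            ("x == 10", lambda s: s["x"] == 10),
        }
        print(sorted(filter_invariants(candidates, states)))
        # ['i >= 0', 'x == 10']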

    The Effectiveness of the Guided Discovery Learning (GDL) Method Using a Contextual Approach Reviewed From Mathematical Critical Thinking Ability of Senior High School in Muna District

    This study aims to test the effectiveness of the guided discovery learning method using a contextual approach with respect to the critical thinking skills of junior high school students. The research design is a quasi-experiment. The population consists of all grade VIII students of senior high schools in Kontukowuna District, Muna Regency, Southeast Sulawesi, in the 2016/2017 academic year. Data were collected through a four-item critical thinking skills test instrument, in which each item represents an indicator of critical thinking ability, and through observation sheets on the implementation of the lessons. The results were analyzed using a one-sample t-test. The findings show a t value of 2.719 > t(0.05, 27) = 2.0518, from which it can be concluded that the guided discovery learning method is effective with respect to the critical thinking ability of junior high school students. This result is supported by an increase in the mean score from 27.66 on the pretest to 76.00 on the posttest, which exceeds the minimum mastery criterion (KKM) of 70.
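
    The reported decision rule is plain arithmetic: with degrees of freedom 27 (which implies 28 participants, an inference not stated in the abstract), the two-tailed 5% critical value is about 2.052, so the observed t = 2.719 rejects the null hypothesis. A short check of the critical value and the decision, using only the reported numbers:

        # Reproduce the abstract's t-test decision rule from its reported
        # numbers; the critical value comes from scipy's t distribution.

        from scipy import stats

        t_observed = 2.719
        df = 27          # one-sample test, so n = df + 1 = 28 (inferred)
        alpha = 0.05

        # Two-tailed critical value t(0.05, 27); the abstract reports 2.0518.
        t_critical = stats.t.ppf(1 - alpha / 2, df)
        print(f"critical value: {t_critical:.4f}")    # ~2.0518

        reject_null = abs(t_observed) > t_critical
        print("reject null hypothesis:", reject_null)  # True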