
    Model checking for linear temporal logic: An efficient implementation

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models of the property. An experiment was carried out on a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
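
    The cross-product construction mentioned above can be illustrated on a toy safety property. The following is a minimal sketch of the idea only, not the report's implementation; the example program, its labelling, and the two-state automaton for the negated property are invented for demonstration.

```python
from collections import deque

# Program transition graph: state -> list of (label, successor); labels give
# the truth value of the atomic proposition "error" on the step taken.
program = {
    "s0": [({"error": False}, "s1")],
    "s1": [({"error": False}, "s0"), ({"error": True}, "s2")],
    "s2": [({"error": True}, "s2")],
}

# Two-state automaton for the negation of "globally not error": it moves to
# (and stays in) "bad" as soon as a label with error=True is seen.
def automaton_step(q, label):
    return "bad" if (q == "bad" or label["error"]) else "ok"

def violates_safety(initial_state):
    """Breadth-first search of the product graph; reaching any ('bad', ...)
    product state yields a counterexample to the safety property."""
    start = (initial_state, "ok")
    seen, queue = {start}, deque([start])
    while queue:
        state, q = queue.popleft()
        if q == "bad":
            return True
        for label, succ in program[state]:
            nxt = (succ, automaton_step(q, label))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(violates_safety("s0"))  # True: the error-labelled state s2 is reachable
```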

    k-Step Relative Inductive Generalization

    We introduce a new form of SAT-based symbolic model checking. One common idea in SAT-based symbolic model checking is to generate new clauses from states that can lead to property violations. Our previous work suggests applying induction to generalize from such states. While effective on some benchmarks, the main problem with inductive generalization is that not all such states can be inductively generalized at a given time in the analysis, resulting in long searches for generalizable states on some benchmarks. This paper introduces the idea of inductively generalizing states relative to k-step over-approximations: a given state is inductively generalized relative to the latest k-step over-approximation relative to which the negation of the state is itself inductive. This idea motivates an algorithm that inductively generalizes a given state at the highest level k so far examined, possibly by generating more than one mutually k-step relative inductive clause. We present experimental evidence that the algorithm is effective in practice. (Comment: 14 pages.)
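
    The central query behind relative inductive generalization can be sketched with an SMT solver standing in for the SAT back end. The toy transition relation, the variable names, the frames, and the z3-based helpers below are assumptions made for illustration; they are not the paper's implementation.

```python
from z3 import Bools, And, Not, Solver, substitute, unsat

x0, x1, x0n, x1n = Bools("x0 x1 x0n x1n")
cur, nxt = [x0, x1], [x0n, x1n]

# Toy transition relation: a 2-bit counter (x1 x0) that increments modulo 4.
T = And(x0n == Not(x0), x1n == (x1 != x0))          # x0' = not x0, x1' = x1 xor x0

def prime(f):
    """Rename current-state variables to their next-state copies."""
    return substitute(f, list(zip(cur, nxt)))

def relative_inductive(frame, cube):
    """True iff the negation of `cube` is inductive relative to `frame`,
    i.e. frame AND not(cube) AND T AND cube' is unsatisfiable."""
    s = Solver()
    s.add(frame, Not(cube), T, prime(cube))
    return s.check() == unsat

def highest_relative_level(frames, cube):
    """Largest k such that not(cube) is inductive relative to frames[k];
    frames[k] over-approximates the states reachable in at most k steps."""
    best = -1
    for k, Fk in enumerate(frames):
        if relative_inductive(Fk, cube):
            best = k
    return best

cube = And(x0, x1)                                   # the "bad" state 3
frames = [And(Not(x0), Not(x1)),                     # F0: only the initial state 0
          Not(x1),                                   # F1: states 0 and 1
          Not(And(x0, x1))]                          # F2: states 0, 1 and 2
print(highest_relative_level(frames, cube))          # -> 1
```

    In an IC3-style loop, a state extracted from a solver counterexample would be turned into such a cube, lifted to the highest frame at which its negation is relative inductive, and then generalized by dropping literals while the query above stays unsatisfiable.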

    Heterogeneous Graph Reasoning for Fact Checking over Texts and Tables

    Fact checking aims to predict claim veracity by reasoning over multiple pieces of evidence. It usually involves evidence retrieval and veracity reasoning. In this paper, we focus on the latter: reasoning over unstructured text and structured table information. Previous works have primarily relied on fine-tuning pretrained language models or training homogeneous-graph-based models. Despite their effectiveness, we argue that they fail to explore the rich semantic information underlying evidence with different structures. To address this, we propose a novel word-level heterogeneous-graph-based model for fact checking over unstructured and structured information, namely HeterFC. Our approach leverages a heterogeneous evidence graph, with words as nodes and thoughtfully designed edges representing different evidence properties. We perform information propagation via a relational graph neural network, facilitating interactions between the claim and the evidence. An attention-based method is utilized to integrate information, combined with a language model for generating predictions. We introduce a multitask loss function to account for potential inaccuracies in evidence retrieval. Comprehensive experiments on the large fact checking dataset FEVEROUS demonstrate the effectiveness of HeterFC. Code will be released at: https://github.com/Deno-V/HeterFC. (Comment: accepted at the 38th AAAI Conference on Artificial Intelligence, AAAI.)
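
    A generic sketch of the two building blocks named above, relational message passing over a word-level graph and claim-conditioned attention pooling, is given below. This is an R-GCN-style layer in plain PyTorch, not the authors' released code; the tensor shapes, edge representation, and pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RelationalGraphLayer(nn.Module):
    """One R-GCN-style layer: a separate linear map per edge type, with
    summed messages plus a self-loop transform."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            [nn.Linear(dim, dim, bias=False) for _ in range(num_relations)])
        self.self_loop = nn.Linear(dim, dim, bias=False)

    def forward(self, x, edges_by_relation):
        """x: (num_nodes, dim); edges_by_relation[r]: LongTensor of shape
        (2, num_edges_r) holding (source, target) pairs for relation r."""
        out = self.self_loop(x)
        for r, edge_index in enumerate(edges_by_relation):
            src, dst = edge_index
            msg = self.rel_weights[r](x[src])     # messages along relation r
            out = out.index_add(0, dst, msg)      # accumulate at target words
        return torch.relu(out)

class ClaimAttentionPool(nn.Module):
    """Claim-conditioned attention over word nodes, yielding one evidence vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, nodes, claim_vec):
        """nodes: (num_nodes, dim); claim_vec: (dim,)."""
        claim = claim_vec.expand(nodes.size(0), -1)
        alpha = torch.softmax(self.score(torch.cat([nodes, claim], dim=-1)), dim=0)
        return (alpha * nodes).sum(dim=0)
```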

    Trail-directed model checking

    HSF-SPIN is a Promela model checker based on heuristic search strategies. It utilizes heuristic estimates to direct the search for software bugs in concurrent systems. As a consequence, HSF-SPIN is able to find shorter trails than blind depth-first search. This paper contributes an extension to the paradigm of directed model checking for shortening already established, unacceptably long error trails. The approach has been implemented in HSF-SPIN. For selected benchmark and industrial communication protocols, experimental evidence is given that trail-directed model checking effectively shortens existing witness paths.
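
    The trail-shortening idea can be illustrated as a heuristic search directed at the error state reached by an existing, possibly very long counterexample. The toy graph and the zero heuristic below are assumptions, not HSF-SPIN internals; any admissible estimate of the distance to the error state could be plugged in.

```python
import heapq
import itertools

def shorten_trail(successors, initial, error_state, heuristic):
    """A*-style search for a shortest path from `initial` to `error_state`.
    `successors(s)` yields successor states; `heuristic(s)` estimates the
    remaining distance to `error_state` and must not overestimate it."""
    counter = itertools.count()                      # tie-breaker for the heap
    frontier = [(heuristic(initial), next(counter), 0, initial, [initial])]
    best_cost = {initial: 0}
    while frontier:
        _, _, cost, state, path = heapq.heappop(frontier)
        if state == error_state:
            return path                              # a shortest error trail
        for succ in successors(state):
            new_cost = cost + 1
            if new_cost < best_cost.get(succ, float("inf")):
                best_cost[succ] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic(succ),
                                          next(counter), new_cost, succ, path + [succ]))
    return None

# Hypothetical example: states 0..9 in a chain, plus a shortcut edge 0 -> 9.
succ = lambda s: [s + 1, 9] if s == 0 else ([s + 1] if s < 9 else [])
print(shorten_trail(succ, 0, 9, heuristic=lambda s: 0))   # -> [0, 9]
```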

    Statistical Reasoning: Choosing and Checking the Ingredients, Inferences Based on a Measure of Statistical Evidence with Some Applications

    The features of a logically sound approach to a theory of statistical reasoning are discussed. A particular approach that satisfies these criteria is reviewed. It involves selecting a model, model checking, eliciting a prior, checking the prior for bias, checking for prior-data conflict, and making estimation and hypothesis assessment inferences based on a measure of evidence. A long-standing anomalous example is resolved by this approach to inference, and an application is made to a practical problem of considerable importance which, among other novel aspects of the analysis, involves the development of a relevant elicitation algorithm.
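
    As a numerical illustration of two of the steps listed above, the sketch below computes a posterior-to-prior density ratio (one common measure of evidence) and a prior-data conflict tail probability in a conjugate Beta-Binomial model. The abstract does not specify the paper's exact measure, prior, or diagnostics, so the model, numbers, and ratio below are assumptions for illustration only.

```python
from scipy.stats import beta, betabinom

a, b = 2.0, 2.0            # elicited Beta(a, b) prior for a success probability
n, x = 50, 32              # observed data: x successes in n trials
theta0 = 0.5               # hypothesised value to assess

prior = beta(a, b)
post = beta(a + x, b + n - x)                    # conjugate posterior

# Relative-belief-style ratio at theta0: > 1 is evidence for, < 1 against.
rb = post.pdf(theta0) / prior.pdf(theta0)
print(f"evidence ratio at theta0 = {theta0}: {rb:.3f}")

# A simple prior-data conflict check: how extreme is the observed count under
# the prior predictive (Beta-Binomial) distribution?
prior_pred = betabinom(n, a, b)
tail = min(prior_pred.cdf(x), 1 - prior_pred.cdf(x - 1))
print(f"prior predictive tail probability: {tail:.3f}")
```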

    Automata-theoretic and bounded model checking for linear temporal logic

    In this work we study methods for model checking the temporal logic LTL. The focus is on the automata-theoretic approach to model checking and on bounded model checking. We begin by examining automata-theoretic methods for model checking LTL safety properties. The model checking problem can be reduced to checking whether the language of a finite state automaton on finite words is empty. We describe an efficient algorithm for generating small finite state automata for so-called non-pathological safety properties. The presented implementation is the first tool able to decide whether a formula is non-pathological. The experimental results show that treating safety properties specially can benefit model checking at very little cost. In addition, we find supporting evidence for the view that minimising the automaton representing the property does not always lead to a small product state space: a deterministic property automaton can result in a smaller product state space even though it might have a larger number of states. Next we investigate modular analysis, a state space reduction method for modular Petri nets. The method can be used to construct a reduced state space called the synchronisation graph. We devise an on-the-fly automata-theoretic method for model checking the behaviour of a modular Petri net from the synchronisation graph. The solution is based on reducing the model checking problem to an instance of verification with testers. We analyse the tester verification problem and present an efficient on-the-fly algorithm, the first complete solution to the tester verification problem, based on generalised nested depth-first search. We have also studied propositional encodings for bounded model checking LTL. A new, simple, linear-sized encoding is developed and experimentally evaluated; the implementation in the NuSMV2 model checker is competitive with previously presented encodings. We show how to generalise the LTL encoding to a more succinct logic, LTL with past operators. The generalised encoding compares favourably with previous encodings for LTL with past operators. Links between bounded model checking and the automata-theoretic approach are also explored.
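
    The bounded model checking idea studied here can be sketched compactly: unroll the transition relation k steps and ask a solver whether a bad state is reachable within the bound. The toy system (a 3-bit counter that must never reach 7) and the use of z3 as the back end are assumptions made for illustration; the thesis itself works with propositional encodings and the NuSMV2 model checker.

```python
from z3 import BitVec, Solver, Or, sat

def bmc_reach_bad(bound):
    """Return True iff the bad state (counter value 7) is reachable within
    `bound` steps of the unrolled transition relation."""
    s = Solver()
    state = [BitVec(f"c_{i}", 3) for i in range(bound + 1)]
    s.add(state[0] == 0)                                  # initial state
    for i in range(bound):
        s.add(state[i + 1] == state[i] + 1)               # transition: increment
    s.add(Or([state[i] == 7 for i in range(bound + 1)]))  # bad state within bound
    return s.check() == sat

print(bmc_reach_bad(5))   # False: 7 is not reachable within 5 steps
print(bmc_reach_bad(7))   # True: a counterexample of length 7 exists
```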

    A diagnostic m-test for distributional specification of parametric conditional heteroscedasticity models for financial data

    This paper proposes a convenient and generally applicable diagnostic m-test for checking the distributional specification of parametric conditional heteroscedasticity models for financial data, such as the customary Student-t GARCH model. The proposed test is based on the moments of the probability integral transform of the estimated innovations of the assumed model. Monte Carlo evidence indicates that the suggested test performs well in terms of both size and power.
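
    The ingredient behind the test can be sketched as follows: transform the (estimated) innovations with the assumed innovation CDF and compare sample moments of the result with the Uniform(0,1) moments they should match under a correct specification. The simulated data, the t(8) assumption, and the simple z-statistics below are illustrative, not the paper's exact m-test construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nu = 8                                                          # assumed degrees of freedom
innovations = stats.t.rvs(df=nu, size=2000, random_state=rng)   # stand-in for estimated innovations

u = stats.t.cdf(innovations, df=nu)                             # probability integral transform

# Under a correct specification u ~ Uniform(0,1), so E[u^k] = 1/(k+1).
for k in (1, 2, 3, 4):
    m_hat = np.mean(u ** k)
    se = np.std(u ** k, ddof=1) / np.sqrt(len(u))
    z = (m_hat - 1.0 / (k + 1)) / se
    print(f"moment {k}: sample {m_hat:.4f}, target {1 / (k + 1):.4f}, z = {z:+.2f}")
```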

    The Ouroboros Model

    At the core of the Ouroboros Model lies a self-referential, recursive process with alternating phases of data acquisition and evaluation. Memory entries are organized in schemata. Activation of part of a schema at a given time biases the whole structure and, in particular, its missing features, thus triggering expectations. An iterative, recursive monitor process termed ‘consumption analysis’ then checks how well such expectations fit with successive activations. A measure of the goodness of fit, “emotion”, provides feedback as a (self-)monitoring signal. Contradictions between anticipations based on previous experience and actual current data are highlighted, as are minor gaps and deficits. The basic algorithm can be applied to goal-directed movements as well as to abstract rational reasoning when weighing evidence for and against remote theories. A sketch is provided of how the Ouroboros Model can shed light on rather different characteristics of human behavior, including learning and meta-learning. Partial implementations have proved effective in dedicated safety systems.
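
    A toy rendering of the consumption-analysis loop described above is given below: a schema with expected features is activated, incoming activations are consumed against it, and a goodness-of-fit signal flags both gaps and contradictions. The feature representation and the scoring are invented for illustration; they are not part of the Ouroboros Model's published implementations.

```python
def consumption_analysis(schema, observations):
    """schema: feature -> expected value; observations: iterable of
    (feature, value) activations arriving over time."""
    open_slots = dict(schema)                 # expectations not yet consumed
    contradictions = []
    for feature, value in observations:
        expected = open_slots.pop(feature, None)
        if expected is not None and expected != value:
            contradictions.append((feature, expected, value))
    matched = len(schema) - len(open_slots) - len(contradictions)
    emotion = matched / len(schema)           # goodness of fit in [0, 1]
    return emotion, contradictions, list(open_slots)   # fit, conflicts, missing features

# Hypothetical "cup" schema checked against two incoming activations.
schema = {"has_handle": True, "holds_liquid": True, "made_of": "ceramic"}
obs = [("has_handle", True), ("made_of", "glass")]
print(consumption_analysis(schema, obs))
# -> (0.333..., [('made_of', 'ceramic', 'glass')], ['holds_liquid'])
```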

    Using realistic trading strategies in an agent-based stock market model

    The use of agent-based models (ABMs) to simulate social systems and, in particular, financial markets has increased in recent years. ABMs of financial markets are usually validated by checking the ability of the model to reproduce a set of empirical stylised facts. However, other common-sense evidence is available that is often not taken into account, resulting in models that are valid but not sensible. In this paper we present an ABM of a stock market that incorporates this type of common-sense evidence and implements realistic trading strategies drawn from the practitioner literature. We then validate the model using a comprehensive approach consisting of four steps: assessment of face validity, sensitivity analysis, calibration, and validation of model outputs.
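
    The kind of simulation loop such a model builds on can be sketched compactly: heterogeneous traders submit orders from rule-based strategies and the price moves with the resulting excess demand. The two strategies and all parameters below are generic textbook examples, not the calibrated model presented in the paper.

```python
import random

def trend_follower(prices, window=5):
    """Buy when the last price is above its recent moving average, else sell."""
    if len(prices) <= window:
        return 0
    return 1 if prices[-1] > sum(prices[-window:]) / window else -1

def fundamentalist(prices, fair_value=100.0):
    """Buy below an assumed fair value, sell above it."""
    return 1 if prices[-1] < fair_value else -1

def simulate(steps=200, n_trend=50, n_fund=50, impact=0.0002, noise=0.2, seed=1):
    random.seed(seed)
    prices = [100.0]
    for _ in range(steps):
        demand = (n_trend * trend_follower(prices)
                  + n_fund * fundamentalist(prices)
                  + random.gauss(0, noise) * (n_trend + n_fund))
        prices.append(prices[-1] * (1 + impact * demand))   # price impact of net demand
    return prices

prices = simulate()
returns = [prices[i + 1] / prices[i] - 1 for i in range(len(prices) - 1)]
volatility = (sum(r * r for r in returns) / len(returns)) ** 0.5
print(f"final price {prices[-1]:.2f}, per-step return volatility {volatility:.4%}")
```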

    PANACEA: An Automated Misinformation Detection System on COVID-19

    In this demo, we introduce PANACEA, a web-based misinformation detection system for COVID-19-related claims, which has two modules: fact-checking and rumour detection. Our fact-checking module, which is supported by novel natural language inference methods with a self-attention network, outperforms state-of-the-art approaches. It also gives an automated veracity assessment and ranked supporting evidence, with the stance towards the claim being checked. In addition, PANACEA adapts a bi-directional graph convolutional network model that detects rumours based on the comment networks of related tweets instead of relying on a knowledge base. This rumour detection module assists by warning users at an early stage, when a knowledge base may not yet be available.
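
    The bi-directional graph convolution over a tweet comment network can be sketched generically: one pass propagates features along reply edges (top-down from the source tweet), the other along reversed edges (bottom-up), and the two views are combined. The tiny layer, shapes, and pooling below are illustrative assumptions, not PANACEA's code.

```python
import torch
import torch.nn as nn

def normalised_adj(adj):
    """Row-normalised adjacency with self-loops: D^-1 (A + I)."""
    a = adj + torch.eye(adj.size(0))
    return a / a.sum(dim=1, keepdim=True)

class BiDirectionalGCN(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.down = nn.Linear(in_dim, hid_dim)   # top-down pass (source -> replies)
        self.up = nn.Linear(in_dim, hid_dim)     # bottom-up pass (replies -> source)

    def forward(self, x, adj):
        """x: (num_tweets, in_dim); adj[i, j] = 1 iff tweet j replies to tweet i."""
        h_down = torch.relu(normalised_adj(adj) @ self.down(x))
        h_up = torch.relu(normalised_adj(adj.t()) @ self.up(x))
        return torch.cat([h_down, h_up], dim=-1).mean(dim=0)   # pooled rumour representation

# Toy comment tree: node 0 is the source tweet, 1 and 2 reply to 0, 3 replies to 1.
adj = torch.zeros(4, 4)
for parent, child in [(0, 1), (0, 2), (1, 3)]:
    adj[parent, child] = 1.0
features = torch.randn(4, 16)                            # stand-in tweet embeddings
print(BiDirectionalGCN(16, 8)(features, adj).shape)      # torch.Size([16])
```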