
    RML: Runtime Monitoring Language

    Runtime verification is a relatively new software verification technique that aims to prove the correctness of a specific run of a program, rather than statically verifying its code. The program is instrumented to collect all the relevant information, and the resulting trace of events is inspected by a monitor that checks its compliance with a specification of the expected properties of the system under scrutiny. Many languages exist for formally expressing the expected behavior of a system, with different design choices and degrees of expressivity. This thesis presents RML, a specification language designed for runtime verification, with the goal of being completely modular and independent of the instrumentation and the kind of system being monitored. RML is highly expressive and allows one to express complex, parametric, non-context-free properties concisely. RML is compiled down to TC, a lower-level calculus that is fully formalized with a deterministic, rewriting-based semantics. To evaluate the approach, an open-source implementation has been developed and tested on several examples with Node.js programs. Benchmarks show that the monitors automatically generated from RML specifications verify complex properties effectively and efficiently.
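
    As a rough illustration of the runtime-verification setup described above (and not of RML's actual syntax or toolchain), the following Python sketch checks a parametric property over an event trace: every resource that is opened must eventually be closed and must never be used after closing. The names `Event` and `check_trace` are hypothetical.

```python
# Minimal sketch of a trace-checking monitor (hypothetical, not RML's implementation).
# Property: every opened resource is eventually closed and never used after close.
from dataclasses import dataclass

@dataclass
class Event:
    kind: str   # "open", "use", or "close"
    res: str    # resource identifier (the parametric part of the property)

def check_trace(trace):
    open_res, closed_res = set(), set()
    for i, ev in enumerate(trace):
        if ev.kind == "open":
            open_res.add(ev.res)
        elif ev.kind == "use":
            if ev.res not in open_res or ev.res in closed_res:
                return f"violation at event {i}: use of {ev.res} outside its open/close window"
        elif ev.kind == "close":
            if ev.res not in open_res:
                return f"violation at event {i}: close of unopened {ev.res}"
            closed_res.add(ev.res)
    leaked = open_res - closed_res
    return f"violation: never closed {leaked}" if leaked else "trace accepted"

print(check_trace([Event("open", "f1"), Event("use", "f1"), Event("close", "f1")]))
```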

    Advancements and Challenges in Object-Centric Process Mining: A Systematic Literature Review

    Recent years have seen the emergence of object-centric process mining techniques. Born as a response to the limitations of traditional process mining in analyzing event data from prevalent information systems such as CRM and ERP, these techniques aim to tackle the deficiency, convergence, and divergence issues seen in traditional event logs. Despite their promise, adoption in real-world process mining analyses remains limited. This paper embarks on a comprehensive literature review of object-centric process mining, providing insights into the current status of the discipline and its historical trajectory.
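
    A hedged sketch of the underlying data issue (illustrative only, not taken from the paper): in an object-centric event log, one event can refer to several objects of different types; flattening the log to a single case notion duplicates such events (convergence) or splits related behaviour (divergence). The event structure and helper below are made up for illustration.

```python
# Illustrative only: an object-centric event log vs. a flattened, single-case view.
events = [
    # one "create order" event touching one order and two items
    {"activity": "create order", "objects": {"order": ["o1"], "item": ["i1", "i2"]}},
    {"activity": "pick item",    "objects": {"item": ["i1"]}},
    {"activity": "pick item",    "objects": {"item": ["i2"]}},
]

def flatten(events, case_type):
    """Flatten to a classical log with one case id per object of `case_type`.
    Events referring to several such objects get duplicated (convergence)."""
    rows = []
    for ev in events:
        for case_id in ev["objects"].get(case_type, []):
            rows.append((case_id, ev["activity"]))
    return rows

print(flatten(events, "item"))
# [('i1', 'create order'), ('i2', 'create order'), ('i1', 'pick item'), ('i2', 'pick item')]
# "create order" happened once but now appears twice - the convergence problem.
```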

    State-of-the-art authentication and verification schemes in VANETs: A survey

    Vehicular Ad-Hoc Networks (VANETs), a subset of Mobile Ad-Hoc Networks (MANETs), are wireless networks formed around moving vehicles, enabling communication between vehicles, roadside infrastructure, and servers. With the rise of autonomous and connected vehicles, security concerns surrounding VANETs have grown. VANETs still face privacy-related obstacles to full-scale deployment due to a lack of user trust. Critical factors shaping VANETs include their dynamic topology and high mobility. Authentication protocols are the cornerstone of secure transmission between entities within a VANET. Despite concerted efforts, there remains a need to incorporate verification approaches for refining authentication protocols. Formal verification is a mathematical approach that enables developers to validate protocols and rectify design errors with precision. This review therefore focuses on authentication protocols as a pivotal element for securing communication between entities within VANETs. It presents a comparative analysis of existing protocols, identifies research gaps, and introduces a novel framework that incorporates formal verification and threat modeling. The review considers key factors influencing security, sheds light on ongoing challenges, and emphasises the significance of user trust. The proposed framework not only enhances VANET security but also contributes to the growing field of formal verification in the automotive domain. The study identifies several research gaps, challenges, and future research directions, offering valuable guidance for researchers seeking to establish secure, authenticated communication within VANETs.

    Automated Validation of State-Based Client-Centric Isolation with TLA+

    Clear consistency guarantees on data are paramount for the design and implementation of distributed systems. When implementing distributed applications, developers require approaches to verify the data consistency guarantees of an implementation choice. Crooks et al. define a state-based and client-centric model of database isolation. This paper formalizes this state-based model in TLA+, reproduces their examples, and shows how to model check runtime traces and algorithms with this formalization. The formalized model in TLA+ enables semi-automatic model checking of different implementation alternatives for transactional operations and allows checking of conformance to isolation levels. We reproduce examples of the original paper and confirm the isolation guarantees of the combination of the well-known 2-phase locking and 2-phase commit algorithms. Using model checking, this formalization can also help find bugs in incorrect specifications. This improves the feasibility of automated checking of isolation guarantees in synthesized synchronization implementations and provides an environment for experimenting with new designs.
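
    The following Python sketch (not the paper's TLA+ specification) gives a rough idea of the state-based, client-centric view: an execution is serializable if there is some ordering of the committed transactions such that each transaction's reads are explained by the state produced by the writes of the transactions before it. The transaction encoding and key names are made up for illustration.

```python
# Rough, brute-force sketch of a state-based serializability check (illustrative,
# not the TLA+ model from the paper). A transaction is (reads, writes), where
# reads maps key -> value observed and writes maps key -> value installed.
from itertools import permutations

def serializable(transactions, initial_state):
    for order in permutations(transactions):
        state = dict(initial_state)
        ok = True
        for reads, writes in order:
            if any(state.get(k) != v for k, v in reads.items()):
                ok = False                # this state does not explain the reads
                break
            state.update(writes)          # commit: apply the transaction's writes
        if ok:
            return True                   # found an explaining sequence of states
    return False

t1 = ({"x": 0}, {"x": 1})                 # reads x=0, writes x=1
t2 = ({"x": 1}, {"y": 1})                 # reads x=1, writes y=1
print(serializable([t1, t2], {"x": 0, "y": 0}))   # True: t1 then t2 explains both
```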

    The Best a Monitor Can Do

    Existing notions of monitorability for branching-time properties are fairly restrictive. This, in turn, impacts the ability to incorporate prior knowledge about the system under scrutiny - which corresponds to a branching-time property - into the runtime analysis. We propose a definition of optimal monitors that verify the best monitorable under- or over-approximation of a specification, regardless of its monitorability status. Optimal monitors can be obtained for arbitrary branching-time properties by synthesising a sound and complete monitor for their strongest monitorable consequence. We show that the strongest monitorable consequence of specifications expressed in Hennessy-Milner logic with recursion is itself expressible in this logic, and we present a procedure to find it. Our procedure enables prior knowledge to be optimally incorporated into runtime monitors.
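
    As a loose illustration of what a runtime monitor can and cannot conclude (not the synthesis procedure of the paper), the sketch below produces three-valued verdicts for a simple safety property over finite trace prefixes: a violation yields an irrevocable "no", while a clean prefix stays "inconclusive", since a future extension could still misbehave. The property and event names are hypothetical.

```python
# Three-valued prefix monitor for the safety property "no 'error' event ever occurs".
# Illustrative only; the paper's contribution is how to choose the monitored property
# (the strongest monitorable consequence), not this monitoring loop itself.
def monitor(prefix):
    if "error" in prefix:
        return "no"            # irrevocable violation: every extension also violates
    return "inconclusive"      # no finite prefix can establish the property forever

print(monitor(["init", "work"]))            # inconclusive
print(monitor(["init", "error", "work"]))   # no
```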

    DFAMiner: Mining minimal separating DFAs from labelled samples

    We propose DFAMiner, a passive learning tool for learning minimal separating deterministic finite automata (DFAs) from a set of labelled samples. Separating automata are an interesting class of automata that arises in regular model checking and has raised interest in foundational questions of parity game solving. We first propose a simple, linear-time algorithm that incrementally constructs a three-valued DFA (3DFA) from a set of labelled samples given in the usual lexicographical order. This 3DFA has accepting and rejecting states as well as don't-care states, so that it exactly recognises the labelled examples. We then apply our tool to mining a minimal separating DFA for the labelled samples by minimising the constructed automaton via a reduction to SAT solving. Empirical evaluation shows that our tool significantly outperforms current state-of-the-art tools on standard benchmarks for learning minimal separating DFAs from samples. Progress in the efficient construction of separating DFAs can also lead to finding lower bounds for parity game solving; we show that DFAMiner can create optimal separating automata for simple languages with up to 7 colours. Future improvements might offer inroads to better data structures.
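
    A minimal sketch (assuming nothing about DFAMiner's internals) of the first step described above: building a three-valued acceptor from labelled samples as a prefix tree whose states are accepting, rejecting, or don't-care. The subsequent minimisation to a minimal separating DFA, e.g. via SAT, is not shown.

```python
# Build a prefix-tree 3DFA from labelled samples: each state is accepting ("+"),
# rejecting ("-"), or don't-care ("?"). Sketch only; SAT-based minimisation omitted.
def build_3dfa(samples):
    # samples: list of (word, label) pairs with label in {"+", "-"}
    trans, labels = {}, {0: "?"}      # transitions and per-state verdicts; 0 is the root
    next_state = 1
    for word, label in samples:
        state = 0
        for ch in word:
            if (state, ch) not in trans:
                trans[(state, ch)] = next_state
                labels[next_state] = "?"
                next_state += 1
            state = trans[(state, ch)]
        labels[state] = label          # mark the state reached by the complete sample
    return trans, labels

trans, labels = build_3dfa([("ab", "+"), ("a", "-"), ("ba", "+")])
print(labels)   # states reached by complete samples are labelled; the rest stay "?"
```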

    Incomplete SMT techniques for solving non-linear formulas over the integers

    We present new methods for solving the Satisfiability Modulo Theories problem over the theory of Quantifier-Free Non-linear Integer Arithmetic, SMT(QF-NIA), which consists of deciding the satisfiability of ground formulas with integer polynomial constraints. Following previous work, we propose to solve SMT(QF-NIA) instances by reducing them to linear arithmetic: non-linear monomials are linearized by abstracting them with fresh variables and by performing case splitting on integer variables with finite domains. For variables that do not have a finite domain, we can artificially introduce one by imposing a lower and an upper bound, and iteratively enlarge it until a solution is found (or the procedure times out). The key to the success of the approach is to determine, at each iteration, which domains have to be enlarged. Previously, unsatisfiable cores were used to identify the domains to be changed, but they gave no clue as to how large the new domains should be. Here, we explain two novel ways to guide this process by analyzing solutions to optimization problems: (i) minimizing the number of violated artificial domain bounds, solved via a Max-SMT solver, and (ii) minimizing the distance with respect to the artificial domains, solved via an Optimization Modulo Theories (OMT) solver. Using this SMT-based optimization technology allows smoothly extending the method to also solve Max-SMT problems over non-linear integer arithmetic. Finally, we leverage the resulting Max-SMT(QF-NIA) techniques to solve ∃∀ formulas in a fragment of quantified non-linear arithmetic that appears commonly in verification and synthesis applications.
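
    A hedged sketch of the bounded case-splitting idea (using the Z3 Python API purely for illustration; the paper's own implementation and solver differ): the non-linear monomial x*y is abstracted by a fresh variable m, x is restricted to a finite artificial domain, and m = v*y is enforced linearly for each value v in that domain; if the bounded problem is unsatisfiable, the domain is enlarged and the check repeated. The example constraints are invented.

```python
# Sketch of iterative bound enlargement + case splitting for a non-linear constraint.
# Illustrative use of the Z3 Python API; not the solver or code from the paper.
from z3 import Int, Solver, Implies, sat

x, y, m = Int("x"), Int("y"), Int("m")        # m abstracts the non-linear monomial x*y

def solve_with_bound(bound):
    """Check the linearized problem with x restricted to [-bound, bound]."""
    s = Solver()
    # Original (non-linear) problem: x*y == 12, x > y, y > 0.
    # Linearized form: the monomial x*y is replaced by the fresh variable m.
    s.add(m == 12, x > y, y > 0)
    s.add(x >= -bound, x <= bound)            # artificial finite domain for x
    for v in range(-bound, bound + 1):        # case split: x == v makes m == v*y linear
        s.add(Implies(x == v, m == v * y))
    return s.model() if s.check() == sat else None

bound = 1
while (model := solve_with_bound(bound)) is None:
    bound *= 2                                # domain too small: enlarge and retry
print(bound, model)                           # e.g. bound 4 with x=4, y=3, m=12
```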

    Unmonitorability of Artificial Intelligence

    Artificially Intelligent (AI) systems have ushered in a transformative era across various domains, yet their inherent unpredictability, unexplainability, and uncontrollability have given rise to concerns surrounding AI safety. This paper aims to demonstrate the infeasibility of accurately monitoring advanced AI systems to predict the emergence of certain capabilities prior to their manifestation. Through an analysis of the intricacies of AI systems, the boundaries of human comprehension, and the elusive nature of emergent behaviors, we argue for the impossibility of reliably foreseeing some capabilities. By investigating these impossibility results, we shed light on their potential implications for AI safety research and propose potential strategies to overcome these limitations.