
    An MML-based tool for evaluating the complexity of (stochastic) logic programs

    This thesis presents the first general MML coding scheme for evaluating the simplicity of models expressed as stochastic logic programs, together with an accompanying tool written in Prolog. Castillo Andreu, H. (2012). An MML-based tool for evaluating the complexity of (stochastic) logic programs. http://hdl.handle.net/10251/17983
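    The two-part MML principle the thesis builds on can be pictured with a minimal Python sketch: the total message length is the cost of describing the model plus the cost of encoding the data given the model, and the shortest total wins. The candidate models and their description lengths below are hypothetical illustrations, not the thesis's actual coding scheme for stochastic logic programs.

```python
# Minimal sketch of the two-part MML principle (illustrative only).
import math

def data_cost_bits(probs, data):
    """Bits needed to encode `data` under a model assigning P(outcome)."""
    return sum(-math.log2(probs[x]) for x in data)

def mml_score(model_cost_bits, probs, data):
    """Two-part message length: model description plus data given model."""
    return model_cost_bits + data_cost_bits(probs, data)

# Observed outcomes of some probabilistic predicate (success/failure).
data = list("ssfsssfsss")

# Hypothetical candidates: (description length in bits, outcome probabilities).
candidates = {
    "uniform (cheap to state)":    (2.0, {"s": 0.5, "f": 0.5}),
    "biased  (costlier to state)": (8.0, {"s": 0.8, "f": 0.2}),
}

for name, (model_bits, probs) in candidates.items():
    total = mml_score(model_bits, probs, data)
    print(f"{name}: total message length = {total:.2f} bits")
# MML prefers the candidate with the smallest total message length.
```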

    Tools and Algorithms for the Construction and Analysis of Systems

    This open access two-volume set constitutes the proceedings of the 26th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 60 regular papers presented in these volumes were carefully reviewed and selected from 155 submissions. The papers are organized in topical sections as follows: Part I: Program Verification; SAT and SMT; Timed and Dynamical Systems; Verifying Concurrent Systems; Probabilistic Systems; Model Checking and Reachability; and Timed and Probabilistic Systems. Part II: Bisimulation; Verification and Efficiency; Logic and Proof; Tools and Case Studies; Games and Automata; and SV-COMP 2020.

    Runtime Verification of Temporal Properties over Out-of-order Data Streams

    We present a monitoring approach for verifying systems at runtime. Our approach targets systems whose components communicate with the monitors over unreliable channels, where messages can be delayed or lost. In contrast to prior works, whose property specification languages are limited to propositional temporal logics, our approach handles an extension of the real-time logic MTL with freeze quantifiers for reasoning about data values. We present its underlying theory, based on a new three-valued semantics that is well suited to soundly and completely reason online about event streams in the presence of message delay or loss. We also evaluate our approach experimentally: our prototype implementation processes hundreds of events per second in settings where messages are received out of order. (Long version of the CAV 2017 paper.)
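    The three-valued idea can be illustrated with a toy bounded-response property ("every request is acknowledged within a deadline") monitored over an out-of-order stream: a verdict stays unknown as long as a delayed message could still change it. This is a minimal sketch of the concept, not the paper's MTL-with-freeze-quantifiers algorithm; the property, events, and watermark mechanism are assumptions for illustration.

```python
# Minimal sketch of three-valued online monitoring over an out-of-order stream.
DEADLINE = 5   # time units within which each "req" must be acknowledged

def verdict(req_time, acks, watermark):
    """Three-valued verdict for one request, given acks seen so far.

    `watermark` is the time below which the (reordering, lossy) channel
    guarantees that all messages have already been delivered.
    """
    if any(req_time < t <= req_time + DEADLINE for t in acks):
        return "true"
    if watermark > req_time + DEADLINE:
        return "false"      # the whole deadline window is known, no ack in it
    return "unknown"        # a delayed ack may still arrive

# Events arrive out of order as (timestamp, kind) pairs.
stream = [(10, "req"), (1, "req"), (4, "ack"), (20, "req")]
acks = {t for t, kind in stream if kind == "ack"}
watermark = 16              # everything with timestamp < 16 has arrived

for t, kind in sorted(e for e in stream if e[1] == "req"):
    print(f"req@{t}: {verdict(t, acks, watermark)}")
# req@1  -> true    (ack@4 falls inside its window)
# req@10 -> false   (window (10, 15] fully delivered, no ack)
# req@20 -> unknown (window still open; a late ack could arrive)
```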

    Automated simulation and verification of process models discovered by process mining

    This paper presents a novel approach for automated analysis of process models discovered using process mining techniques. Process mining explores underlying processes hidden in the event data generated by various devices. Our proposed inductive machine learning method was used to build business process models based on actual event log data obtained from a hotel's Property Management System (PMS). The PMS can be considered a multi-agent system (MAS) because it is integrated with a variety of external systems and IoT devices. The collected event log combines data on guest stays recorded by hotel staff, as well as data streams captured from the telephone exchange and other external IoT devices. Next, we performed automated analysis of the discovered process models using formal methods. The Spin model checker was used to simulate process model executions and automatically verify the process model. We proposed an algorithm for the automatic transformation of the discovered process model into a verification model. Additionally, we developed a generator of positive and negative examples. In the verification stage, we also used linear temporal logic (LTL) to define the requested system specifications. We find that the analysis results are well suited for process model repair.
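    The positive/negative example generation step can be sketched with a toy discovered model represented as an activity graph: positive traces replay the model from start to completion, while negative traces break it (here by dropping the final event). The activities and the replay check below are hypothetical illustrations, not the paper's transformation algorithm or its Promela/Spin model.

```python
# Minimal sketch of generating positive and negative example traces
# from a discovered process model (hypothetical hotel-stay activities).
import random

MODEL = {                        # activity -> allowed next activities
    "check_in":    ["room_assign"],
    "room_assign": ["phone_call", "check_out"],
    "phone_call":  ["check_out"],
    "check_out":   [],
}
START, END = "check_in", "check_out"

def positive_trace():
    """Random walk through the model from START to END (a valid case)."""
    trace, current = [START], START
    while current != END:
        current = random.choice(MODEL[current])
        trace.append(current)
    return trace

def negative_trace():
    """Drop the final event of a valid case, so it never completes."""
    return positive_trace()[:-1]

def conforms(trace):
    """Replay a trace against the model (a tiny stand-in for verification)."""
    return (trace[0] == START and trace[-1] == END and
            all(b in MODEL[a] for a, b in zip(trace, trace[1:])))

random.seed(1)
pos, neg = positive_trace(), negative_trace()
print(pos, "->", conforms(pos))   # True: follows the discovered model
print(neg, "->", conforms(neg))   # False: violates the completion requirement
```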

    An automatic documentation generator for (C)LP - Reference manual

    Lpdoc is an automatic program documentation generator for (C)LP systems. Lpdoc generates a reference manual automatically from one or more source files for a logic program (including ISO-Prolog, Ciao, many CLP systems, ...). It is particularly useful for documenting library modules, for which it automatically generates a description of the module interface. However, lpdoc can also be used quite successfully to document full applications and to generate nicely formatted plain ASCII "readme" files. A fundamental advantage of using lpdoc to document programs is that it is much easier to maintain a true correspondence between the program and its documentation, and to identify precisely to which version of the program a given printed manual corresponds. The quality of the generated documentation can be greatly enhanced by including within the program text assertions (types, modes, etc.) for the predicates in the program, and machine-readable comments (in the "literate programming" style). The assertions and comments included in the source file need to be written using the Ciao system assertion language. A simple compatibility library is available to make traditional (constraint) logic programming systems ignore these assertions and comments, allowing normal treatment of programs documented in this way. The documentation is currently generated in HTML or texinfo format. From the texinfo output, printed and on-line manuals in several formats (dvi, ps, info, etc.) can be generated automatically using publicly available tools. Lpdoc can also generate 'man' pages (Unix man page format) as well as brief descriptions in HTML or Emacs info formats suitable for inclusion in an on-line index of applications. In particular, lpdoc can create and maintain fully automatically WWW and info sites containing on-line versions of the documents it produces. The lpdoc manual (and the Ciao system manuals) are generated by lpdoc. Lpdoc is distributed under the GNU General Public License. Note: lpdoc is fully supported on Linux, Mac OS X, and other Un*x-like systems. Due to the use of several Un*x-related utilities, some documentation back-ends may require Cygwin under Win32. This documentation corresponds to version 3.0 (2011/7/7, 16:33:15 CEST).

    Caching, crashing & concurrency - verification under adverse conditions

    The formal development of large-scale software systems is a complex and time-consuming effort. Generally, its main goal is to prove the functional correctness of the resulting system. This goal becomes significantly harder to reach when the verification must be performed under adverse conditions. When aiming for a realistic system, the implementation must be compatible with the “real world”: it must work with existing system interfaces, cope with uncontrollable events such as power cuts, and offer competitive performance by using mechanisms like caching or concurrency. The Flashix project is an example of such a development, in which a fully verified file system for flash memory has been developed. The project is a long-term team effort and resulted in a sequential, functionally correct and crash-safe implementation after its first project phase. This thesis continues the work by performing modular extensions to the file system with performance-oriented mechanisms that mainly involve caching and concurrency, always considering crash-safety. As a first contribution, this thesis presents a modular verification methodology for destructive heap algorithms. The approach simplifies the verification by separating reasoning about specifics of heap implementations, like pointer aliasing, from the reasoning about conceptual correctness arguments. The second contribution of this thesis is a novel correctness criterion for crash-safe, cached, and concurrent file systems. A natural criterion for crash-safety is defined in terms of system histories, matching the behavior of fine-grained caches using complex synchronization mechanisms that reorder operations. The third contribution comprises methods for verifying functional correctness and crash-safety of caching mechanisms and concurrency in file systems. A reference implementation for crash-safe caches of high-level data structures is given, and a strategy for proving crash-safety is demonstrated and applied. A compatible concurrent implementation of the top layer of file systems is presented, using a mechanism for the efficient management of fine-grained file locking, and a concurrent version of garbage collection is realized. Both concurrency extensions are proven to be correct by applying atomicity refinement, a methodology for proving linearizability. Finally, this thesis contributes a new iteration of executable code for the Flashix file system. With the efficiency extensions introduced with this thesis, Flashix covers all performance-oriented concepts of realistic file system implementations and achieves competitiveness with state-of-the-art flash file systems.
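    The crash-safety criterion for cached state can be pictured with a toy write-back store: a crash loses unflushed cache contents, so the recovered state must match the state after the last sync point. This is an illustrative sketch only, not Flashix's actual cache design or its formal correctness criterion.

```python
# Minimal sketch of crash-safety for a write-back cache (illustrative only).

class CachedStore:
    def __init__(self):
        self.disk = {}           # persistent state (survives crashes)
        self.cache = {}          # volatile write-back cache

    def write(self, key, value):
        self.cache[key] = value  # buffered only, not yet persistent

    def sync(self):
        self.disk.update(self.cache)   # flush dirty entries to disk
        self.cache.clear()

    def read(self, key):
        return self.cache.get(key, self.disk.get(key))

    def crash_and_recover(self):
        self.cache.clear()       # volatile contents are lost in the crash
        return dict(self.disk)   # recovered state = last synced state

store = CachedStore()
store.write("a.txt", "v1")
store.sync()                     # "v1" is now persistent
store.write("a.txt", "v2")       # buffered only, lost on crash
recovered = store.crash_and_recover()
assert recovered == {"a.txt": "v1"}   # crash-safe: matches a synced prefix
print(recovered)
```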

    Acta Cybernetica: Volume 15, Number 4.


    Tools and Algorithms for the Construction and Analysis of Systems

    This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers from the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.