
    Causal computational complexity of distributed processes

    This article studies the complexity of π-calculus processes with respect to the quantity of transitions caused by an incoming message. First, we propose a typing system that integrates Bellantoni and Cook's characterisation of polytime computable functions into Deng and Sangiorgi's typing system for termination. We then define the computational complexity of distributed messages based on Degano and Priami's causal semantics, which identifies the dependencies between interleaved transitions. Next, we apply a necessary syntactic flow analysis to typable processes to ensure a computational bound on the number of distributed messages. We prove that our analysis is decidable; sound, in the sense that it guarantees that the total number of messages causally dependent on an input request received from the outside is bounded by a polynomial in the content of this request; and complete, meaning that each polynomial recursive function can be computed by a typable process.
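
    For intuition, here is a minimal sketch (ours, not the paper's) of Bellantoni and Cook's safe recursion on notation, the polytime discipline the typing system above imports; the names `s0`, `s1`, and `concat` are illustrative only.

```python
# Hedged sketch of Bellantoni and Cook's "safe recursion on notation".
# Arguments are split into "normal" (may drive recursion) and "safe"
# (results of recursive calls may only land here). Python cannot
# enforce the split, so it is tracked by naming convention
# (x_n = normal, y_s = safe).

def s0(y_s: int) -> int:
    """Binary successor s_0: append digit 0 to the notation of y."""
    return 2 * y_s

def s1(y_s: int) -> int:
    """Binary successor s_1: append digit 1."""
    return 2 * y_s + 1

def concat(x_n: int, y_s: int) -> int:
    """Append the binary digits of x onto y, by safe recursion on x.

    Scheme: f(0; y) = y,  f(s_i(x); y) = s_i(f(x; y)).
    The recursive result appears only in a safe position, so it can
    never drive another recursion; this caps the output size at
    |x| + |y| and, in general, keeps definable functions polytime.
    """
    if x_n == 0:
        return y_s
    rec = concat(x_n // 2, y_s)  # recurse on the normal argument only
    return s1(rec) if x_n % 2 else s0(rec)

# concat(0b101, 0b11) == 0b11101 == 29
assert concat(0b101, 0b11) == 0b11101
```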

    Process Mining Handbook

    This open access book comprises the courses given as part of the First Summer School on Process Mining, PMSS 2022, which was held in Aachen, Germany, during July 4-8, 2022. The volume contains 17 chapters, organized into the following topical sections: introduction; process discovery; conformance checking; data preprocessing; process enhancement and monitoring; assorted process mining topics; industrial perspective and applications; and closing.

    A decentralized analysis of multiparty protocols

    Protocols provide the unifying glue in concurrent and distributed software today; verifying that message-passing programs conform to such governing protocols is important but difficult. Static approaches based on multiparty session types (MPST) use protocols as types to avoid protocol violations and deadlocks in programs. An elusive problem for MPST is to ensure both protocol conformance and deadlock-freedom for implementations with interleaved and delegated protocols. We propose a decentralized analysis of multiparty protocols, specified as global types and implemented as interacting processes in an asynchronous π-calculus. Our solution rests upon two novel notions: router processes and relative types. While router processes use the global type to enable the composition of participant implementations in arbitrary process networks, relative types extract from the global type the intended interactions and dependencies between pairs of participants. In our analysis, processes are typed using APCP, a type system developed in prior work that ensures protocol conformance and deadlock-freedom with respect to binary protocols. Our decentralized, router-based analysis enables the sound and complete transference of protocol conformance and deadlock-freedom from APCP to multiparty protocols.
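
    For readers unfamiliar with global types, the following hedged sketch, written in standard MPST notation rather than the paper's concrete syntax for relative types and routers, gives the flavor of the two notions.

```latex
% Hedged illustration in standard MPST notation (not the paper's exact
% syntax): a global type G between a buyer a and a seller b.
\[
  G \;=\; a \to b : \mathsf{order}\langle\mathsf{str}\rangle.\;
          b \to a : \mathsf{price}\langle\mathsf{int}\rangle.\;
          a \to b : \{\, \mathsf{ok}.\,\mathbf{end} \;,\;
                        \mathsf{quit}.\,\mathbf{end} \,\}
\]
% Intuitively, the relative type of the pair (a, b) collects exactly
% the exchanges and dependencies that G prescribes between a and b
% (here, all of G), while a router process for a, synthesised from G,
% mediates between a's APCP-typed binary implementation and the rest
% of the process network.
```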

    Programming Languages and Systems

    This open access book constitutes the proceedings of the 31st European Symposium on Programming, ESOP 2022, which was held during April 5-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 21 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. They deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.

    Computer Aided Verification

    This open access two-volume set, LNCS 13371 and 13372, constitutes the refereed proceedings of the 34th International Conference on Computer Aided Verification, CAV 2022, which was held in Haifa, Israel, in August 2022. The 40 full papers presented together with 9 tool papers and 2 case studies were carefully reviewed and selected from 209 submissions. The papers are organized in the following topical sections: Part I: invited papers; formal methods for probabilistic programs; formal methods for neural networks; software verification and model checking; hyperproperties and security; formal methods for hardware, cyber-physical, and hybrid systems. Part II: probabilistic techniques; automata and logic; deductive verification and decision procedures; machine learning; synthesis and concurrency.

    Modelling, Reverse Engineering, and Learning Software Variability

    Society expects software to deliver the right functionality, in a short amount of time and with few resources, in every possible circumstance, whatever the hardware, the operating system, the compiler, or the data fed as input. To fit such a diversity of needs, software commonly comes in many variants and is highly configurable, through configuration options, runtime parameters, conditional compilation directives, menu preferences, configuration files, plugins, etc. As there is no one-size-fits-all solution, software variability ("the ability of a software system or artifact to be efficiently extended, changed, customized or configured for use in a particular context") has been studied for the last two decades and is a discipline of its own. Though highly desirable, software variability also introduces enormous complexity due to the combinatorial explosion of possible variants. For example, the Linux kernel has 15000+ options, most of which can take 3 values: "yes", "no", or "module". Variability is challenging for maintaining, verifying, and configuring software systems (Web applications, Web browsers, video tools, etc.). It is also a source of opportunities: to better understand a domain, create reusable artefacts, deploy performance-wise optimal systems, or find specialized solutions to many kinds of problems. In many scenarios, a model of variability is either beneficial or mandatory to explore, observe, and reason about the space of possible variants. For instance, without a variability model, it is impossible to establish a sampling strategy that satisfies the constraints among options and meets coverage or testing criteria.

    I address a central question in this HDR manuscript: how to model software variability? I detail several contributions related to modelling, reverse engineering, and learning software variability. First, I contribute to supporting the persons in charge of manually specifying feature models, the de facto standard for modelling variability. I develop an algebra, together with a language, for the composition, decomposition, diff, refactoring, and reasoning over feature models. I further establish the syntactic and semantic relationships between feature models and product comparison matrices, a large class of tabular data. I then empirically investigate how these feature models can be used to test large configurable systems with different sampling strategies. Along the way, I report on the attempts and lessons learned when defining the "right" variability language.

    From a reverse engineering perspective, I contribute to synthesizing variability information into models from various kinds of artefacts. I develop foundations and methods for reverse engineering feature models from satisfiability formulae, product comparison matrices, dependency files and architectural information, and Web configurators. I also report on the degree of automation and show that involving developers and domain experts is beneficial for obtaining high-quality models.

    Third, I contribute to learning constraints and non-functional properties (performance) of a variability-intensive system. I describe a systematic "sampling, measuring, learning" process that aims to enforce or augment a variability model, capturing variability knowledge that domain experts can hardly express. I show that supervised, statistical machine learning can be used to synthesize rules or build prediction models in an accurate and interpretable way; this process can even be applied to huge configuration spaces, such as that of the Linux kernel.

    Despite wide applicability and observed benefits, each individual line of contributions has limitations. I defend the following answer: a supervised, iterative process (1) based on the combination of reverse engineering, modelling, and learning techniques, and (2) capable of integrating multiple sources of variability information (e.g., expert knowledge, legacy artefacts, dynamic observations). Finally, this work opens different perspectives related to so-called deep software variability, security, smart build of configurations, and (threats to) science.
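
    As a minimal, hypothetical sketch of why a variability model is a prerequisite for sampling: below, a toy feature model is encoded as a propositional constraint and its valid configurations are enumerated by brute force. The feature names and constraints are invented for illustration; real models (e.g., the Linux kernel's roughly 3^15000 raw option combinations before constraints) require SAT/BDD tooling rather than enumeration.

```python
# Toy feature model as a propositional constraint, with brute-force
# enumeration of the valid configuration space. Illustrative only.
from itertools import product

FEATURES = ["base", "gui", "cli", "themes"]

def valid(cfg: dict[str, bool]) -> bool:
    """Cross-tree constraints of the toy feature model."""
    return (
        cfg["base"]                             # root feature is mandatory
        and (cfg["gui"] or cfg["cli"])          # or-group: pick a front end
        and (not cfg["themes"] or cfg["gui"])   # "themes" requires "gui"
    )

configs = []
for bits in product([False, True], repeat=len(FEATURES)):
    cfg = dict(zip(FEATURES, bits))
    if valid(cfg):
        configs.append(cfg)

# Prints: 5 valid configurations out of 16 raw combinations.
# Any sampling or testing strategy must draw from this constrained
# space, which is exactly what a variability model makes explicit.
print(len(configs), "valid configurations out of", 2 ** len(FEATURES))
```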