    Process Algebras

    Process Algebras are mathematically rigorous languages with well-defined semantics that permit describing and verifying properties of concurrent communicating systems. They can be seen as models of processes, regarded as agents that act and interact continuously with other similar agents and with their common environment. The agents may be real-world objects (even people), or they may be artifacts, embodied perhaps in computer hardware or software systems. Many different approaches (operational, denotational, algebraic) are taken to describing the meaning of processes; however, the operational approach is the reference one. By relying on the so-called Structural Operational Semantics (SOS), labelled transition systems are built and composed using the different operators of the many different process algebras. Behavioral equivalences are used to abstract from unwanted details and to identify those systems that react similarly to external experiments.
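
    To make the SOS recipe concrete, here is a minimal sketch (illustrative, not from the paper; the Nil, Prefix, and Par constructors and the transitions function are hypothetical names) that derives the labelled transition system of a tiny CCS-like fragment with action prefixing and interleaving parallel composition:

        # Hypothetical illustration: an LTS derived from SOS-style rules.
        from dataclasses import dataclass

        class Proc: pass

        @dataclass(frozen=True)
        class Nil(Proc): pass                      # inaction: no transitions

        @dataclass(frozen=True)
        class Prefix(Proc):                        # a.P  --a-->  P
            action: str
            cont: Proc

        @dataclass(frozen=True)
        class Par(Proc):                           # P | Q, interleaving semantics
            left: Proc
            right: Proc

        def transitions(p):
            """Yield (label, successor) pairs according to the SOS rules."""
            if isinstance(p, Prefix):              # (ACT)   a.P --a--> P
                yield p.action, p.cont
            elif isinstance(p, Par):
                for a, l2 in transitions(p.left):  # (PAR-L) P --a--> P' implies P|Q --a--> P'|Q
                    yield a, Par(l2, p.right)
                for a, r2 in transitions(p.right): # (PAR-R) symmetric rule
                    yield a, Par(p.left, r2)

        # The LTS of a.0 | b.0 has two interleaved executions: ab and ba.
        print(list(transitions(Par(Prefix("a", Nil()), Prefix("b", Nil())))))

    Each clause of transitions corresponds to one SOS inference rule; behavioral equivalences such as bisimulation would then compare the transition systems this function generates.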

    Kahn Process Networks and a Reactive Extension

    Kahn and MacQueen introduced a generic class of determinate asynchronous data-flow applications, called Kahn Process Networks (KPNs), with an elegant mathematical model and semantics in terms of Scott-continuous functions on data streams, together with an implementation model of independent asynchronous sequential programs communicating through FIFO buffers with blocking read and non-blocking write operations. The two are related by the Kahn Principle, which states that a realization according to the implementation model behaves as predicted by the mathematical function. Additional steps are required to arrive at an actual implementation of a KPN, taking care of scheduling independent processes on a single processor and managing communication buffers. Because of the expressiveness of the KPN model, buffer sizes and schedules cannot in general be determined at design time and require dynamic run-time system support. Constraints are discussed that need to be placed on such system support so as to maintain the Kahn Principle. We then discuss a possible extension of the KPN model that adds the possibility of sporadic, reactive behavior, which is not possible in the standard model. The extended model is called Reactive Process Networks. We introduce its semantics, and look at its analyzability and at more constrained data-flow models combined with reactive behavior.
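
    A minimal sketch of the implementation model, assuming Python's queue.Queue as the FIFO (all process names are illustrative): an unbounded queue gives blocking reads and non-blocking writes, and by the Kahn Principle the network below computes the stream function x -> 2x regardless of how the threads are scheduled:

        # Hypothetical sketch: a two-stage Kahn process network.
        import threading, queue

        def producer(out_fifo, n):
            for i in range(n):
                out_fifo.put(i)                 # non-blocking write (unbounded FIFO)

        def doubler(in_fifo, out_fifo, n):
            for _ in range(n):
                x = in_fifo.get()               # blocking read
                out_fifo.put(2 * x)

        def consumer(in_fifo, n, result):
            for _ in range(n):
                result.append(in_fifo.get())    # blocking read

        a, b, result, N = queue.Queue(), queue.Queue(), [], 5
        threads = [threading.Thread(target=producer, args=(a, N)),
                   threading.Thread(target=doubler, args=(a, b, N)),
                   threading.Thread(target=consumer, args=(b, N, result))]
        for t in threads: t.start()
        for t in threads: t.join()
        print(result)                           # [0, 2, 4, 6, 8] -- determinate output

    Note that a real run-time must bound these FIFOs and schedule the processes; as the abstract points out, doing so without breaking the Kahn Principle is exactly the delicate part.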

    Dynamic re-optimization techniques for stream processing engines and object stores

    Large-scale data storage and processing systems are strongly motivated by the need to store and analyze massive datasets. The complexity of a large class of these systems is rooted in their distributed nature, extreme scale, need for real-time response, and streaming nature. The use of these systems in multi-tenant cloud environments with potential resource interference necessitates fine-grained monitoring and control. In this dissertation, we present efficient, dynamic techniques for re-optimizing stream-processing systems and transactional object-storage systems. In the context of stream-processing systems, we present VAYU, a per-topology controller. VAYU uses novel methods and protocols for dynamic, network-aware tuple-routing in the dataflow. We show that the feedback-driven controller in VAYU helps achieve high pipeline throughput over long execution periods, as it dynamically detects and diagnoses any pipeline bottlenecks. We present novel heuristics to optimize overlays for group communication operations in the streaming model. In the context of object-storage systems, we present M-Lock, a novel lock-localization service for distributed transaction protocols on scale-out object stores that increases transaction throughput. Lock localization refers to the dynamic migration and partitioning of locks across nodes in the scale-out store to reduce cross-partition acquisition of locks. The service leverages observed object-access patterns to achieve lock clustering and deliver high performance. We also present TransMR, a framework that uses distributed, transactional object stores to orchestrate and execute asynchronous components in amorphous data-parallel applications on scale-out architectures.
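
    The following toy sketch (hypothetical; M-Lock's actual service and protocols are more involved) illustrates the lock-localization idea: record which node acquires each lock, then periodically migrate each lock to its most frequent acquirer so later transactions avoid cross-partition acquisitions:

        # Hypothetical sketch of lock localization; names are illustrative.
        from collections import Counter, defaultdict

        lock_home = {}                           # lock id -> node currently hosting it
        acquisitions = defaultdict(Counter)      # lock id -> Counter of acquiring nodes

        def record_acquire(lock_id, node):
            acquisitions[lock_id][node] += 1
            lock_home.setdefault(lock_id, node)  # first acquirer hosts the lock initially

        def relocalize():
            """Periodically migrate every lock to its most frequent acquirer."""
            for lock_id, by_node in acquisitions.items():
                hot_node, _ = by_node.most_common(1)[0]
                if lock_home[lock_id] != hot_node:
                    lock_home[lock_id] = hot_node    # migration clusters co-accessed locks

        for node, lock in [("n1", "x"), ("n2", "x"), ("n2", "x"), ("n1", "y")]:
            record_acquire(lock, node)
        relocalize()
        print(lock_home)                         # {'x': 'n2', 'y': 'n1'}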

    Runtime Enforcement for Component-Based Systems

    Runtime enforcement is an increasingly popular and effective dynamic validation technique aiming to ensure the correct runtime behavior (w.r.t. a formal specification) of systems using a so-called enforcement monitor. In this paper, we introduce runtime enforcement of specifications on component-based systems (CBS) modeled in the BIP (Behavior, Interaction and Priority) framework. BIP is a powerful and expressive component-based framework for the formal construction of heterogeneous systems. However, because of BIP's expressiveness, it remains difficult to enforce complex behavioral properties at design time. First, we propose a theoretical runtime enforcement framework for CBS where we delineate a hierarchy of sets of enforceable properties (i.e., properties that can be enforced) according to the number of observational steps a system is allowed to deviate from the property (i.e., the notion of k-step enforceability). To ensure observational equivalence between the correct executions of the initial system and the monitored system, we show that (i) only stutter-invariant properties should be enforced on CBS with our monitors, and (ii) safety properties are 1-step enforceable. Given an abstract enforcement monitor (as a finite-state machine) for some 1-step enforceable specification, we formally instrument (at relevant locations) a given BIP system to integrate the monitor. At runtime, the monitor observes and automatically avoids any error in the behavior of the system w.r.t. the specification. Our approach is fully implemented in an available tool that we used to (i) avoid deadlock occurrences on a dining philosophers benchmark, and (ii) ensure the correct placement of robots on a map.
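
    As a toy illustration of 1-step enforcement of a safety property (hypothetical; the paper's monitors are synthesized from specifications and woven into BIP systems), the finite-state monitor below suppresses any single step that would violate the property "no send after close":

        # Hypothetical sketch of an enforcement monitor as a finite-state machine.
        class EnforcementMonitor:
            def __init__(self):
                self.closed = False              # two states: open / closed

            def filter(self, event):
                """Return True iff the observed event may be executed."""
                if event == "close":
                    self.closed = True
                    return True
                if event == "send" and self.closed:
                    return False                 # suppress the one violating step
                return True

        m = EnforcementMonitor()
        trace = ["send", "close", "send", "send"]
        executed = [e for e in trace if m.filter(e)]
        print(executed)                          # ['send', 'close'] -- violation avoided

    Safety properties fit this scheme because a violation is caused by a single bad step, which the monitor can block while letting the rest of the execution proceed.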

    Quantifying the Parallelism in BPMN Processes using Model Checking

    A business process is a set of structured, related activities that aims at fulfilling a specific organizational goal for a customer or market. An important metric when developing a business process is its degree of parallelism, i.e., the maximum number of tasks that are executable in parallel in that process. The degree of parallelism determines the peak demand on tasks, providing a valuable guide for the problem of resource allocation in business processes. In this paper, we investigate how to automatically measure the degree of parallelism for business processes described using the BPMN standard notation. We first present a formal model for BPMN processes in terms of Labelled Transition Systems, which are obtained through process algebra encodings. We then propose an approach for automatically computing the degree of parallelism using model checking techniques and dichotomic search. We implemented a tool automating this check and applied it successfully to more than one hundred BPMN processes.
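
    The dichotomic search can be sketched as follows (illustrative; check_at_most stands in for a model-checking call verifying "never more than k tasks run in parallel"): binary-search the smallest k for which the property holds, which is exactly the degree of parallelism:

        # Hypothetical sketch of the dichotomic search over a model-checking oracle.
        def degree_of_parallelism(check_at_most, max_tasks):
            lo, hi = 0, max_tasks               # invariant: the answer lies in [lo, hi]
            while lo < hi:
                mid = (lo + hi) // 2
                if check_at_most(mid):          # property holds: degree <= mid
                    hi = mid
                else:                           # counterexample found: degree > mid
                    lo = mid + 1
            return lo

        # Toy oracle standing in for the model checker: a process whose true
        # degree of parallelism is 3.
        print(degree_of_parallelism(lambda k: k >= 3, max_tasks=10))   # 3

    Each oracle call is one model-checking run, so the search needs only logarithmically many runs in the number of tasks.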

    Foundations of Modular SOS

    A novel form of labelled transition system is proposed, where the labels are the arrows of a category, and adjacent labels in computations are required to be composable. Such transition systems provide the foundations for modular SOS descriptions of programming languages. Three fundamental ways of transforming label categories, analogous to monad transformers, are provided, and it is shown that their applications preserve computations in modular SOS. The approach is illustrated with fragments taken from a modular SOS for ML concurrency primitives.
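
    A minimal sketch of the central idea (hypothetical names; the paper works with arbitrary label categories): each label carries a source and a target piece of auxiliary information, such as a store, and a sequence of labels counts as a computation only if adjacent labels compose:

        # Hypothetical sketch: labels as arrows, composable along matching endpoints.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Label:
            dom: object                          # auxiliary state before the step
            cod: object                          # auxiliary state after the step

        def composable(trace):
            """A list of labels is a computation iff adjacent arrows compose."""
            return all(a.cod == b.dom for a, b in zip(trace, trace[1:]))

        # store {x:0} --write x:=1--> {x:1} --read x--> {x:1}
        ok  = [Label({"x": 0}, {"x": 1}), Label({"x": 1}, {"x": 1})]
        bad = [Label({"x": 0}, {"x": 1}), Label({"x": 7}, {"x": 7})]
        print(composable(ok), composable(bad))   # True False

    The composability requirement is what threads state consistently through a computation, which is why new semantic features can be added by transforming the label category without rewriting existing rules.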

    Application-Layer Connector Synthesis

    The heterogeneity characterizing the systems populating the Ubiquitous Computing environment prevents their seamless interoperability. Heterogeneous protocols may be willing to cooperate in order to reach some common goal even though they meet dynamically and have no a priori knowledge of each other. Despite numerous efforts in the literature, automated, run-time interoperability is still an open challenge for such an environment. We consider interoperability as the ability of two Networked Systems (NSs) to communicate and correctly coordinate to achieve their goal(s). In this chapter, we report the main outcomes of our past and recent research on automatically achieving protocol interoperability via connector synthesis. We consider application-layer connectors by referring to two conceptually distinct notions of connector: coordinator and mediator. The former is used when the NSs to be connected are already able to communicate but need to be specifically coordinated in order to reach their goal(s). The latter goes a step further, representing a solution for both achieving correct coordination and enabling communication between highly heterogeneous NSs. In the past, most of the work in the literature addressed the automatic synthesis of coordinators, while in recent years the focus has also moved to the automatic synthesis of mediators. Within the Connect project, taking our past experience on automatic coordinator synthesis as a baseline, we propose a formal theory of mediators and a related method for automatically eliciting a way for the protocols to interoperate. The solution we propose is the automated synthesis of emerging mediating connectors (mediators for short).
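
    A toy sketch of what a mediator does (all message names are illustrative, not from the Connect project): it enables communication by splitting coarse-grained actions and renaming messages so that two protocols with different vocabularies and granularities can coordinate:

        # Hypothetical sketch of a mediator between two message vocabularies.
        SPLIT = {"bookTrip": ["bookFlight", "bookHotel"]}   # 1-to-many granularity mapping
        RENAME = {"bookFlight": "reserveFlight", "bookHotel": "reserveRoom"}

        def mediate(client_messages):
            """Translate the client protocol into the service protocol."""
            out = []
            for msg in client_messages:
                for part in SPLIT.get(msg, [msg]):      # split coarse actions
                    out.append(RENAME.get(part, part))  # align vocabularies
            return out

        print(mediate(["bookTrip", "pay"]))
        # ['reserveFlight', 'reserveRoom', 'pay']

    A coordinator, by contrast, would leave the messages untouched and only constrain their ordering, since the connected systems can already understand each other.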

    Synthesis of behavioral models from scenarios
