
    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
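
    To make the explicit half of this contrast concrete, here is a minimal, hedged sketch of explicit state-space exploration: a breadth-first search over the successor relation of a toy two-counter model. The model (transitions t1 and t2 over counters N and M) is invented for illustration and is not taken from the paper; symbolic approaches such as Saturation instead encode the reachable set and the transition relation with decision diagrams rather than enumerating states one by one.

```python
# Minimal sketch of explicit state-space exploration: breadth-first
# reachability from an initial state. The two-counter model below
# (counters N and M with transitions t1 and t2) is hypothetical.
from collections import deque

def successors(state):
    """Enumerate the successor states of the toy two-counter model."""
    n, m = state
    succs = []
    if n > 0:                      # t1: decrement N, increment M
        succs.append((n - 1, m + 1))
    if m > 1:                      # t2: consume two units of M
        succs.append((n, m - 2))
    return succs

def explore(initial):
    """Return the set of states reachable from `initial`."""
    reachable = {initial}
    frontier = deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    return reachable

states = explore((3, 0))
# Queries such as "is there a dead state?" or "can N become negative?"
# reduce to simple checks over the computed set:
print(any(not successors(s) for s in states))   # dead state exists?
print(any(n < 0 for n, _ in states))            # can N go negative?
```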

    Fault tolerant architectures for integrated aircraft electronics systems, task 2

    The architectural basis for an advanced fault-tolerant on-board computer to succeed the current generation of fault-tolerant computers is examined. The network error tolerant system architecture is studied with particular attention to intercluster configurations and communication protocols, and to refined reliability estimates. The diagnosis of faults, so that appropriate choices for reconfiguration can be made, is discussed. The analysis relates particularly to the recognition of transient faults in a system with tasks at many levels of priority. The demand-driven data-flow architecture, which appears to have possible application in fault-tolerant systems, is described, and work investigating the feasibility of automatic generation of aircraft flight control programs from abstract specifications is reported.

    Service discovery and negotiation with COWS

    To provide formal foundations to current (web) services technologies, we put forward using COWS, a process calculus for specifying, combining and analysing services, as a uniform formalism for modelling all the relevant phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, deployment and execution. In this paper, we show that constraints and operations on them can be smoothly incorporated in COWS, and we propose a disciplined way to model multisets of constraints and to manipulate them through appropriate interaction protocols. We thus demonstrate that QoS requirement specifications and SLA achievements, as well as the phases of dynamic service discovery and negotiation, can also be comfortably modelled in COWS. We illustrate our approach through a scenario for a service-based web hosting provider.
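
    To give a flavour of the constraint-handling idea in a general-purpose language (this is not COWS syntax, and the attribute names and bounds are invented for the example), the sketch below keeps a multiset of simple QoS constraints and checks a provider's concrete offer against it, roughly in the spirit of a constraint store consulted during negotiation.

```python
# Hedged illustration (not COWS): a multiset of numeric constraints over
# named QoS attributes, with "tell", "retract" and "check" operations.
from collections import Counter
import operator

OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq}

class ConstraintStore:
    def __init__(self):
        # Multiset: the same constraint may be told more than once.
        self.constraints = Counter()

    def tell(self, attribute, op, bound):
        self.constraints[(attribute, op, bound)] += 1

    def retract(self, attribute, op, bound):
        key = (attribute, op, bound)
        if self.constraints[key] > 0:
            self.constraints[key] -= 1

    def check(self, offer):
        """Does a concrete offer satisfy every constraint told so far?"""
        return all(
            attr in offer and OPS[op](offer[attr], bound)
            for (attr, op, bound), count in self.constraints.items()
            if count > 0
        )

# Client requirements versus a provider's offer for web hosting
# (attribute names and thresholds are purely illustrative).
store = ConstraintStore()
store.tell("availability", ">=", 0.99)
store.tell("cost_per_month", "<=", 50)
print(store.check({"availability": 0.995, "cost_per_month": 45}))  # True
print(store.check({"availability": 0.95, "cost_per_month": 45}))   # False
```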

    On Modelling and Analysis of Dynamic Reconfiguration of Dependable Real-Time Systems

    This paper motivates the need for a formalism for the modelling and analysis of dynamic reconfiguration of dependable real-time systems. We present requirements that the formalism must meet, and use these to evaluate well-established formalisms and two process algebras that we have been developing, namely Webpi and CCSdp. A simple case study is developed to illustrate the modelling power of these two formalisms. The paper shows how Webpi and CCSdp represent a significant step forward in modelling adaptive and dependable real-time systems.
    Comment: Presented and published at DEPEND 201

    UniFlexView: a unified framework for consistent construction of BPMN and BPEL process views

    Process view technologies allow organizations to create abstractions of their business processes at different levels of granularity, thereby enabling more effective business process management, analysis, interoperation, and privacy control. Existing research has proposed view construction and abstraction techniques for block-based (i.e., BPEL) and graph-based (i.e., BPMN) process models. However, existing techniques treat the two types of models separately. In particular, this makes it challenging to achieve a consistent process view for a BPEL model that is derived from a BPMN model. In this paper, we propose a unified framework, namely UniFlexView, for supporting automatic and consistent process view construction. With our framework, process modelers can use our proposed View Definition Language to specify their view construction requirements regardless of the type of process model. A system prototype of UniFlexView has been developed as a proof of concept and a demonstration of the usability and feasibility of our framework. © 2019 John Wiley & Sons, Ltd.
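
    The core operation behind such views, collapsing a group of activities into a single abstract activity regardless of whether the underlying model is graph-based (BPMN) or block-based (BPEL), can be sketched as follows. The data structures and the `aggregate` operation below are illustrative stand-ins, not the paper's View Definition Language.

```python
# Hedged sketch of process view construction: aggregate a group of
# activities into one abstract activity, rewiring the control flow.
from dataclasses import dataclass, field

@dataclass
class Process:
    activities: set
    flows: set = field(default_factory=set)    # set of (source, target) pairs

def aggregate(process, group, abstract_name):
    """Build a process view in which `group` appears as one activity."""
    def rename(a):
        return abstract_name if a in group else a

    activities = {rename(a) for a in process.activities}
    flows = {
        (rename(src), rename(dst))
        for src, dst in process.flows
        if rename(src) != rename(dst)          # drop flows internal to the group
    }
    return Process(activities, flows)

order = Process(
    activities={"Receive order", "Check stock", "Charge card", "Ship"},
    flows={("Receive order", "Check stock"),
           ("Check stock", "Charge card"),
           ("Charge card", "Ship")},
)
# A coarser, privacy-preserving view that hides the payment details:
view = aggregate(order, {"Check stock", "Charge card"}, "Handle payment")
print(view.flows)
# {('Receive order', 'Handle payment'), ('Handle payment', 'Ship')}
```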