
    A survey of Petri nets slicing

    Petri net slicing is a technique that aims to improve the verification of systems modeled as Petri nets. It was first developed to facilitate debugging and was later used to alleviate the state space explosion problem in the model checking of Petri nets. In this article, different slicing techniques and their algorithms are studied, introducing: i) a classification of Petri net slicing algorithms based on their construction methodology and objective (such as improving state space analysis or testing); ii) a qualitative and quantitative discussion and comparison of their major differences, such as accuracy and efficiency; iii) a syntactic unification of the slicing algorithms that improve state space analysis, for easy and clear understanding; and iv) applications of slicing from multiple perspectives. Furthermore, some recent improvements to slicing algorithms are presented, which can reduce the slice size even for strongly connected nets. A noteworthy use of this survey is the selection and improvement of slicing techniques for optimizing the verification of state event models.
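
    To make the slicing idea concrete, here is a minimal backward-slicing sketch over a place/transition net. It is written for this summary rather than taken from any algorithm in the survey; the dict-based net encoding and the toy net are assumptions of the example.

    ```python
    # Minimal backward-slicing sketch for a place/transition net (illustrative only;
    # not any particular algorithm from the survey). A net is given as a dict that
    # maps each transition name to its (pre_places, post_places) pair.

    def backward_slice(net, criterion_places):
        """Return the places and transitions that may feed tokens into the criterion."""
        places = set(criterion_places)
        transitions = set()
        changed = True
        while changed:
            changed = False
            for t, (pre, post) in net.items():
                # A transition is relevant if it can put tokens into a relevant place.
                if t not in transitions and places & set(post):
                    transitions.add(t)
                    places |= set(pre)   # its input places become relevant too
                    changed = True
        return places, transitions


    # Hypothetical toy net: t1 moves a token p1 -> p2, t2 moves p2 -> p3.
    toy_net = {"t1": ({"p1"}, {"p2"}), "t2": ({"p2"}, {"p3"})}
    print(backward_slice(toy_net, {"p3"}))   # both transitions and all three places
    ```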

    Maximal and minimal dynamic Petri net slicing

    Context: Petri net slicing is a technique to reduce the size of a Petri net so as to ease the analysis or understanding of the original net. Objective: To present two new Petri net slicing algorithms that isolate the places and transitions of a Petri net (the slice) that may contribute tokens to one or more given places (the slicing criterion). Method: The two proposed algorithms are formalized. The completeness of the first algorithm and the minimality of the second are formally proven. Both algorithms, together with three other state-of-the-art algorithms, have been implemented and integrated into a single tool, so that a fair empirical evaluation could be carried out. Results: Besides the two new Petri net slicing algorithms, a public, free, and open-source implementation of five algorithms is reported. The results of an empirical evaluation of the new algorithms and the slices that they produce are also presented. Conclusions: The first algorithm collects all places and transitions that may influence (in any computation) the slicing criterion, while the second algorithm collects a minimal set of places and transitions that may influence (in some computation) the slicing criterion. Therefore, the net computed by the first algorithm can reproduce any computation that contributes tokens to any place of interest. In contrast, the second algorithm loses this possibility but often produces a much smaller subnet (which can still reproduce some computations that contribute tokens to some places of interest). The first algorithm is proven complete, and the second is proven minimal.
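
    The difference between the two kinds of slice can be illustrated with a rough sketch of the "minimal" intuition: rather than keeping every transition that may mark a relevant place, keep a single producer per place, trading completeness for size. The toy dict-based representation below is an assumption of the example, not the authors' formalization.

    ```python
    # Rough sketch of the intuition behind a *minimal* slice: instead of keeping
    # every transition that may mark a relevant place, keep a single producing
    # transition per place, so only some computations are preserved.
    # Toy representation (transition -> (pre_places, post_places)); not the paper's algorithm.

    def minimal_slice(net, criterion_places):
        places, transitions = set(criterion_places), set()
        worklist = list(criterion_places)
        while worklist:
            p = worklist.pop()
            # Pick just one transition that can put a token into p (if any).
            producers = [t for t, (_, post) in net.items() if p in post]
            if producers and producers[0] not in transitions:
                t = producers[0]
                transitions.add(t)
                for q in net[t][0]:          # its input places become relevant
                    if q not in places:
                        places.add(q)
                        worklist.append(q)
        return places, transitions


    # Hypothetical toy net with two producers of p3; only one is kept in the slice.
    toy_net = {"t1": ({"p1"}, {"p3"}), "t2": ({"p2"}, {"p3"})}
    print(minimal_slice(toy_net, {"p3"}))   # keeps only one of the two producers of p3
    ```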

    Parallel symbolic state-space exploration is difficult, but what is the alternative?

    State-space exploration is an essential step in many modeling and analysis problems. Its goal is to find the states reachable from the initial state of a discrete-state model. The state space can be used to answer important questions, e.g., "Is there a dead state?" and "Can N become negative?", or as a starting point for more sophisticated investigations expressed in temporal logic. Unfortunately, the state space is often so large that ordinary explicit data structures and sequential algorithms cannot cope, prompting the exploration of (1) parallel approaches using multiple processors, from simple workstation networks to shared-memory supercomputers, to satisfy the large memory and runtime requirements, and (2) symbolic approaches using decision diagrams to encode the large structured sets and relations manipulated during state-space generation. Both approaches have merits and limitations. Parallel explicit state-space generation is challenging, but almost linear speedup can be achieved; however, the analysis is ultimately limited by the memory and processors available. Symbolic methods are a heuristic that can efficiently encode many, but not all, functions over a structured and exponentially large domain; here the pitfalls are subtler: their performance varies widely depending on the class of decision diagram chosen, the state variable order, and obscure algorithmic parameters. As symbolic approaches are often much more efficient than explicit ones for many practical models, we argue for the need to parallelize symbolic state-space generation algorithms, so that we can realize the advantages of both approaches. This is a challenging endeavor, as the most efficient symbolic algorithm, Saturation, is inherently sequential. We conclude by discussing challenges, efforts, and promising directions toward this goal.
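
    As a point of reference for the explicit side of this comparison, the following is a minimal sketch of breadth-first explicit state-space exploration; the `next_states` successor function and the toy counter model are assumptions of the example, and the decision-diagram-based symbolic algorithms (including Saturation) are not sketched here.

    ```python
    from collections import deque

    # Minimal explicit breadth-first state-space exploration (illustrative sketch).
    # `initial` is a hashable state and `next_states(s)` yields its successors;
    # both are assumptions of this example, not an API from the paper.

    def explore(initial, next_states):
        seen = {initial}
        frontier = deque([initial])
        while frontier:
            s = frontier.popleft()
            for t in next_states(s):
                if t not in seen:       # explicit dedup: each reachable state stored once
                    seen.add(t)
                    frontier.append(t)
        return seen                     # the reachable state space


    # Toy model: a counter 0..3 that can increment (saturating) or reset to 0.
    succ = lambda n: [min(n + 1, 3), 0]
    states = explore(0, succ)
    print(len(states), any(not succ(s) for s in states))   # 4 reachable states, no dead state
    ```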

    Program Dependence Net and On-demand Slicing for Property Verification of Concurrent System and Software

    When checking concurrent software using a finite-state model, we face a formidable state explosion problem. One solution is dependence-based program slicing, which can effectively reduce verification time and is orthogonal to other model-checking reduction techniques. However, when slicing concurrent programs for model checking, conversions between multiple irreplaceable models are required, and dependencies must be computed even for variables irrelevant to the verified property, which results in redundant computation. To resolve this issue, we propose a Program Dependence Net (PDNet) based on Petri net theory. It is a unified model that combines a control-flow structure with dependencies to avoid such conversions. For reduction, we present a PDNet slicing method that captures the relevant variables' dependencies when needed. PDNet and its on-demand slicing can significantly reduce the computation cost of verifying linear temporal logic properties. We implement a model-checking tool based on PDNet and its on-demand slicing, and validate the advantages of our proposed methods. (17 pages, 3 figures)
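
    The dependence-based reduction that motivates PDNet can be sketched, at a very high level, as a backward walk over a dependence relation from the criterion imposed by the checked property; the adjacency-dict encoding below is an illustrative assumption, not PDNet's actual combined control-flow/dependence structure.

    ```python
    # Sketch of the dependence-based reduction idea: starting from the statements
    # mentioned by the property (the slicing criterion), walk dependence edges
    # backwards and keep only what is reached. The adjacency-dict representation
    # is an assumption of this example, not PDNet's actual structure.

    def dependence_slice(deps, criterion):
        """deps maps each node to the nodes it depends on (control or data)."""
        keep, stack = set(criterion), list(criterion)
        while stack:
            n = stack.pop()
            for m in deps.get(n, ()):   # follow dependencies on demand
                if m not in keep:
                    keep.add(m)
                    stack.append(m)
        return keep                      # the model checker only needs these nodes


    # Hypothetical dependencies: the checked assertion (node "assert") depends on "x",
    # which depends on "init"; node "log" is irrelevant and is sliced away.
    deps = {"assert": {"x"}, "x": {"init"}, "log": {"init"}}
    print(dependence_slice(deps, {"assert"}))   # {'assert', 'x', 'init'}
    ```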

    Towards a Layered Architectural View for Security Analysis in SCADA Systems

    Supervisory Control and Data Acquisition (SCADA) systems support and control the operation of many critical infrastructures that our society depends on, such as power grids. Since SCADA systems have become a target for cyber attacks, and the potential impact of a successful attack could lead to disastrous consequences in the physical world, ensuring the security of these systems is of vital importance. A fundamental prerequisite to securing a SCADA system is a clear understanding and a consistent view of its architecture. However, because of the complexity and scale of SCADA systems, this is challenging to acquire. In this paper, we propose a layered architectural view for SCADA systems, which aims at building a common ground among stakeholders and supporting the implementation of security analysis. In order to manage the complexity and scale, we define four interrelated architectural layers and use the concept of viewpoints to focus on a subset of the system. We indicate the applicability of our approach in the context of SCADA system security analysis. (7 pages, 4 figures)

    Analysis Techniques for Concurrent Programming Languages

    Concurrent languages are increasingly present in our society, both in new technologies and in the systems used on a daily basis. Moreover, given the current distribution of systems and their internal architecture, one can expect this to remain so in the coming years. In this context, the development of tools to support the implementation of concurrent programs becomes essential. Furthermore, the behavior of concurrent systems is particularly difficult to analyse, so any tool that helps in this task, even in a limited way, will be very useful. For example, one can find tools for the debugging, analysis, testing, optimisation, or simplification of programs, which are widely used by programmers nowadays. The purpose of this thesis is to introduce, through various concurrent programming languages, analysis techniques that can help to improve the experience of software development and release for concurrent models. The thesis introduces both static analyses (approximating all possible executions) and dynamic analyses (considering a specific execution).
    The topics considered here differ enough from each other to be fully independent; nevertheless, they have a common link: they can all be used to analyse properties of a concurrent programming language. All the analyses presented have been formally defined and their correctness has been proven, ensuring that the results have the degree of reliability needed by some systems (for instance, critical systems). The thesis also includes a description of the software tools that implement the different ideas proposed. This gives the work a usefulness well beyond the theoretical aspect, allowing us to put it into practice and to test the different analyses with real-world examples. All the ideas presented here are, by themselves, approaches that can be applied to many current contexts and problems. Moreover, they individually serve as starting points for other derived analyses, as well as for adaptation to other languages of the same family. This gives an added value to this work, a fact confirmed by later works that are already benefiting from the results obtained in this thesis.
    Tamarit Muñoz, S. (2013). Analysis Techniques for Concurrent Programming Languages [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/31651

    Assessing the Risk of an Adaptation using Prior Compliance Verification

    Autonomous systems must respond to large amounts of streaming information. They must also comply with critical properties to maintain behavior guarantees. Compliance is especially important when a system self-adapts to perform a repair, improve performance, or modify decisions. There remain significant challenges in assessing the risk, with respect to critical property compliance, of adaptations that are dynamically configured at runtime. Assuming compliance verification was performed for the originally deployed system, the proof process holds valuable meta-data about the variables and conditions that affect reusing the proof on the adapted system. We express this meta-data as a verification workflow using Colored Petri Nets. As dynamic adaptations are configured, the Petri nets produce alert tokens indicating the potential proof-reuse impact of an adaptation. Alert tokens hold risk values for use in a utility function that determines the least risky adaptations. We illustrate the modeling and risk assessment using a case study.
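
    One way to picture the last step is a small utility computation over the alert tokens produced for each candidate adaptation; the token fields, the additive aggregation, and the adaptation names below are hypothetical illustrations rather than the paper's actual CPN-based model.

    ```python
    # Minimal sketch of choosing the least risky adaptation from alert tokens.
    # The token fields and the additive utility are hypothetical illustrations,
    # not the paper's Colored Petri Net model.

    def total_risk(alert_tokens):
        # Each alert token is assumed to carry a numeric risk value for one
        # impacted proof obligation; aggregate by summing.
        return sum(token["risk"] for token in alert_tokens)

    def least_risky(adaptations):
        """adaptations maps an adaptation name to the alert tokens its CPN run produced."""
        return min(adaptations, key=lambda a: total_risk(adaptations[a]))


    # Hypothetical output of the verification-workflow nets for two candidate adaptations.
    candidates = {
        "reroute_sensor":  [{"risk": 0.2}, {"risk": 0.1}],
        "restart_service": [{"risk": 0.7}],
    }
    print(least_risky(candidates))   # "reroute_sensor"
    ```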