    Dependability Analysis of Control Systems using SystemC and Statistical Model Checking

    Stochastic Petri nets are commonly used for modeling distributed systems in order to study their performance and dependability. This paper proposes a realization of stochastic Petri nets in SystemC for modeling large embedded control systems. Statistical model checking is then used to analyze the dependability of the constructed model. Our verification framework allows users to express a wide range of useful properties to be verified, as illustrated through a case study.
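
    A minimal sketch of the kind of model such a framework builds on: a stochastic Petri net whose transitions fire after exponentially distributed delays (race policy). It is written in Python purely for illustration rather than SystemC, and the net structure and rates below are hypothetical, not taken from the paper.

```python
import random

# Hypothetical stochastic Petri net: places hold token counts, transitions
# fire after exponentially distributed delays (race policy). Structure and
# rates are illustrative only, not taken from the paper.
places = {"idle": 1, "busy": 0, "failed": 0}
transitions = [
    # (name, input places, output places, firing rate)
    ("start",  ["idle"], ["busy"],   2.0),
    ("finish", ["busy"], ["idle"],   1.0),
    ("fail",   ["busy"], ["failed"], 0.1),
]

def enabled(transition):
    """A transition is enabled when every input place holds a token."""
    _, inputs, _, _ = transition
    return all(places[p] > 0 for p in inputs)

def step(clock):
    """Sample a delay for each enabled transition and fire the earliest one."""
    candidates = [(random.expovariate(rate), name, inputs, outputs)
                  for name, inputs, outputs, rate in transitions
                  if enabled((name, inputs, outputs, rate))]
    if not candidates:
        return clock, None                      # deadlock: nothing enabled
    delay, name, inputs, outputs = min(candidates)
    for p in inputs:
        places[p] -= 1                          # consume input tokens
    for p in outputs:
        places[p] += 1                          # produce output tokens
    return clock + delay, name

# One simulation run: the time until the 'failed' place is marked. Statistical
# model checking would repeat such runs and apply a statistical test to the
# dependability property of interest.
t, fired = 0.0, ""
while places["failed"] == 0 and fired is not None:
    t, fired = step(t)
print("time to failure in this run:", t)
```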

    Reliability models for dataflow computer systems

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock-freedom in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures, including data flow computers.
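
    As a rough illustration of the firing rule underlying such data flow graph models (not the paper's formal model), the sketch below executes a tiny, hypothetical graph in which a node fires only when every one of its input arcs carries a token.

```python
from collections import deque

# Hypothetical data flow graph: a node fires only when every input arc carries
# a token, consuming those tokens and placing one token on each output arc.
# Purely illustrative; not the formal model developed in the paper.
arcs = {            # arc name -> number of tokens currently on the arc
    "a": 1, "b": 1, "c": 0, "d": 0,
}
nodes = {           # node name -> (input arcs, output arcs)
    "add": (["a", "b"], ["c"]),
    "neg": (["c"], ["d"]),
}

def fireable(node):
    inputs, _ = nodes[node]
    return all(arcs[a] > 0 for a in inputs)

def fire(node):
    inputs, outputs = nodes[node]
    for a in inputs:
        arcs[a] -= 1       # consume one token per input arc
    for a in outputs:
        arcs[a] += 1       # produce one token per output arc

# Execute nodes asynchronously until no node is fireable (quiescence or deadlock).
work = deque(nodes)
while work:
    node = work.popleft()
    if fireable(node):
        fire(node)
        work.extend(nodes)  # re-check all nodes after a state change
print(arcs)                 # {'a': 0, 'b': 0, 'c': 0, 'd': 1}
```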

    An Evaluation Framework for Comparative Analysis of Generalized Stochastic Petri Net Simulation Techniques

    Availability of a common, shared benchmark that provides repeatable, quantifiable, and comparable results is an added value for any scientific community. International consortia provide benchmarks in a wide range of domains, which are normally used by industry, vendors, and researchers for evaluating their software products. In this regard, a benchmark of untimed Petri net models was developed to be used in a yearly software competition driven by the Petri net community. However, to the best of our knowledge there is no similar benchmark to evaluate solution techniques for Petri nets with timing extensions. In this paper, we propose an evaluation framework for the comparative analysis of generalized stochastic Petri net (GSPN) simulation techniques. Although we focus on simulation techniques, our framework provides a baseline for a comparative analysis of different GSPN solvers (e.g., simulators, numerical solvers, or other techniques). The evaluation framework encompasses a set of 50 GSPN models, including test cases and case studies from the literature, and a set of evaluation guidelines for the comparative analysis. To show the applicability of the proposed framework, we carry out a comparative analysis of steady-state simulators implemented in three academic software tools, namely GreatSPN, PeabraiN, and TimeNET. The results allow us to validate the trustworthiness of these academic software tools, as well as to point out potential problems and algorithmic optimization opportunities.
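
    A hedged sketch of the kind of comparison such a framework supports: given a reference steady-state measure (e.g., from a numerical solver) and the estimates returned by several simulators, compute relative errors and check whether the reference falls inside each confidence interval. The tool labels and figures below are placeholders, not results reported in the paper.

```python
# Illustrative comparison of steady-state throughput estimates against a
# reference value; the labels and numbers are placeholders, not measurements
# reported in the paper.
reference_throughput = 0.4231            # e.g., from a numerical solver

estimates = {
    # tool -> (point estimate, half-width of 95% confidence interval)
    "simulator_A": (0.4198, 0.0051),
    "simulator_B": (0.4267, 0.0042),
    "simulator_C": (0.4019, 0.0036),
}

for tool, (value, half_width) in estimates.items():
    rel_err = abs(value - reference_throughput) / reference_throughput
    covered = abs(value - reference_throughput) <= half_width
    print(f"{tool}: estimate={value:.4f}  "
          f"relative error={rel_err:.2%}  "
          f"reference inside 95% CI: {covered}")
```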

    Exploring the concept of interaction computing through the discrete algebraic analysis of the Belousov–Zhabotinsky reaction

    Interaction computing (IC) aims to map the properties of integrable low-dimensional non-linear dynamical systems to the discrete domain of finite-state automata, in an attempt to reproduce in software the self-organizing and dynamically stable properties of sub-cellular biochemical systems. As the work reported in this paper is still at the early stages of theory development, it focuses on the analysis of a particularly simple chemical oscillator, the Belousov-Zhabotinsky (BZ) reaction. After retracing the rationale for IC developed over the past several years from the physical, biological, mathematical, and computer science points of view, the paper presents an elementary discussion of the Krohn-Rhodes decomposition of finite-state automata, including the holonomy decomposition of a simple automaton and its interpretation as an abstract positional number system. The method is then applied to the analysis of the algebraic properties of discrete finite-state automata derived from a simplified Petri net model of the BZ reaction. In the simplest, symmetrical case the corresponding automaton is, not surprisingly, found to contain exclusively cyclic groups. In a second, asymmetrical case, the decomposition is much more complex and includes five different simple non-abelian groups, whose potential relevance arises from their ability to encode functionally complete algebras. The possible computational relevance of these findings is discussed and conclusions are drawn.
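
    The algebraic machinery behind Krohn-Rhodes analysis views each input symbol of an automaton as a transformation of its state set; the closure of these transformations under composition is the automaton's characteristic semigroup, which the decomposition then breaks into simpler components. The sketch below enumerates that semigroup for a tiny, hypothetical three-state automaton (not the BZ-derived automata studied in the paper).

```python
# Hypothetical 3-state automaton: each input symbol acts as a transformation of
# the state set {0, 1, 2}, written as a tuple t where t[s] is the next state.
# This is not the BZ-derived automaton analysed in the paper.
generators = {
    "a": (1, 2, 0),   # cyclic permutation of the states
    "b": (0, 0, 2),   # a non-invertible ("collapsing") map
}

def compose(t, u):
    """Apply t first, then u (action written on the right)."""
    return tuple(u[t[s]] for s in range(len(t)))

# Enumerate the transformation semigroup generated by the input symbols.
semigroup = set(generators.values())
frontier = set(semigroup)
while frontier:
    new = {compose(t, g) for t in frontier for g in generators.values()}
    frontier = new - semigroup
    semigroup |= frontier

print("semigroup size:", len(semigroup))
permutations = [t for t in semigroup if len(set(t)) == len(t)]
print("invertible elements (the group part):", len(permutations))
```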

    List of requirements on formalisms and selection of appropriate tools

    This deliverable reports on the activities for the set-up of the modelling environments for the evaluation activities of WP5. To this objective, it reports on the identified modelling peculiarities of the electric power infrastructure and the information infrastructures and of their interdependencies, recalls the tools that have been considered, and concentrates on the tools that are, and will be, used in the project: DrawNET, DEEM and EPSys, which have been developed before and during the project by the partners, and Möbius and PRISM, developed respectively at the University of Illinois at Urbana-Champaign and at the University of Birmingham (and recently at the University of Oxford).

    A systematic approach for performance assessment using process mining. An industrial experience report

    Software performance engineering is a mature field that offers methods to assess system performance. Process mining is a promising research field applied to gain insight into system processes. The interplay of these two fields opens promising applications in industry. In this work, we report our experience applying a methodology, based on process mining techniques, for the performance assessment of a commercial data-intensive software application. The methodology has successfully assessed the scalability of future versions of this system. Moreover, it has identified bottleneck components and replication needs for fulfilling business rules. The system, an integrated port operations management system, has been developed by Prodevelop, a medium-sized software enterprise with high expertise in geospatial technologies. The performance assessment has been carried out by a team composed of practitioners and researchers. Finally, the paper offers an in-depth discussion of the lessons learned during the experience, which will be useful for practitioners adopting the methodology and for researchers looking for new research directions.
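
    As a rough, hypothetical illustration of the kind of input such process mining techniques start from (not the paper's actual method or data): an event log with start/complete timestamps per activity can be reduced to per-activity mean service times, which can then parameterize a performance model.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event log (case id, activity, lifecycle, timestamp); neither the
# format nor the figures come from the paper -- purely illustrative.
log = [
    ("c1", "register vessel", "start",    "2024-01-01 08:00:00"),
    ("c1", "register vessel", "complete", "2024-01-01 08:12:00"),
    ("c1", "assign berth",    "start",    "2024-01-01 08:15:00"),
    ("c1", "assign berth",    "complete", "2024-01-01 08:40:00"),
    ("c2", "register vessel", "start",    "2024-01-01 09:00:00"),
    ("c2", "register vessel", "complete", "2024-01-01 09:08:00"),
]

def ts(stamp):
    return datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")

# Pair each start event with its matching complete event per (case, activity)
# and collect the observed durations.
open_events = {}
durations = defaultdict(list)
for case, activity, lifecycle, stamp in log:
    if lifecycle == "start":
        open_events[(case, activity)] = ts(stamp)
    else:
        started = open_events.pop((case, activity))
        durations[activity].append((ts(stamp) - started).total_seconds())

for activity, samples in durations.items():
    print(f"{activity}: mean service time {sum(samples) / len(samples):.0f} s "
          f"over {len(samples)} executions")
```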

    Performance Prediction of Cloud-Based Big Data Applications

    Big data analytics have become widespread as a means to extract knowledge from large datasets. Yet, the heterogeneity and irregularity usually associated with big data applications often overwhelm the existing software and hardware infrastructures. In such a context, the flexibility and elasticity provided by the cloud computing paradigm offer a natural approach to cost-effectively adapting the allocated resources to the application's current needs. However, these same characteristics impose extra challenges on predicting the performance of cloud-based big data applications, a key step for proper management and planning. This paper explores three modeling approaches for performance prediction of cloud-based big data applications. We evaluate two queuing-based analytical models and a novel fast ad hoc simulator in various scenarios based on different applications and infrastructure setups. The three approaches are compared in terms of prediction accuracy, finding that our best approaches can predict average application execution times with 26% relative error in the very worst case and about 7% on average.
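
    A hedged sketch of the flavour of queueing-based prediction being compared, here an M/M/1 approximation of a single processing stage with made-up parameters rather than the paper's actual models, together with the relative-error metric used to judge prediction accuracy.

```python
# Illustrative M/M/1 approximation of a single processing stage and the
# relative-error metric used to judge a prediction; arrival/service rates and
# the "measured" time are made up, not taken from the paper.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

predicted = mm1_response_time(arrival_rate=8.0, service_rate=10.0)   # 0.5 s
measured = 0.47                                                      # hypothetical

relative_error = abs(predicted - measured) / measured
print(f"predicted {predicted:.2f} s, measured {measured:.2f} s, "
      f"relative error {relative_error:.1%}")
```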