33 research outputs found

    Using the probabilistic evaluation tool for the analytical solution of large Markov models

    Stochastic Petri net-based Markov modeling is a potentially very powerful and generic approach for evaluating the performance and dependability of many different systems, such as computer systems, communication networks and manufacturing systems. As a consequence of their general applicability, SPN-based Markov models form the basic solution approach for several software packages that have been developed for the analytic solution of performance and dependability models. In these tools, stochastic Petri nets are used to conveniently specify complicated models, after which an automatic mapping is carried out to an underlying Markov reward model. This Markov reward model is then solved by specialized solution algorithms, appropriately selected for the measure of interest. One of the major aspects that hampers the use of SPN-based Markov models for the analytic solution of performance and dependability measures is the size of the state space. Although models of up to a few hundred thousand states can typically be solved conveniently on modern workstations, even larger models are often required to represent all the desired detail of the system. Our tool PET (probabilistic evaluation tool) circumvents the problems of large state spaces when the desired performance and dependability measures are transient. It does so by an approach named probabilistic evaluation.
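    As a rough illustration of the SPN-to-Markov mapping described above, the following Python sketch explores the reachability graph of a small stochastic Petri net with exponentially timed transitions and collects the transition rates of the underlying CTMC. The net description, the function names and the two-transition failure/repair example are assumptions made for illustration; they are not part of the PET tool.

```python
# Minimal sketch: mapping a stochastic Petri net with exponentially timed
# transitions onto its underlying CTMC by exploring the reachability graph.
# The SPN encoding and function names are illustrative, not taken from PET.
from collections import deque

def enabled(marking, transition):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    """Return the marking reached by firing the transition."""
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return tuple(sorted(m.items()))

def build_ctmc(initial_marking, transitions):
    """Breadth-first search over markings; returns a state index and rate list."""
    start = tuple(sorted(initial_marking.items()))
    index = {start: 0}
    rates = []                      # entries (from_state, to_state, rate)
    queue = deque([start])
    while queue:
        marking = queue.popleft()
        m = dict(marking)
        for t in transitions:
            if enabled(m, t):
                nxt = fire(m, t)
                if nxt not in index:
                    index[nxt] = len(index)
                    queue.append(nxt)
                rates.append((index[marking], index[nxt], t["rate"]))
    return index, rates

# Tiny example: a single machine that fails and is repaired.
transitions = [
    {"in": {"up": 1}, "out": {"down": 1}, "rate": 0.001},   # failure
    {"in": {"down": 1}, "out": {"up": 1}, "rate": 0.1},     # repair
]
states, rate_list = build_ctmc({"up": 1, "down": 0}, transitions)
print(len(states), "states;", rate_list)
```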

    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and they are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
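    The following Python sketch illustrates the basic idea of importance sampling with simple failure biasing on the embedded Markov chain of a small repairable system: failure transitions are made artificially likely, and each sample is reweighted by the likelihood ratio so that the estimator remains unbiased. The model (n identical components, one repair facility) and all parameter values are assumptions for illustration, not taken from the paper.

```python
# Minimal sketch of importance sampling with simple failure biasing for a
# Markov model of a system with n redundant components (failure rate lam per
# component, one repair facility with rate mu). We estimate the probability
# that, after a first failure, all components fail before the system returns
# to the fully operational state -- a rare event when lam << mu.
import random

def one_sample(n, lam, mu, bias=0.5, use_is=True):
    k, weight = 1, 1.0                 # start just after the first failure
    while 0 < k < n:
        p_fail = (n - k) * lam / ((n - k) * lam + mu)    # true DTMC probability
        q_fail = bias if use_is else p_fail              # biased probability
        if random.random() < q_fail:
            weight *= p_fail / q_fail                    # likelihood ratio
            k += 1
        else:
            weight *= (1 - p_fail) / (1 - q_fail)
            k -= 1
    return weight if k == n else 0.0

def estimate(n, lam, mu, samples=100_000, use_is=True):
    total = sum(one_sample(n, lam, mu, use_is=use_is) for _ in range(samples))
    return total / samples

random.seed(1)
print("importance sampling:", estimate(4, 1e-3, 1.0))
print("crude simulation:   ", estimate(4, 1e-3, 1.0, use_is=False))
```

    With these toy parameters the crude estimator almost never observes the rare event, while the biased estimator recovers a stable value from the same number of samples.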

    Service-level availability estimation of GPRS


    Performance Analysis of a Consensus Algorithm Combining Stochastic Activity Networks and Measurements

    A. Coccoli, P. Urban, A. Bondavalli, and A. Schiper. Performance analysis of a consensus algorithm combining Stochastic Activity Networks and measurements. In Proc. Int'l Conf. on Dependable Systems and Networks (DSN), pages 551-560, Washington, DC, USA, June 2002. Protocols which solve agreement problems are essential building blocks for fault-tolerant distributed applications. While many protocols have been published, little has been done to analyze their performance. This paper represents a starting point for such studies by focusing on the consensus problem, a problem related to most other agreement problems. The paper analyzes the latency of a consensus algorithm designed for the asynchronous model with failure detectors, by combining experiments on a cluster of PCs and simulation using Stochastic Activity Networks. We evaluated the latency in runs (1) with neither failures nor failure suspicions, (2) with failures but no wrong suspicions, and (3) with no failures but with (wrong) failure suspicions. We validated the adequacy and usability of the Stochastic Activity Network model by comparing experimental results with those obtained from the model. This led us to identify limitations of the model and the measurements, and it suggests new directions for evaluating the performance of agreement protocols. Keywords: quantitative analysis, distributed consensus, failure detectors, Stochastic Activity Networks, measurements.
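    As a toy illustration of latency evaluation in a failure-free run, the sketch below estimates, by Monte Carlo simulation, the time until a coordinator has collected a majority of replies under exponentially distributed message delays. This is not the algorithm or the Stochastic Activity Network model analyzed in the paper; the number of processes and the delay distribution are assumptions.

```python
# Toy Monte Carlo sketch of consensus latency in a failure-free run: a
# coordinator broadcasts a proposal, every process replies, and the decision
# is taken once a majority of replies has arrived. Delays are exponential.
# All parameters are assumptions, not measurements from the paper.
import random
import statistics

def round_latency(n_processes=7, mean_delay_ms=0.1):
    # round trip to process i: proposal delay plus reply delay
    round_trips = sorted(
        random.expovariate(1 / mean_delay_ms) + random.expovariate(1 / mean_delay_ms)
        for _ in range(n_processes)
    )
    majority = n_processes // 2 + 1
    return round_trips[majority - 1]       # time until the majority-th reply

samples = [round_latency() for _ in range(10_000)]
print("mean latency (ms):", statistics.mean(samples))
print("95th percentile  :", statistics.quantiles(samples, n=20)[18])
```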

    An object-oriented database for the compilation of signal transduction pathways

    Transpath is an information system on signal transduction networks. It focuses on pathways and cascades involved in the regulation of transcription factors. Molecules and reactions are the nodes of a signaling graph; they are stored in an object-oriented database, together with information about their location, quality, family relationships and signaling motifs. Links to other databases and references to the original literature are stored as well. Transpath distinguishes between the states of a signal molecule and can adequately describe the reaction mechanisms of signaling interactions. It is available over the World Wide Web (http://transpath.gbf.de) through a servlet-based interface that generates dynamic views directly from the contents of the database. Pathway queries and several kinds of visualization are provided in addition to text-based queries and detailed information on single entries. We show that the database is useful for analyzing the signaling network and can provide the basis for simulations.
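    The sketch below shows, in Python, one possible object model in the spirit of the abstract: molecules and reactions as nodes of a signaling graph, with molecule states, locations and literature references. The class and field names are illustrative assumptions, not the actual Transpath schema.

```python
# Illustrative object model: molecules and reactions as nodes of a signaling
# graph, with molecule states, cellular location and literature references.
# Class and field names are assumptions, not the Transpath schema.
from dataclasses import dataclass, field

@dataclass
class Molecule:
    name: str
    state: str = "inactive"          # e.g. "phosphorylated", "inactive"
    location: str = "cytoplasm"
    references: list[str] = field(default_factory=list)

@dataclass
class Reaction:
    inputs: list[Molecule]
    outputs: list[Molecule]
    mechanism: str                   # e.g. "phosphorylation", "binding"

@dataclass
class SignalingGraph:
    reactions: list[Reaction] = field(default_factory=list)

    def downstream_of(self, molecule: Molecule) -> list[Molecule]:
        """Molecules produced by reactions that consume the given molecule."""
        return [m for r in self.reactions if molecule in r.inputs for m in r.outputs]

# Tiny example: a kinase activating a transcription factor.
kinase = Molecule("MAPK", state="active")
tf_off = Molecule("TF", state="inactive", location="cytoplasm")
tf_on = Molecule("TF", state="phosphorylated", location="nucleus")
graph = SignalingGraph([Reaction([kinase, tf_off], [tf_on], "phosphorylation")])
print([m.name + "/" + m.state for m in graph.downstream_of(kinase)])
```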

    Architecture-Based Reliability Analysis of Web Services

    In a Service Oriented Architecture (SOA), the hierarchical complexity of Web Services (WS) and their interactions with the underlying Application Server (AS) create new challenges in providing a realistic estimate of WS performance and reliability. Current approaches often treat the entire WS environment as a black box, so the sensitivity of the overall reliability and performance to the behavior of the underlying WS architectures and AS components is not well understood. In other words, current research on the architecture-based analysis of WSs is limited. This dissertation presents a novel methodology for modeling the reliability and performance of web services. WSs are treated as atomic entities, but the AS is broken down into layers; more specifically, the interactions of WSs with the underlying layers of an AS are investigated. One important feature of the research is investigating the impact of dynamic parameters that exist at these layers, such as configuration parameters, which may degrade WS performance if they are not configured properly. The WSs are developed in house, and the AS considered is JBoss AS. An experimental environment is set up so that controlled service requests can be generated and important performance metrics can be recorded under various configurations of the AS. In parallel, a simulation model is developed from the source code and run-time behavior of the existing WS and AS implementations. The model mimics the logical behavior of the WSs based on their communication with the AS layers, and the simulation results are compared to the experimental results to ensure the correctness of the model. The architecture of the simulation model, which is based on Stochastic Petri Nets (SPN), is modularized in accordance with the layers and their interactions. As web services are often executed in a complex and distributed environment, the modularized approach enables a user or a designer to observe and investigate the performance of the entire system under various conditions; in contrast, most approaches to WS analysis are monolithic in that the entire system is treated as a closed box. The results show that 1) the simulation model can be a viable tool for measuring the performance and reliability of WSs under different loads and conditions, which may be of great interest to WS designers and practitioners; 2) configuration parameters have a large impact on the overall performance; 3) the simulation model can be tuned to account for various speeds in terms of communication, hardware, and software; 4) because the simulation model is modularized, it may be used as a foundation for aggregating or nullifying modules (layers), or it can be extended to include other aspects of the WS architecture, such as network characteristics and the hardware/operating system on which the AS and WSs execute; and 5) the simulation model is useful for predicting the performance of web services in cases that are difficult to replicate in a field study.
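    To make the modular, layer-by-layer idea concrete, the following sketch models an application server as two configurable layers (a connector thread pool and a container) traversed by each request, using the SimPy discrete-event library as a stand-in for the SPN-based simulation used in the dissertation. Layer names, pool sizes and service times are assumptions for illustration.

```python
# Illustrative layered application-server model: each request occupies a
# connector thread, then a container slot, before its response is recorded.
# This is a queueing stand-in for the SPN-based model; all layer names,
# service times and pool sizes are assumptions.
import random
import statistics
import simpy

RESULTS = []

def request(env, connector, container):
    arrival = env.now
    with connector.request() as conn:       # connector layer (thread pool)
        yield conn
        yield env.timeout(random.expovariate(1 / 0.002))    # parsing/dispatch
        with container.request() as slot:   # container / service layer
            yield slot
            yield env.timeout(random.expovariate(1 / 0.010))  # service logic
    RESULTS.append(env.now - arrival)

def workload(env, connector, container, rate=50.0):
    while True:
        yield env.timeout(random.expovariate(rate))   # Poisson arrivals
        env.process(request(env, connector, container))

env = simpy.Environment()
connector = simpy.Resource(env, capacity=8)    # configurable pool size
container = simpy.Resource(env, capacity=4)
env.process(workload(env, connector, container))
env.run(until=60)                              # simulate 60 seconds
print("requests:", len(RESULTS), "mean response time:", statistics.mean(RESULTS))
```

    Changing the two capacities and rerunning mimics the kind of configuration-parameter study the dissertation describes.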

    Modeling and simulation of molecular biology processes based on Petri nets: a literature review

    Petri nets are a discrete-event simulation technique developed for the representation of systems, and in particular of their concurrency and synchronization properties. Various extensions of the original theory, namely stochastic, colored, hybrid and functional extensions, have been used to model molecular biology processes and metabolic networks. This document provides a first review of the different approaches that have been employed and of the biological systems that have been modeled with them. In addition, the application context and the modeling objectives of each approach are discussed.
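    As an example of the stochastic extension mentioned in the review, the sketch below interprets a small Petri net with Gillespie's stochastic simulation algorithm: places hold molecule counts and timed transitions fire with mass-action propensities. The reversible phosphorylation network and its rates are assumptions chosen for illustration, not taken from the review.

```python
# Minimal sketch of a stochastic Petri net interpreted with Gillespie's
# stochastic simulation algorithm: places hold token counts (molecule copy
# numbers) and transitions fire with mass-action propensities. The example
# network and rates are assumptions for illustration.
import math
import random

def gillespie(marking, transitions, t_end):
    t, trajectory = 0.0, [(0.0, dict(marking))]
    while t < t_end:
        props = []
        for tr in transitions:
            a = tr["rate"]
            for place, n in tr["in"].items():
                a *= math.comb(marking[place], n)     # mass-action propensity
            props.append(a)
        total = sum(props)
        if total == 0:
            break                                     # no transition enabled
        t += random.expovariate(total)                # time to next firing
        r, chosen = random.uniform(0, total), 0
        while r > props[chosen]:                      # pick transition by weight
            r -= props[chosen]
            chosen += 1
        for place, n in transitions[chosen]["in"].items():
            marking[place] -= n
        for place, n in transitions[chosen]["out"].items():
            marking[place] += n
        trajectory.append((t, dict(marking)))
    return trajectory

transitions = [
    {"in": {"S": 1, "K": 1}, "out": {"Sp": 1, "K": 1}, "rate": 0.01},  # phosphorylation
    {"in": {"Sp": 1}, "out": {"S": 1}, "rate": 0.1},                   # dephosphorylation
]
traj = gillespie({"S": 100, "Sp": 0, "K": 5}, transitions, t_end=50.0)
print("final state:", traj[-1])
```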

    Availability modeling and evaluation on high performance cluster computing systems

    Cluster computing has been attracting more and more attention from both industry and academia for its enormous computing power, cost effectiveness, and scalability. A Beowulf-type cluster, for example, is a typical High Performance Computing (HPC) cluster system. Availability, as a key attribute of such systems, needs to be considered at the system design stage and monitored at mission time. Moreover, system monitoring is essential to help identify defects and ensure that the system's availability requirement is met. In this study, novel solutions that provide availability modeling, model evaluation, and data analysis as a single framework have been investigated. The three key components of the investigation are availability modeling, model evaluation, and data analysis. General availability concepts and modeling techniques are briefly reviewed. The system's availability model is divided into submodels based on their functionalities. Furthermore, an object-oriented Markov model specification has been developed to facilitate availability modeling and runtime configuration. Numerical solutions for Markov models are examined, with emphasis on the uniformization method. Alternative implementations of the method are discussed, in particular the cost of an alternative solution for small state-space models and different ways of solving large, sparse Markov models. The dissertation also presents a monitoring and data analysis framework, which is responsible for failure analysis and availability reconfiguration. In addition, event logs provided by the Lawrence Livermore National Laboratory have been studied and used to validate the proposed techniques.
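    The uniformization method mentioned above can be sketched in a few lines: with a uniformization rate Lambda at least as large as the fastest exit rate, the transient state probabilities are a Poisson-weighted sum of powers of the uniformized DTMC, pi(t) = sum_k e^(-Lambda*t) (Lambda*t)^k / k! * pi(0) P^k with P = I + Q/Lambda. The two-state availability model below is an illustrative assumption, not a model from the dissertation.

```python
# Minimal sketch of uniformization for the transient solution of a CTMC:
# pi(t) = sum_k Poisson(Lambda*t; k) * pi(0) * P^k, with P = I + Q/Lambda.
# The two-state up/down model and its rates are assumptions for illustration.
import numpy as np

def transient_probabilities(Q, pi0, t, tol=1e-12, max_terms=100_000):
    Lam = max(-Q.diagonal())              # uniformization rate
    P = np.eye(Q.shape[0]) + Q / Lam      # uniformized DTMC
    v = np.array(pi0, dtype=float)        # holds pi(0) * P^k
    weight = np.exp(-Lam * t)             # Poisson(Lam*t; 0)
    result = weight * v
    k, accumulated = 0, weight
    while 1.0 - accumulated > tol and k < max_terms:
        k += 1
        weight *= Lam * t / k             # Poisson(Lam*t; k)
        v = v @ P
        result += weight * v
        accumulated += weight             # stop once Poisson mass is spent
    return result

# Two-state availability model: state 0 = up, state 1 = down.
Q = np.array([[-0.001, 0.001],
              [ 0.100, -0.100]])
print("pi(24h) =", transient_probabilities(Q, [1.0, 0.0], t=24.0))
```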