
    Efficient Parallel Statistical Model Checking of Biochemical Networks

    We consider the problem of verifying stochastic models of biochemical networks against behavioral properties expressed in temporal logic. Exact probabilistic verification approaches, such as CSL/PCTL model checking, are undermined by a huge computational demand which rules them out for most real case studies. Less demanding approaches, such as statistical model checking, estimate the likelihood that a property is satisfied by sampling executions of the stochastic model. We propose a methodology for efficiently estimating the likelihood that an LTL property P holds for a stochastic model of a biochemical network. As with other statistical verification techniques, the methodology we propose uses a stochastic simulation algorithm to generate execution samples; however, three key aspects improve its efficiency. First, sample generation is driven by on-the-fly verification of P, which minimizes overall simulation time. Second, the confidence interval for the probability that P holds is estimated with an efficient variant of the Wilson method, which ensures faster convergence. Third, the whole methodology is designed in a parallel fashion, and a prototype software tool has been implemented that performs the sampling/verification process in parallel on an HPC architecture.
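
    To make the confidence-interval step concrete, here is a minimal sketch in Python of Wilson-interval-based sample-size control, assuming a Bernoulli oracle `sample()` that runs one simulation and reports whether P held; the names `sample`, `estimate_probability`, the batch size, and the stopping width are illustrative assumptions, not details taken from the paper.

    ```python
    import math
    import random

    def wilson_interval(successes, n, z=1.96):
        """Wilson score interval for a Bernoulli proportion (95% by default)."""
        p_hat = successes / n
        denom = 1 + z * z / n
        center = (p_hat + z * z / (2 * n)) / denom
        half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
        return center - half, center + half

    def estimate_probability(sample, max_width=0.02, batch=100, max_samples=100_000):
        """Draw simulation samples until the Wilson interval is narrow enough."""
        successes = n = 0
        while n < max_samples:
            for _ in range(batch):
                successes += sample()   # 1 if the LTL property held on this run
                n += 1
            lo, hi = wilson_interval(successes, n)
            if hi - lo <= max_width:
                break
        return successes / n, (lo, hi)

    # Toy oracle standing in for "simulate the model, check P on the fly".
    p_true = 0.3
    prob, (lo, hi) = estimate_probability(lambda: random.random() < p_true)
    print(f"P(property) ~ {prob:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    ```

    Batches of samples map naturally onto the parallel architecture the abstract describes: each worker can draw a batch independently and a coordinator merges the counts before recomputing the interval.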

    WS-Pro: a Petri net based performance-driven service composition framework

    As an emerging area gaining prevalence in industry, Web Services was established to satisfy the need for better flexibility and higher reliability in web applications. However, due to the lack of reliable frameworks and the difficulty of constructing a versatile service composition platform, web developers encountered major obstacles in the large-scale deployment of web services. Meanwhile, performance has been one of the major concerns and a largely unexplored area in Web Services research. There is high demand for researchers to conceive and develop feasible solutions to design, monitor, and deploy web service systems that can adapt to failures, especially performance failures. Though many techniques have been proposed to solve this problem, none of them offers a comprehensive solution to overcome the difficulties that challenge practitioners. Central to performance-engineering studies, performance analysis and performance adaptation are of paramount importance to the success of a software project. The industry learned through many hard lessons the significance of well-founded and well-executed performance engineering plans. An important fact is that it is too expensive to tackle performance evaluation, mostly through performance testing, after the software is developed. This is especially true in recent decades, as software complexity has risen sharply. After a system is deployed, performance adaptation is essential to maintaining and improving software system reliability. Performance adaptation provides techniques to mitigate the consequences of performance failures and is therefore an important research issue, particularly for mission-critical software systems and for systems with inevitable, frequent performance failures, such as Web Services. This dissertation focuses on the Web Services framework and proposes a performance-driven service composition scheme, called WS-Pro, to support both performance analysis and performance adaptation. A formal transformation from WS-BPEL to Petri nets is first defined to enable the analysis of system properties and facilitate quality prediction. A state-transition-based proof shows that the transformed Petri net model correctly simulates the behavior of the WS-BPEL process. The generated Petri net model is augmented with performance data supplied by both historical and runtime data. Results of executing the Petri nets suggest that optimal composition plans can be achieved with the proposed method. The performance of the service composition procedure itself is an important research issue that has not been sufficiently treated, yet it is critical for dynamic service composition, where re-planning must be done in a timely manner. To improve the performance of the service composition procedure and enhance performance adaptation, this dissertation presents an algorithm that removes loops from the reachability graphs, so that a large portion of the computation can be moved to a pre-processing stage, shortening response time at runtime. We also extended WS-Pro to the ubiquitous computing domain to improve fault tolerance.
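
    The core object behind the WS-BPEL transformation is an ordinary place/transition Petri net. The sketch below shows, in Python, the kind of mapping the abstract describes: a BPEL <sequence> of two <invoke> activities becomes a chain of places and transitions, and executing the net simulates the process. The class, the place names, and the mapping itself are illustrative assumptions; they do not reproduce the dissertation's formal transformation rules.

    ```python
    class PetriNet:
        def __init__(self, places, transitions):
            self.marking = dict(places)        # place -> token count
            self.transitions = transitions     # name -> (input places, output places)

        def enabled(self, t):
            ins, _ = self.transitions[t]
            return all(self.marking[p] > 0 for p in ins)

        def fire(self, t):
            ins, outs = self.transitions[t]
            assert self.enabled(t), f"{t} is not enabled"
            for p in ins:
                self.marking[p] -= 1           # consume input tokens
            for p in outs:
                self.marking[p] += 1           # produce output tokens

    # sequence(invokeA, invokeB): one token flows start -> afterA -> done
    net = PetriNet(
        places={"start": 1, "afterA": 0, "done": 0},
        transitions={
            "invokeA": (["start"], ["afterA"]),
            "invokeB": (["afterA"], ["done"]),
        },
    )
    for t in ("invokeA", "invokeB"):
        net.fire(t)
    print(net.marking)   # {'start': 0, 'afterA': 0, 'done': 1}
    ```

    Reachability-graph analysis of the kind the abstract mentions enumerates the markings such a net can reach; removing loops from that graph ahead of time is what lets the composition planner answer quickly at runtime.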

    Hybrid Multiresolution Simulation & Model Checking: Network-On-Chip Systems

    Designers employ a variety of modeling theories and methodologies to create functional models of discrete network systems. These dynamical models are evaluated using verification and validation techniques throughout incremental design stages. Models created for these systems should directly represent their growing complexity with respect to composition and heterogeneity. As in software engineering practice, incremental model design is required for complex system design, so models at early increments are significantly simpler than the real systems. While experiments (verification or validation) on models at early increments are computationally less demanding, their results are less trustworthy and less rewarding. At any increment of design, a set of tools and techniques is required for controlling the complexity of models and experimentation. A complex system such as a Network-on-Chip (NoC) may benefit from incremental design stages. Current design methods for NoC rely on multiple models developed in various modeling frameworks, so it is useful to develop frameworks that formalize the relationships among these models: fine-grain models are derived from their coarse-grain counterparts, and validation and verification at various design stages is enabled through disciplined model conversion. In this research, Multiresolution Modeling (MRM) is used for system-level design of NoC. MRM aids in creating a family of models at different levels of scale and complexity with well-formed relationships. In addition, a variant of the Discrete Event System Specification (DEVS) formalism is proposed which supports model checking. Hierarchical models of Network-on-Chip components may be created at different resolutions, while each model can be validated using discrete-event simulation and verified via state exploration. System property expressions are defined in the DEVS language and developed as Transducers, which can be applied seamlessly for both model checking and simulation. The multiresolution modeling and the verification and validation capabilities of this framework complement one another: MRM manages the scale and complexity of models, which in turn reduces V&V time and effort, and conversely V&V helps ensure the correctness of models at multiple resolutions. The framework is realized by extending the DEVS-Suite simulator, and its applicability is demonstrated for exemplar NoC models.
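
    For readers unfamiliar with DEVS, the following is a minimal sketch in Python of an atomic DEVS model paired with a transducer-style property observer, in the spirit of the framework above. The class layout, method names, and the "busy eventually returns to idle" property are illustrative assumptions and do not reproduce the DEVS-Suite API or the dissertation's Transducer language.

    ```python
    class Processor:
        """Atomic DEVS model: goes busy for a fixed service time per job."""
        def __init__(self, service_time=2.0):
            self.service_time = service_time
            self.phase, self.sigma = "idle", float("inf")  # state + time advance

        def ext_transition(self, elapsed, job):   # delta_ext: a job arrives
            if self.phase == "idle":
                self.phase, self.sigma = "busy", self.service_time

        def int_transition(self):                 # delta_int: service completes
            self.phase, self.sigma = "idle", float("inf")

        def output(self):                         # lambda: emitted before delta_int
            return "done"

    class Transducer:
        """Observes the state trajectory and checks a property over it."""
        def __init__(self):
            self.trace = []

        def observe(self, time, phase):
            self.trace.append((time, phase))

        def holds(self):
            # Safety-style check on the sampled trajectory: no trailing 'busy'.
            return not self.trace or self.trace[-1][1] == "idle"

    proc, trd = Processor(), Transducer()
    t = 0.0
    proc.ext_transition(0.0, "job1"); trd.observe(t, proc.phase)
    t += proc.sigma                               # advance to the internal event
    print(proc.output()); proc.int_transition(); trd.observe(t, proc.phase)
    print("property holds:", trd.holds())
    ```

    The same observer can be driven either by a simulation run (validation) or by systematic state exploration (verification), which is the reuse the abstract highlights.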

    Efficient Model Checking: The Power of Randomness
