
    Scalable Performance Analysis of Massively Parallel Stochastic Systems

    The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state-space of the underlying Markovian performance model. This is particularly true when considering massively-parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for providers of these services. Indeed, without such an ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs. In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities such as means and variances by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds. We focus on extracting passage-time measures from performance models since these are very often the terms in which a service level agreement is phrased. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations and in terms of individual or tagged members of a large population. A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which explicitly exposes the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle applicable problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
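    As a rough illustration of the ODE-based approach described in this abstract (not the thesis's own grouped-PEPA construction), the sketch below integrates mean-field equations for a hypothetical client/server population; the component names, rates and population sizes are invented, and the size of the ODE system stays fixed as the populations grow.

```python
# Hedged sketch: mean-field ODEs for a hypothetical client/server model.
# Clients cycle think -> request -> think; servers cycle idle -> busy -> idle.
# The shared 'request' action proceeds at the minimum of the two apparent rates
# (a PEPA-style bounded-capacity interpretation). All numbers are made up.

r_think, r_req, r_serve = 1.0, 2.0, 5.0
n_clients, n_servers = 1000, 50          # ODE size does not grow with these
dt, t_end = 0.001, 10.0

# State: means of (clients thinking, clients requesting, servers idle, servers busy)
ct, cr, si, sb = float(n_clients), 0.0, float(n_servers), 0.0

t = 0.0
while t < t_end:
    shared = min(r_req * cr, r_req * si)   # synchronised request action
    d_ct = -r_think * ct + shared
    d_cr = r_think * ct - shared
    d_si = -shared + r_serve * sb
    d_sb = shared - r_serve * sb
    ct, cr, si, sb = ct + dt * d_ct, cr + dt * d_cr, si + dt * d_si, sb + dt * d_sb
    t += dt

print(f"steady-state estimate: thinking={ct:.1f}, requesting={cr:.1f}, busy servers={sb:.1f}")
```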

    Continuization of Timed Petri Nets: From Performance Evaluation to Observation and Control

    State explosion is a fundamental problem in the analysis and synthesis of discrete event systems. Continuous Petri nets can be seen as a relaxation of discrete models allowing more efficient (in some cases polynomial-time) analysis and synthesis algorithms. Nevertheless, the reduction in computational cost may come at the expense of the analyzability of some properties, and some net systems do not admit any kind of continuization. The present work first considers these aspects and some of the alternative formalisms usable for continuous relaxations of discrete systems. Particular emphasis is then placed on results concerning performance evaluation, parametric design, and marking (i.e., state) observation and control. Although a significant number of results are available today for continuous net systems, many essential issues remain unsolved; a list of some of these is given in the introduction as an invitation to work on them.
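    To fix ideas about the continuous relaxation discussed above, the following sketch (not taken from the paper) simulates a small continuous Petri net under infinite-server semantics, where each transition fires at a rate proportional to its most constraining scaled input marking; the net structure and rates are invented for illustration.

```python
import numpy as np

# Hedged sketch: a tiny continuous Petri net under infinite-server semantics.
# Transition j fires at rate lam[j] * min_{p in inputs(j)} m[p] / Pre[p, j],
# and the marking evolves as dm/dt = C @ f(m) with C = Post - Pre.
# The net below (a two-place cycle) and its rates are purely illustrative.

Pre = np.array([[1, 0],     # place 0 feeds transition 0
                [0, 1]])    # place 1 feeds transition 1
Post = np.array([[0, 1],    # transition 1 returns fluid to place 0
                 [1, 0]])   # transition 0 puts fluid in place 1
C = Post - Pre
lam = np.array([2.0, 1.0])  # firing rates
m = np.array([5.0, 0.0])    # initial (continuous) marking

dt = 0.001
for _ in range(int(10.0 / dt)):
    # enabling degree of each transition under infinite-server semantics
    enab = np.array([min(m[p] / Pre[p, j] for p in range(len(m)) if Pre[p, j] > 0)
                     for j in range(len(lam))])
    m = m + dt * C @ (lam * enab)

print("marking after 10 time units:", np.round(m, 3))
```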

    Methodologies synthesis

    This deliverable deals with the modelling and analysis of interdependencies between critical infrastructures, focussing attention on two interdependent infrastructures studied in the context of CRUTIAL: the electric power infrastructure and the information infrastructures supporting management, control and maintenance functionality. The main objectives are: 1) to investigate the main challenges to be addressed for the analysis and modelling of interdependencies, 2) to review the modelling methodologies and tools that can be used to address these challenges and support the evaluation of the impact of interdependencies on the dependability and resilience of the service delivered to users, and 3) to present the preliminary directions investigated so far by the CRUTIAL consortium for describing and modelling interdependencies.

    On Minimum-time Control of Continuous Petri nets: Centralized and Decentralized Perspectives

    Many artificial systems, such as manufacturing, logistics, telecommunication, or traffic systems, can naturally be viewed as Discrete Event Dynamic Systems (DEDS). Unfortunately, when their populations are large, these systems may suffer from the classical state explosion problem. To avoid this problem, fluidification techniques can be applied, yielding a fluid relaxation of the original discrete model. Continuous Petri nets (CPNs) are a fluid approximation of discrete Petri nets, a well-known formalism for DEDS. A key advantage of using CPNs is that they often lead to a substantial reduction in computational cost. This thesis focuses on the control of timed continuous Petri nets (TCPNs), in which transitions have an associated timed interpretation. Systems are assumed to follow infinite-server (variable-speed) semantics, and the applicable control actions are reductions of the firing speed of transitions. Two interesting control problems are considered in this thesis: 1) target-marking control, where the objective is to drive the system (as fast as possible) from an initial state to a desired final state, similar to the set-point control problem for any continuous-state system; 2) optimal-flow control, where the objective is to drive the system to an optimal flow without prior knowledge of the final state. In particular, we are interested in reaching the maximum flow as fast as possible, which is usually desirable in most practical systems. The target-marking control problem is considered from both centralized and decentralized perspectives. We propose several minimum-time centralized controllers, all based on an ON/OFF strategy. For some subclasses, such as Choice-Free (CF) nets, minimum-time evolution is guaranteed; for general nets, the proposed controllers are heuristic. Regarding the decentralized control problem, we first propose a minimum-time decentralized controller for CF nets. For general nets, we propose a distributed approach to Model Predictive Control (MPC); however, minimum-time evolution is not considered in this method. The minimum-time optimal-flow (in our case, maximum-flow) control problem is considered for CF nets. We propose a heuristic algorithm in which we compute the "best" firing count vectors that drive the system to the maximum flow, and we apply an ON/OFF firing strategy. We also show that, because CF nets are persistent, the time needed to reach the maximum flow can be reduced with some additional firings. The proposed control methods have been implemented and integrated in a Matlab-based tool for hybrid Petri nets called SimHPN.
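    As a rough illustration of the ON/OFF idea described above (not the thesis's actual algorithms), the sketch below drives a tiny timed continuous Petri net towards a target marking by firing each transition at full speed until a precomputed firing count is exhausted; the net, rates, target marking and firing-count vector are all invented for the example.

```python
import numpy as np

# Hedged sketch of an ON/OFF target-marking controller for a tiny timed
# continuous Petri net under infinite-server semantics. Each transition fires
# at its maximum enabled speed until its share of a precomputed firing-count
# vector is exhausted, then it is switched off.

Pre = np.array([[1, 0], [0, 1]])
Post = np.array([[0, 1], [1, 0]])
C = Post - Pre
lam = np.array([2.0, 1.0])
m = np.array([5.0, 0.0])
m_target = np.array([2.0, 3.0])

# A firing-count vector sigma with C @ sigma = m_target - m (chosen by hand here).
sigma = np.array([3.0, 0.0])
fired = np.zeros(2)

dt = 0.0005
t = 0.0
while np.any(fired < sigma - 1e-9):
    enab = np.array([min(m[p] / Pre[p, j] for p in range(2) if Pre[p, j] > 0)
                     for j in range(2)])
    flow = lam * enab
    flow[fired >= sigma] = 0.0                       # OFF once the required count is met
    flow = np.minimum(flow, (sigma - fired) / dt)    # do not overshoot sigma
    m = m + dt * C @ flow
    fired += dt * flow
    t += dt

print(f"reached marking {np.round(m, 3)} at t = {t:.3f}")
```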

    Extended Abstracts: PMCCS3: Third International Workshop on Performability Modeling of Computer and Communication Systems

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. The pages of the front matter that are missing from the PDF were blank.

    High volume conveyor sortation system analysis

    The design and operation of a high volume conveyor sortation system are important due to its high cost, large footprint and critical role in the system. In this thesis, we study the characteristics of the conveyor sortation system from performance evaluation and design perspectives employing continuous modeling approaches. We present two continuous conveyor models (the Delay and Stock Model and the Batch on Conveyor Model) with different representation accuracy in a unified mathematical framework. Based on the Batch on Conveyor Model, we develop a fast fluid simulation methodology. We address the feasibility of implementing fluid simulation in terms of modeling capabilities, algorithm design, and simulation performance (accuracy and simulation time). From a design perspective, we focus on rate determination and accumulation design in the accumulation and merge subsystem. The optimization problem is to find a minimum cost design that satisfies predefined performance requirements under stochastic conditions. We first transform this stochastic programming problem into a deterministic nonlinear programming problem through a sample-path-based optimization method. A gradient-based method is adopted to solve the deterministic problem. Since there is no closed form for the performance metrics even for a deterministic input stream, we adopt continuous modeling to develop deterministic performance evaluation models and conduct sensitivity analysis on these models. We explore the prospects of using the two continuous conveyor models we presented.
    Ph.D. Committee Chair: Chen Zhou; Committee Member: Gunter Sharp; Committee Member: Leon F. McGinnis; Committee Member: Spiridon Reveliotis; Committee Member: Yorai Ward
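    As a simple illustration of the continuous-modelling viewpoint described above (not the thesis's Delay and Stock or Batch on Conveyor formulations), the sketch below treats one accumulation lane as a fluid buffer fed by a surging arrival stream and drained at a merge-controlled release rate; all rates and the capacity are invented.

```python
import math

# Hedged sketch: a single fluid accumulation buffer feeding a merge point.
# Inflow is a time-varying arrival rate; outflow is capped by the merge's
# allocated release rate; the buffer content is clipped at its capacity.

capacity = 150.0          # cartons the accumulation lane can hold
release_rate = 90.0       # cartons/minute granted by the merge controller
level = 0.0
blocked_time = 0.0        # time during which the lane is full (upstream blocking)

dt, t_end = 0.01, 60.0
for k in range(int(t_end / dt)):
    t = k * dt
    inflow = 80.0 + 40.0 * math.sin(2 * math.pi * t / 20.0)   # surging arrivals
    outflow = release_rate if level > 0 else min(inflow, release_rate)
    level = max(0.0, min(capacity, level + dt * (inflow - outflow)))
    if level >= capacity:
        blocked_time += dt

print(f"final level: {level:.1f} cartons, blocked for {blocked_time:.2f} minutes")
```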

    Quantitative Analysis of Apache Storm Applications: The NewsAsset Case Study

    The development of Information Systems today faces the era of Big Data. Large volumes of information need to be processed in real time, for example for Facebook or Twitter analysis. This paper addresses the redesign of NewsAsset, a commercial product that helps journalists by providing services that analyze millions of media items from social networks in real time. Technologies like Apache Storm can help enormously in this context. We have quantitatively analyzed the new design of NewsAsset to assess whether the introduction of Apache Storm can meet the demanding performance requirements of this media product. Our assessment approach, guided by the Unified Modeling Language (UML), takes advantage, for performance analysis, of the software designs already used for development. In addition, we converted the UML designs into a domain-specific modeling language (DSML) for Apache Storm, thus creating a profile for Storm. We then transformed this DSML into a language suitable for performance evaluation, specifically stochastic Petri nets. The assessment ended with a successful software design that met the scalability requirements of NewsAsset.
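    As a toy illustration of the kind of question such a performance model answers (this is not the paper's UML-to-Petri-net transformation), the sketch below performs a back-of-envelope utilisation check for a hypothetical Storm-like topology; the bolt names, service times, executor counts and arrival rate are invented.

```python
# Hedged sketch: a back-of-envelope capacity check for a streaming topology,
# of the kind a performance model (e.g. a stochastic Petri net) would answer
# more rigorously. Topology, rates, and service times are invented.

bolts = {
    # name: (mean service time per tuple in ms, number of executors)
    "extract-entities": (4.0, 8),
    "sentiment":        (2.5, 4),
    "index-writer":     (6.0, 12),
}
arrival_rate = 1500.0   # tuples per second entering every bolt

for name, (service_ms, executors) in bolts.items():
    utilisation = arrival_rate * (service_ms / 1000.0) / executors
    status = "OK" if utilisation < 0.8 else "add executors"
    print(f"{name:16s} utilisation = {utilisation:.2f}  -> {status}")
```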

    Scalable analysis of stochastic process algebra models

    The performance modelling of large-scale systems using discrete-state approaches is fundamentally hampered by the well-known problem of state-space explosion, which causes exponential growth of the reachable state space as a function of the number of components which constitute the model. Because they are mapped onto continuous-time Markov chains (CTMCs), models described in the stochastic process algebra PEPA are no exception. This thesis presents a deterministic continuous-state semantics of PEPA which employs ordinary differential equations (ODEs) as the underlying mathematics for the performance evaluation. This is suitable for models consisting of large numbers of replicated components, as the ODE problem size is insensitive to the actual population levels of the system under study. Furthermore, the ODE is given an interpretation as the fluid limit of a properly defined CTMC model when the initial population levels go to infinity. This framework allows the use of existing results which give error bounds to assess the quality of the differential approximation. Performance indices such as throughput, utilisation, and average response time are interpreted deterministically as functions of the ODE solution and are related to corresponding reward structures in the Markovian setting. The differential interpretation of PEPA provides a framework that is conceptually analogous to established approximation methods in queueing networks based on mean-value analysis, as both approaches aim at reducing the computational cost of the analysis by providing estimates for the expected values of the performance metrics of interest. The relationship between these two techniques is examined in more detail in a comparison between PEPA and the Layered Queueing Network (LQN) model. General patterns of translation of LQN elements into corresponding PEPA components are applied to a substantial case study of a distributed computer system. This model is analysed using stochastic simulation to gauge the soundness of the translation. Furthermore, it is subjected to a series of numerical tests to compare execution runtimes and accuracy of the PEPA differential analysis against the LQN mean-value approximation method. Finally, this thesis discusses the major elements concerning the development of a software toolkit, the PEPA Eclipse Plug-in, which offers a comprehensive modelling environment for PEPA, including modules for static analysis, explicit state-space exploration, numerical solution of the steady-state equilibrium of the Markov chain, stochastic simulation, the differential analysis approach presented herein, and a graphical framework for model editing and visualisation of performance evaluation results.
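    As a small illustration of reading performance indices off a fluid solution, as described above (not the thesis's own tooling), the sketch below computes throughput, utilisation and mean response time from assumed steady-state mean component counts; all numbers are invented.

```python
# Hedged sketch: performance indices as functions of a fluid (ODE) solution.
# The steady-state mean component counts below are assumed to come from an
# ODE solver; counts, rates and population sizes are invented.

r_serve = 5.0                 # service rate of one busy server (req/s)
n_servers = 50
clients_requesting = 120.0    # mean clients waiting for or receiving service
servers_busy = 38.0           # mean busy servers

throughput = r_serve * servers_busy              # completed requests per second
utilisation = servers_busy / n_servers           # fraction of servers busy
mean_response = clients_requesting / throughput  # Little's law: N = X * R

print(f"throughput  = {throughput:.1f} req/s")
print(f"utilisation = {utilisation:.2f}")
print(f"response    = {mean_response:.3f} s")
```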

    A fluid analysis framework for a Markovian process algebra

    Markovian process algebras, such as PEPA and stochastic π-calculus, bring a powerful compositional approach to the performance modelling of complex systems. However, the models generated by process algebras, as with other interleaving formalisms, are susceptible to the state space explosion problem. Models with only a modest number of process algebra terms can easily generate so many states that they are all but intractable to traditional solution techniques. Previous work aimed at addressing this problem has presented a fluid-flow approximation allowing the analysis of systems which would otherwise be inaccessible. To achieve this, systems of ordinary differential equations describing the fluid flow of the stochastic process algebra model are generated informally. In this paper, we show formally that, for a large class of models, this fluid-flow analysis can be directly derived from the stochastic process algebra model as an approximation to the mean number of component types within the model. The nature of the fluid approximation is derived and characterised by direct comparison with the Chapman–Kolmogorov equations underlying the Markov model. Furthermore, we compare the fluid approximation with the exact solution using stochastic simulation, and we are able to demonstrate that it is a very accurate approximation in many cases. For the first time, we also show how to extend these techniques naturally to generate systems of differential equations approximating higher-order moments of model component counts. These are important performance characteristics for estimating, for instance, the variance of the component counts, which is essential if we are to understand how precise the fluid-flow calculation is in a given modelling situation.
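    As a small, self-contained example of moment ODEs of the kind described above (not the paper's general construction), the sketch below integrates equations for the mean and second moment of the number of working machines in a hypothetical repair model; because the rates are linear the equations happen to be exact, and all parameters are invented.

```python
# Hedged sketch: moment ODEs for one component type. N machines, each failing
# at rate f while working and repaired at rate r. We integrate ODEs for the
# mean m1 and second moment m2 of the number of working machines and report
# the variance m2 - m1**2.

N, f, r = 100, 0.2, 1.0
m1, m2 = float(N), float(N) ** 2          # start with all machines working
dt = 0.001
for _ in range(int(20.0 / dt)):
    up = r * (N - m1)                     # mean repair flow
    down = f * m1                         # mean failure flow
    d_m1 = up - down
    d_m2 = 2 * r * (N * m1 - m2) + up - 2 * f * m2 + down
    m1, m2 = m1 + dt * d_m1, m2 + dt * d_m2

var = m2 - m1 ** 2
print(f"mean working = {m1:.2f}, variance = {var:.2f}")
```

    For these linear rates the stationary answer can be checked by hand: each machine is independently working with probability r/(r+f), so the mean is N·r/(r+f) and the variance is the binomial value N·r·f/(r+f)², which the integrated ODEs reproduce.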

    Queues with Congestion-dependent Feedback

    This dissertation expands the theory of feedback queueing systems and applies a number of these models to a performance analysis of the Transmission Control Protocol, a flow control protocol commonly used in the Internet.
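    To fix ideas, the sketch below analyses a queue with simple state-independent (Bernoulli) feedback, a much simpler setting than the congestion-dependent feedback studied in the dissertation; all parameters are invented.

```python
# Hedged sketch: an M/M/1 queue with Bernoulli feedback. After service a job
# rejoins the queue with probability p, so the total load offered to the
# server is lam / (1 - p); the queue-length distribution is then geometric
# with that effective load (single-node Jackson network).

lam, mu, p = 3.0, 10.0, 0.4      # external arrivals/s, service rate/s, feedback prob.

effective_arrivals = lam / (1 - p)       # external plus fed-back traffic
rho = effective_arrivals / mu            # server utilisation
assert rho < 1, "queue is unstable"
mean_in_system = rho / (1 - rho)         # mean number of jobs in the system
mean_response = mean_in_system / lam     # per external job, by Little's law

print(f"utilisation = {rho:.2f}, mean jobs = {mean_in_system:.2f}, "
      f"response time = {mean_response:.3f} s")
```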