
    Techniques for the Fast Simulation of Models of Highly Dependable Systems

    With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
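
    The failure-biasing idea behind such importance-sampling schemes can be illustrated with a short sketch. The Python fragment below is not taken from the paper: the three-component repairable system, its rates and the biasing parameter are assumptions chosen purely for illustration. It estimates the probability that a regenerative cycle starting just after a first failure ends in total system failure, comparing crude Monte Carlo with balanced failure biasing; the biased runs reach the rare failure state far more often and correct for the change of measure through the accumulated likelihood ratio.

    # Sketch: balanced failure biasing on a toy repairable system (assumed model).
    import random

    N_COMP = 3      # number of identical components (assumption)
    LAM    = 1e-3   # per-component failure rate (assumption)
    MU     = 1.0    # repair rate, single repair facility (assumption)
    BIAS_Q = 0.5    # probability mass given to failure transitions under biasing

    def cycle_estimate(biased):
        """One regenerative cycle of the embedded jump chain, started just after
        the first failure; returns the likelihood-ratio-weighted indicator of
        'all components down before the system returns to all-up'."""
        down, weight = 1, 1.0
        while 0 < down < N_COMP:
            fail_rate, repair_rate = (N_COMP - down) * LAM, MU
            p_fail = fail_rate / (fail_rate + repair_rate)  # true jump probability
            q_fail = BIAS_Q if biased else p_fail           # sampling probability
            if random.random() < q_fail:
                weight *= p_fail / q_fail
                down += 1
            else:
                weight *= (1.0 - p_fail) / (1.0 - q_fail)
                down -= 1
        return weight if down == N_COMP else 0.0

    def estimate(n_runs, biased):
        return sum(cycle_estimate(biased) for _ in range(n_runs)) / n_runs

    if __name__ == "__main__":
        random.seed(0)
        print("crude Monte Carlo:", estimate(100_000, biased=False))
        print("failure biasing  :", estimate(100_000, biased=True))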

    Robust measurement-based buffer overflow probability estimators for QoS provisioning and traffic anomaly prediction applications

    Suitable estimators for a class of Large Deviation approximations of rare event probabilities based on sample realizations of random processes have been proposed in our earlier work. These estimators are expressed as non-linear multi-dimensional optimization problems of a special structure. In this paper, we develop an algorithm that solves these optimization problems very efficiently by exploiting their characteristic structure. After discussing the nature of the objective function and constraint set and their peculiarities, we provide a formal proof that the developed algorithm is guaranteed to converge. The existence of efficient and provably convergent algorithms for solving these problems is a prerequisite for using the proposed estimators in real-time problems such as call admission control, adaptive modulation and coding with QoS constraints, and traffic anomaly detection in high data rate communication networks.
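
    A much-simplified sketch of the measurement-based idea (this is the textbook single-source, large-buffer approximation, not the authors' multi-dimensional estimator): for a stationary arrival trace with empirical log-moment-generating function Lambda, the overflow probability is approximated by P(Q > b) ~ exp(-theta* b), where theta* is the positive root of Lambda(theta) = c*theta and c is the service rate. The trace, c and b below are assumed values.

    # Sketch: measurement-based large-deviations decay-rate estimate (simplified).
    import math
    import random

    def log_mgf(theta, samples):
        """Empirical log-moment-generating function of per-slot arrivals."""
        return math.log(sum(math.exp(theta * a) for a in samples) / len(samples))

    def decay_rate(samples, c, theta_hi=50.0):
        """Positive root theta* of log_mgf(theta) - c*theta = 0 by bisection.
        Assumes a stable queue (mean arrival rate < c) and a root below theta_hi."""
        def f(theta):
            return log_mgf(theta, samples) - c * theta
        lo, hi = 1e-6, theta_hi
        if f(hi) <= 0.0:
            raise ValueError("no positive root below theta_hi; increase theta_hi")
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if f(mid) <= 0.0:
                lo = mid
            else:
                hi = mid
        return hi

    if __name__ == "__main__":
        random.seed(1)
        # hypothetical measured per-slot arrivals: bursty two-level traffic
        trace = [random.choice([0.0, 2.0]) for _ in range(10_000)]   # mean about 1.0
        c, b = 1.2, 30.0      # assumed service rate and buffer level
        theta_star = decay_rate(trace, c)
        print("theta* =", round(theta_star, 4))
        print("P(overflow above b) ~", format(math.exp(-theta_star * b), ".3e"))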

    Transient Reward Approximation for Continuous-Time Markov Chains

    We are interested in the analysis of very large continuous-time Markov chains (CTMCs) with many distinct rates. Such models arise naturally in the context of reliability analysis, e.g., in computer network performability analysis, power grids, computer virus vulnerability, and the study of crowd dynamics. We use abstraction techniques together with novel algorithms for the computation of bounds on the expected final and accumulated rewards in continuous-time Markov decision processes (CTMDPs). These ingredients are combined in a partly symbolic and partly explicit (symblicit) analysis approach. In particular, we circumvent the use of multi-terminal decision diagrams, because the latter do not work well when facing a large number of distinct rates. We demonstrate the practical applicability and efficiency of the approach on two case studies. (Accepted for publication in IEEE Transactions on Reliability.)
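
    As a point of reference for the quantity being bounded, the sketch below computes the expected reward accumulated up to a time bound T for a small, explicitly represented CTMC via standard uniformisation. It is not the paper's abstraction-based symblicit approach; the three-state availability model, its rates and the reward vector are assumptions made for illustration.

    # Sketch: expected accumulated reward of a small CTMC by uniformisation.
    import math
    import numpy as np

    # Toy 3-state availability model (assumed): states = (up, degraded, down).
    Q      = np.array([[-0.02,  0.02,  0.00],
                       [ 1.00, -1.05,  0.05],
                       [ 0.00,  2.00, -2.00]])   # generator matrix
    reward = np.array([1.0, 0.5, 0.0])           # reward rate earned in each state
    pi0    = np.array([1.0, 0.0, 0.0])           # initial distribution: start 'up'
    T      = 10.0                                # time horizon

    def accumulated_reward(Q, reward, pi0, T, eps=1e-10):
        """E[ integral_0^T r(X_s) ds ] by uniformisation:
        pi(t) = sum_k Pois(k; q t) * pi0 P^k with P = I + Q/q, and
        integral_0^T Pois(k; q s) ds = P(Pois(qT) >= k + 1) / q."""
        q = 1.05 * max(-Q[i, i] for i in range(Q.shape[0]))   # uniformisation rate
        P = np.eye(Q.shape[0]) + Q / q
        pois = math.exp(-q * T)        # Pois(0; qT)
        tail = 1.0 - pois              # P(Pois(qT) >= 1)
        acc, vec, k = 0.0, pi0.copy(), 0
        while tail > eps:
            acc += (vec @ reward) * tail / q
            vec = vec @ P              # distribution after k + 1 uniformised jumps
            k += 1
            pois *= q * T / k          # Pois(k; qT)
            tail -= pois               # P(Pois(qT) >= k + 1)
        return acc

    if __name__ == "__main__":
        print("expected reward accumulated over [0, T]:",
              round(accumulated_reward(Q, reward, pi0, T), 4))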

    Two-dimensional fluid queues with temporary assistance

    We consider a two-dimensional stochastic fluid model with N ON-OFF inputs and temporary assistance, which is an extension of the same model with N = 1 in Mahabhashyam et al. (2008). The rates of change of both buffers are piecewise constant and dependent on the underlying Markovian phase of the model, and the rates of change for Buffer 2 are also dependent on the specific level of Buffer 1. This is because both buffers share a fixed output capacity, the precise proportion of which depends on Buffer 1. The generalization of the number of ON-OFF inputs necessitates modifications in the original rules of output-capacity sharing from Mahabhashyam et al. (2008) and considerably complicates both the theoretical analysis and the numerical computation of various performance measures.
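
    The kind of dynamics involved can be sketched with a stripped-down, single-buffer version of the model: N independent exponential ON-OFF sources feed a fluid buffer drained at constant capacity C, and the buffer level evolves with a piecewise-constant net rate between phase changes. The sketch below omits the paper's two-buffer capacity-sharing rules, and all rates are assumed values; it simulates the process event by event and reports the time-average buffer content.

    # Sketch: fluid buffer fed by N exponential ON-OFF sources (simplified model).
    import random

    N     = 5         # number of ON-OFF inputs (assumption)
    ALPHA = 1.0       # rate OFF -> ON
    BETA  = 2.0       # rate ON -> OFF
    R_ON  = 1.0       # fluid input rate of a source while ON
    C     = 2.0       # constant output (drain) capacity
    T_END = 10_000.0  # simulated time horizon

    def simulate(seed=0):
        """Returns the time-average buffer content over [0, T_END]."""
        random.seed(seed)
        on = [False] * N                                    # phase of each source
        next_switch = [random.expovariate(ALPHA) for _ in range(N)]
        t, level, area = 0.0, 0.0, 0.0
        while t < T_END:
            i = min(range(N), key=lambda k: next_switch[k])  # next phase change
            t_next = min(next_switch[i], T_END)
            dt = t_next - t
            net = sum(R_ON for k in range(N) if on[k]) - C   # piecewise-constant net rate
            if net >= 0 or level / -net >= dt:               # buffer stays non-empty
                area += dt * (level + 0.5 * net * dt)
                level += net * dt
            else:                                            # buffer empties within dt
                t_empty = level / -net
                area += 0.5 * level * t_empty
                level = 0.0
            t = t_next
            if next_switch[i] <= T_END:
                on[i] = not on[i]
                next_switch[i] = t + random.expovariate(BETA if on[i] else ALPHA)
        return area / T_END

    if __name__ == "__main__":
        print("mean buffer content:", round(simulate(), 4))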

    Naïve Learning in Social Networks: Convergence, Influence and Wisdom of Crowds

    We study learning and influence in a setting where agents communicate according to an arbitrary social network and naïvely update their beliefs by repeatedly taking weighted averages of their neighbors' opinions. Our focus is on conditions under which the beliefs of all agents in large societies converge to the truth, despite their naïve updating. We show that this happens if and only if the influence of the most influential agent in the society vanishes as the society grows. Using simple examples, we identify two main obstructions which can prevent this. By ruling out these obstructions, we provide general structural conditions on the social network that are sufficient for convergence to truth. In addition, we show how social influence changes when some agents redistribute their trust, and we provide a complete characterization of the social networks for which there is convergence of beliefs. Finally, we survey some recent structural results on the speed of convergence and relate these to issues of segregation, polarization and propaganda.
    Keywords: Social Networks, Learning, Diffusion, Bounded Rationality
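
    The updating rule studied here is the DeGroot-style averaging dynamics b(t+1) = W b(t) over a row-stochastic trust matrix W (notation assumed; the paper itself fixes no code). The sketch below uses a made-up 4-agent network purely for illustration: it iterates the dynamics to (approximate) consensus and recovers the influence weights as the left unit eigenvector of W, the quantity whose largest entry must vanish in large societies for the wisdom-of-crowds result.

    # Sketch: naive (DeGroot-style) belief averaging on an assumed 4-agent network.
    import numpy as np

    # Row-stochastic trust matrix: W[i, j] = weight agent i places on agent j.
    W = np.array([[0.4, 0.3, 0.3, 0.0],
                  [0.2, 0.5, 0.2, 0.1],
                  [0.3, 0.2, 0.4, 0.1],
                  [0.0, 0.3, 0.3, 0.4]])
    beliefs = np.array([1.0, 0.0, 0.5, 0.2])   # initial opinions / noisy signals

    # Iterate the averaging dynamics b(t+1) = W b(t) until approximate consensus.
    b = beliefs.copy()
    for _ in range(500):
        b = W @ b
    print("limit beliefs      :", np.round(b, 4))

    # Influence weights: the left unit eigenvector s of W (s W = s, sum(s) = 1).
    # The consensus belief equals s . beliefs, so an agent is influential exactly
    # to the extent of its entry in s.
    eigvals, eigvecs = np.linalg.eig(W.T)
    s = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    s = s / s.sum()
    print("influence weights s:", np.round(s, 4))
    print("s . initial beliefs:", np.round(s @ beliefs, 4))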

    Scalable Performance Analysis of Massively Parallel Stochastic Systems

    The accurate performance analysis of large-scale computer and communication systems is directly inhibited by an exponential growth in the state-space of the underlying Markovian performance model. This is particularly true when considering massively-parallel architectures such as cloud or grid computing infrastructures. Nevertheless, an ability to extract quantitative performance measures such as passage-time distributions from performance models of these systems is critical for providers of these services. Indeed, without such an ability, they remain unable to offer realistic end-to-end service level agreements (SLAs) which they can have any confidence of honouring. Additionally, this must be possible in a short enough period of time to allow many different parameter combinations in a complex system to be tested. If we can achieve this rapid performance analysis goal, it will enable service providers and engineers to determine the cost-optimal behaviour which satisfies the SLAs.

    In this thesis, we develop a scalable performance analysis framework for the grouped PEPA stochastic process algebra. Our approach is based on the approximation of key model quantities such as means and variances by tractable systems of ordinary differential equations (ODEs). Crucially, the size of these systems of ODEs is independent of the number of interacting entities within the model, making these analysis techniques extremely scalable. The reliability of our approach is directly supported by convergence results and, in some cases, explicit error bounds.

    We focus on extracting passage-time measures from performance models since these are very commonly the language in which a service level agreement is phrased. We design scalable analysis techniques which can handle passages defined both in terms of entire component populations as well as individual or tagged members of a large population.

    A precise and straightforward specification of a passage-time service level agreement is as important to the performance engineering process as its evaluation. This is especially true of large and complex models of industrial-scale systems. To address this, we introduce the unified stochastic probe framework. Unified stochastic probes are used to generate a model augmentation which exposes explicitly the SLA measure of interest to the analysis toolkit. In this thesis, we deploy these probes to define many detailed and derived performance measures that can be automatically and directly analysed using rapid ODE techniques. In this way, we tackle applicable problems at many levels of the performance engineering process: from specification and model representation to efficient and scalable analysis.
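
    The core scalability idea, stripped of the grouped-PEPA machinery, can be sketched with a generic density-dependent population model: however many nodes N the system contains, the mean fractions in each local state follow a fixed, small system of ODEs. The two-state 'OK/overloaded' model below, its rates and the 25% threshold used as a crude deterministic proxy for a passage-time query are all assumptions for illustration; the thesis's techniques go further, providing variance approximations, error bounds and genuine passage-time distributions.

    # Sketch: mean-field ODE approximation of a two-state population model.
    from scipy.integrate import solve_ivp

    BETA  = 1.5    # pairwise 'overload propagation' rate (assumption)
    GAMMA = 1.0    # recovery rate of an overloaded node (assumption)

    def mean_field(t, y):
        """ODEs for the fractions (ok, overloaded); their number is fixed,
        independent of the population size N."""
        ok, over = y
        return [-BETA * ok * over + GAMMA * over,
                 BETA * ok * over - GAMMA * over]

    sol = solve_ivp(mean_field, (0.0, 20.0), [0.99, 0.01], max_step=0.1)

    # Crude passage-time proxy: first time the expected overloaded fraction
    # exceeds 25% along the deterministic mean trajectory.
    crossing = next((t for t, over in zip(sol.t, sol.y[1]) if over > 0.25), None)
    print("overloaded fraction at t=20   :", round(float(sol.y[1][-1]), 4))
    print("first time E[overloaded] > 25%:",
          None if crossing is None else round(float(crossing), 2))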