3,496 research outputs found

    Algorithms for Performance, Dependability, and Performability Evaluation using Stochastic Activity Networks

    Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory needed to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state-space explosion, where the number of states increases exponentially with the size of the high-level model. The corresponding Markov model thus becomes prohibitively large, and its solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to store the matrix explicitly at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel-based solvers that avoids the need to generate rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods, as well as for other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
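The on-the-fly idea in this abstract can be sketched as a Gauss-Seidel iteration that consumes rows of A from a generator callback rather than a stored matrix. The solver below is a minimal illustration of that interface, not the paper's modified adaptive Gauss-Seidel; the toy diagonally dominant system and all names are assumptions for the sketch.

```python
# Gauss-Seidel where rows of A come from a callback ("on-the-fly"),
# so the matrix never needs to be stored explicitly.
def gauss_seidel(row_of, b, n, iters=200, tol=1e-10):
    """Solve A x = b, where row_of(i) yields (j, a_ij) pairs for row i."""
    x = [0.0] * n
    for _ in range(iters):
        max_delta = 0.0
        for i in range(n):
            s, diag = b[i], None
            for j, a_ij in row_of(i):
                if j == i:
                    diag = a_ij
                else:
                    s -= a_ij * x[j]   # uses freshly updated entries (Gauss-Seidel)
            new_xi = s / diag
            max_delta = max(max_delta, abs(new_xi - x[i]))
            x[i] = new_xi
        if max_delta < tol:            # converged
            break
    return x

# Toy 3x3 diagonally dominant system, generated row by row for illustration.
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [3.0, 2.0, 3.0]
x = gauss_seidel(lambda i: [(j, A[i][j]) for j in range(3) if A[i][j] != 0.0], b, 3)
```

In a real stochastic-activity-network solver, `row_of` would regenerate matrix entries from the high-level model on each visit, trading CPU time for memory.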

    Transient Analysis and Applications of Markov Reward Processes

    In this thesis, we consider the problem of computing the cumulative distribution function (cdf) of the random time required for a system to first reach a specified reward threshold when the rate at which reward accrues is governed by a continuous-time stochastic process. This random time is a type of first passage time for the cumulative reward process. The major contribution of this work is a simplified, analytical expression for the Laplace-Stieltjes transform of the cdf in one dimension rather than two. The result is obtained using two techniques: i) converting an existing partial differential equation to an ordinary differential equation with a known solution, and ii) inverting an existing two-dimensional result with respect to one of the dimensions. The results are applied to a variety of real-world operational problems using one-dimensional numerical Laplace inversion techniques and compared to solutions obtained from numerical inversion of a two-dimensional transform, as well as to those from Monte Carlo simulation. Inverting one-dimensional transforms is computationally more expedient than inverting two-dimensional transforms, particularly as the number of states in the governing Markov process increases. The numerical results demonstrate the accuracy with which the one-dimensional result approximates the first passage time probabilities in a comparatively negligible amount of time.
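The first passage time in question can be made concrete with the Monte Carlo baseline the thesis compares against: reward accrues at a state-dependent rate while a CTMC evolves, and we record when the accumulated reward first crosses a threshold. The two-state chain, rates, and threshold below are invented for illustration only.

```python
import random

rates = {0: {1: 1.0}, 1: {0: 2.0}}   # CTMC transition rates q(s, s') (toy values)
r = {0: 2.0, 1: 0.5}                  # reward accrual rate in each state

def first_passage_time(threshold, s0=0, rng=random):
    """Simulate until cumulative reward first reaches `threshold`."""
    t, reward, s = 0.0, 0.0, s0
    while True:
        total = sum(rates[s].values())
        hold = rng.expovariate(total)          # exponential sojourn in s
        if reward + r[s] * hold >= threshold:  # threshold crossed mid-sojourn
            return t + (threshold - reward) / r[s]
        t += hold
        reward += r[s] * hold
        u, acc = rng.uniform(0.0, total), 0.0  # pick next state by rate
        for nxt, q in rates[s].items():
            acc += q
            if u <= acc:
                s = nxt
                break

random.seed(0)
samples = [first_passage_time(5.0) for _ in range(2000)]
cdf_at_4 = sum(ft <= 4.0 for ft in samples) / len(samples)  # estimate of P(T <= 4)
```

The thesis's one-dimensional transform inversion targets exactly this cdf analytically; simulation like the above serves only as a cross-check.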

    Two methods for computing bounds for the distribution of cumulative reward for large Markov models

    Degradable fault-tolerant systems can be evaluated using rewarded continuous-time Markov chain (CTMC) models. In that context, a useful measure is the distribution of the cumulative reward over a time interval [0, t]. All currently available numerical methods for computing that measure tend to be very expensive when the product of the maximum output rate of the CTMC model and t is large, and, in that case, their application is limited to CTMC models of moderate size. In this paper, we develop two methods to compute bounds for the cumulative reward distribution of CTMC models with reward rates associated with states: BT/RT (Bounding Transformation/Regenerative Transformation) and BT/BRT (Bounding Transformation/Bounding Regenerative Transformation). The methods require the selection of a regenerative state, are numerically stable, and compute the bounds with well-controlled error. For a class of rewarded CTMC models, class C′′′_1, and a particular, natural selection for the regenerative state, the BT/BRT method allows one to trade off bound tightness against computational cost and will provide bounds at a moderate computational cost in many cases of interest. For a slightly wider class of models, class C′′_1, and a particular, natural selection for the regenerative state, the BT/RT method will yield tighter bounds at a higher computational cost. Under additional conditions, the bounds obtained by the less expensive version of BT/BRT and by BT/RT appear to be tight either for all values of t or for all but small values of t, depending on the initial probability distribution of the model. Class C′′_1 and class C′′′_1 models satisfying those additional conditions include both exact and bounding typical failure/repair performability models of fault-tolerant systems with exponential failure and repair time distributions, repair in every state with failed components, and a reward rate structure that is a non-increasing function of the collection of failed components.
    We illustrate both the applicability and the performance of the methods using a large CTMC performability example of a fault-tolerant multiprocessor system.
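The measure the bounding methods target can be illustrated with a brute-force estimate: simulate a failure/repair CTMC over [0, t], accumulate reward at a rate that is non-increasing in the number of failed components, and tabulate the distribution. The two-component model and all rates below are illustrative stand-ins, not the paper's example, and this naive approach is precisely what becomes expensive as the model grows.

```python
import random

LAMBDA, MU = 0.1, 1.0           # per-component failure / repair rates (toy values)
N = 2                            # number of components
reward_rate = [1.0, 0.5, 0.0]    # reward rate by number of failed components

def accumulated_reward(t_end, rng=random):
    """Reward accumulated over [0, t_end] in one simulated trajectory."""
    t, failed, reward = 0.0, 0, 0.0
    while t < t_end:
        fail = (N - failed) * LAMBDA
        repair = MU if failed else 0.0
        total = fail + repair
        hold = min(rng.expovariate(total), t_end - t)  # truncate at horizon
        reward += reward_rate[failed] * hold
        t += hold
        if t < t_end:                                  # a transition fired
            failed += 1 if rng.uniform(0.0, total) < fail else -1
    return reward

random.seed(1)
samples = [accumulated_reward(10.0) for _ in range(2000)]
p_below = sum(rw <= 8.0 for rw in samples) / len(samples)  # est. P(reward(10) <= 8)
```

BT/RT and BT/BRT replace this kind of sampling with numerically stable bounds whose error is controlled a priori.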

    Maximum likelihood estimation of phase-type distributions


    Bayesian learning of noisy Markov decision processes

    We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking, a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. The sampler includes a parameter-expansion step, which is shown to be essential for its good convergence properties. As an illustration, the method is applied to learning a human controller.
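The Bayesian setup described here can be sketched at its simplest: assume observed actions are softmax-optimal in an unknown scalar reward weight, and sample that weight's posterior with a random-walk Metropolis step. This is far cruder than the paper's sampler (no latent variables, no parameter expansion), and the feature map and data are invented for the sketch.

```python
import math, random

# Hypothetical state/action features and observed demonstrations.
feat = {("s0", "a0"): 1.0, ("s0", "a1"): -1.0,
        ("s1", "a0"): -0.5, ("s1", "a1"): 0.5}
data = [("s0", "a0"), ("s0", "a0"), ("s1", "a1"), ("s0", "a0")]

def log_post(w):
    """Log posterior of reward weight w: N(0,1) prior + softmax likelihood."""
    lp = -0.5 * w * w
    for s, a in data:
        z = math.log(sum(math.exp(w * feat[(s, b)]) for b in ("a0", "a1")))
        lp += w * feat[(s, a)] - z
    return lp

random.seed(2)
w, lp, chain = 0.0, log_post(0.0), []
for _ in range(5000):
    w_new = w + random.gauss(0.0, 0.5)            # random-walk proposal
    lp_new = log_post(w_new)
    if math.log(random.random()) < lp_new - lp:   # Metropolis accept test
        w, lp = w_new, lp_new
    chain.append(w)
post_mean = sum(chain[1000:]) / len(chain[1000:])  # crude burn-in discard
```

Because every demonstrated action has positive feature value, the posterior concentrates on positive weights; predictions about future actions would then average the softmax policy over the chain.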

    A Generic Prognostic Framework for Remaining Useful Life Prediction of Complex Engineering Systems

    Prognostics and Health Management (PHM) is a general term that encompasses methods used to evaluate system health, predict the onset of failure, and mitigate the risks associated with degraded behavior. A multitude of health monitoring techniques facilitating the detection and classification of the onset of failure have been developed for commercial and military applications. PHM system designers are currently focused on developing prognostic techniques and on integrating diagnostic and prognostic approaches at the system level. This dissertation introduces a prognostic framework that integrates several methodologies necessary for the general application of PHM to a variety of systems. A method is developed to represent the multidimensional system health status as a scalar quantity called a health indicator; the method quantifies how well or how poorly the health indicator distinguishes healthy from faulty system exemplars. A usefulness criterion was developed that allows the practitioner to evaluate the practicability of using a particular prognostic model together with observed degradation evidence data. The criterion is based on comparing the model uncertainty, imposed primarily by imperfections in the degradation evidence data, against the uncertainty of a time-to-failure prediction based on the average reliability characteristics of the system. This dissertation identifies the major contributors to prognostic uncertainty and analyzes their effects. Further study of two important contributors resulted in the development of uncertainty management techniques to improve PHM performance: an analysis of the uncertainty effects attributed to the random nature of the critical degradation threshold, and an analysis of the uncertainty effects attributed to unobservable failure mechanisms acting on the system degradation process alongside observable ones. A method was developed to reduce the effects of uncertainty on a prognostic model. Finally, this dissertation provides a method to incorporate prognostic information into optimization techniques aimed at finding an optimal control policy for equipment operating in an uncertain environment.
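The health-indicator idea above can be illustrated with a minimal sketch: collapse multidimensional sensor readings into one scalar (here, distance from a healthy-fleet baseline) and score how well that scalar separates healthy from faulty exemplars (here, Fisher's criterion). Both the indicator, the score, and the data are assumed stand-ins, not the dissertation's method.

```python
import math

# Hypothetical two-sensor exemplars for healthy and faulty units.
healthy = [[1.0, 0.9], [1.1, 1.0], [0.9, 1.1]]
faulty  = [[2.0, 1.8], [2.2, 2.1], [1.9, 2.0]]

dim = len(healthy[0])
mu = [sum(x[k] for x in healthy) / len(healthy) for k in range(dim)]

def health_indicator(x):
    """Scalar HI: Euclidean distance from the healthy baseline mean."""
    return math.sqrt(sum((x[k] - mu[k]) ** 2 for k in range(dim)))

def fisher_separability(h_scores, f_scores):
    """Fisher criterion: squared mean gap over summed within-class variance."""
    def stats(v):
        m = sum(v) / len(v)
        return m, sum((x - m) ** 2 for x in v) / len(v)
    mh, vh = stats(h_scores)
    mf, vf = stats(f_scores)
    return (mh - mf) ** 2 / (vh + vf)

hi_h = [health_indicator(x) for x in healthy]
hi_f = [health_indicator(x) for x in faulty]
score = fisher_separability(hi_h, hi_f)   # larger = better separation
```

A practitioner would track the indicator over time and feed it, together with the usefulness criterion described above, into the remaining-useful-life prediction.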

    List of requirements on formalisms and selection of appropriate tools

    This deliverable reports on the activities for the set-up of the modelling environments for the evaluation activities of WP5. To this end, it reports on the identified modelling peculiarities of the electric power infrastructure, the information infrastructures, and their interdependencies; recalls the tools that have been considered; and concentrates on the tools that are, and will be, used in the project: DrawNET, DEEM and EPSys, which were developed before and during the project by the partners, and Möbius and PRISM, developed respectively at the University of Illinois at Urbana-Champaign and at the University of Birmingham (and more recently at the University of Oxford).