
    A Method for the Combination of Stochastic Time Varying Load Effects

    The problem of evaluating the probability that a structure becomes unsafe under a combination of loads, over a given time period, is addressed. The loads and load effects are modeled either as pulse processes (static problem) with random occurrence time, intensity and a specified shape, or as intermittent continuous processes (dynamic problem), which are zero-mean Gaussian processes superimposed on a pulse process. The load coincidence method is extended to problems with both nonlinear limit states and dynamic responses, including the case of correlated dynamic responses. The technique of linearizing a nonlinear limit state, commonly used in time-invariant problems, is investigated for time-varying combination problems, with emphasis on selecting the linearization point. Results are compared with other methods, namely the method based on upcrossing rate and simpler combination rules such as the Square Root of Sum of Squares (SRSS) and Turkstra's rule. Correlated effects among dynamic loads are examined to see how results differ from correlated static loads and to demonstrate which types of load dependencies are most important, i.e., affect the exceedance probabilities the most. Application of the load coincidence method to code development is briefly discussed. (National Science Foundation Grants CME 79-18053 and CEE 82-0759)
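    To make the two simple combination rules mentioned above concrete, the sketch below evaluates SRSS and Turkstra's rule for a hypothetical set of load effects; the lifetime-maximum and point-in-time values are illustrative assumptions, and the load coincidence method itself is not reproduced here.

```python
import numpy as np

# Illustrative load-effect statistics (hypothetical values):
# lifetime maxima and arbitrary-point-in-time values for three loads.
lifetime_max = np.array([10.0, 6.0, 4.0])
point_in_time = np.array([4.0, 2.5, 1.0])

def srss(maxima):
    """Square Root of Sum of Squares combination of individual maxima."""
    return np.sqrt(np.sum(maxima ** 2))

def turkstra(lifetime_max, point_in_time):
    """Turkstra's rule: one load at its lifetime maximum, the others at
    their arbitrary-point-in-time values; take the worst case."""
    candidates = [
        lifetime_max[j] + point_in_time.sum() - point_in_time[j]
        for j in range(len(lifetime_max))
    ]
    return max(candidates)

print("SRSS estimate:    ", srss(lifetime_max))
print("Turkstra estimate:", turkstra(lifetime_max, point_in_time))
```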

    Change-point Problem and Regression: An Annotated Bibliography

    The problems of identifying changes at unknown times and of estimating the location of changes in stochastic processes are referred to as the change-point problem or, in the Eastern literature, as 'disorder'. The change-point problem, first introduced in the quality control context, has since developed into a fundamental problem in the areas of statistical control theory, stationarity of a stochastic process, estimation of the current position of a time series, testing and estimation of change in the patterns of a regression model, and most recently in the comparison and matching of DNA sequences in microarray data analysis. Numerous methodological approaches have been implemented in examining change-point models. Maximum-likelihood estimation, Bayesian estimation, isotonic regression, piecewise regression, quasi-likelihood and non-parametric regression are among the methods which have been applied to resolving challenges in change-point problems. Grid-searching approaches have also been used to examine the change-point problem. Statistical analysis of change-point problems depends on the method of data collection. If the data collection is ongoing until some random time, the appropriate statistical procedure is called sequential. If, however, a large finite set of data is collected with the purpose of determining whether at least one change-point occurred, then the analysis may be referred to as non-sequential. Not surprisingly, both settings have a rich literature, with much of the earlier work focusing on sequential methods inspired by applications in quality control for industrial processes. In the regression literature, the change-point model is also referred to as two- or multiple-phase regression, switching regression, segmented regression, two-stage least squares (Shaban, 1980), or broken-line regression. The change-point problem has been the subject of intensive research in the past half-century. The subject has evolved considerably and found applications in many different areas. It seems rather impossible to summarize all of the research carried out over the past 50 years on the change-point problem. We have therefore confined ourselves to those articles on change-point problems which pertain to regression. The important branch of sequential procedures in change-point problems has been left out entirely; we refer the readers to the seminal review papers by Lai (1995, 2001). The so-called structural change models, which occupy a considerable portion of the research in this area, particularly among econometricians, have not been fully considered; we refer the reader to Perron (2005) for an updated review. Articles on change-point in time series are considered only if the methodologies presented pertain to regression analysis.
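    As a concrete illustration of the grid-searching and piecewise (two-phase) regression approaches mentioned above, the following sketch estimates a single change point by least squares on synthetic data; the data-generating model and all parameter values are assumptions made for the example.

```python
import numpy as np

def fit_two_phase(x, y):
    """Grid search for a single change point in a two-phase linear
    regression: fit separate least-squares lines on each side of every
    candidate split and keep the split with the smallest total RSS.
    A minimal sketch; it assumes exactly one change point and i.i.d. noise."""
    best = (None, np.inf)
    for k in range(2, len(x) - 2):          # leave >= 2 points per segment
        rss = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            coef = np.polyfit(xs, ys, 1)    # slope and intercept
            rss += np.sum((ys - np.polyval(coef, xs)) ** 2)
        if rss < best[1]:
            best = (k, rss)
    return best                             # (index of change point, RSS)

# Synthetic example: slope changes from 1.0 to 3.0 at x = 5
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 100)
y = np.where(x < 5, x, 5 + 3 * (x - 5)) + rng.normal(0, 0.2, x.size)
print(fit_two_phase(x, y))
```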

    A study of self-similar traffic generation for ATM networks

    This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, generation of self-similar traffic is vital for the accurate simulation of ATM networks. The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, the successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best in that it can generate accurate samples efficiently and on-the-fly. After generating FBM sample traces, a further transformation needs to be conducted with either the marginal distribution model or the storage model to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and interpretation of physical meaning. The suitability of selected Hurst estimators, namely the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the better estimator, the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable methods for estimating the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation.
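    As an illustration of one of the generators discussed above, the sketch below implements the random midpoint displacement (RMD) approximation of fractional Brownian motion and differences it to obtain fractional Gaussian noise; the parameter values are assumptions, and the thesis's hybrid dFGN/RMD scheme is not reproduced.

```python
import numpy as np

def fbm_rmd(H=0.8, levels=10, sigma=1.0, seed=0):
    """Random midpoint displacement (RMD) approximation of fractional
    Brownian motion on [0, 1] with Hurst parameter H.  A minimal sketch:
    RMD is fast but only approximately self-similar."""
    rng = np.random.default_rng(seed)
    n = 2 ** levels
    B = np.zeros(n + 1)
    B[n] = sigma * rng.standard_normal()          # endpoint B(1)
    scale = sigma * np.sqrt(1.0 - 2.0 ** (2.0 * H - 2.0))
    step = n
    for level in range(1, levels + 1):
        half = step // 2
        d = scale / 2.0 ** (level * H)            # displacement std. dev.
        for i in range(half, n, step):
            B[i] = 0.5 * (B[i - half] + B[i + half]) + d * rng.standard_normal()
        step = half
    return B                                       # FBM sample path

trace = fbm_rmd()
fgn = np.diff(trace)                               # fractional Gaussian noise
print(fgn[:5])
```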

    Approximate performability and dependability analysis using generalized stochastic Petri Nets

    Since current-day fault-tolerant and distributed computer and communication systems tend to be large and complex, their corresponding performability models suffer from the same characteristics. Therefore, calculating performability measures from these models is a difficult and time-consuming task. To alleviate the largeness and complexity problem to some extent, we use generalized stochastic Petri nets to describe the models and to automatically generate the underlying Markov reward models. However, many models still cannot be solved with current numerical techniques, even though they are conveniently and often compactly described. In this paper we discuss two heuristic state-space truncation techniques that allow us to obtain very good approximations for the steady-state performability while only assessing a few percent of the states of the untruncated model. For a class of reversible models we derive explicit lower and upper bounds on the exact steady-state performability. For a much wider class of models a truncation theorem exists that allows one to obtain bounds for the error made in the truncation. We discuss this theorem in the context of approximate performability models and comment on its applicability. For all the proposed truncation techniques we present examples showing their usefulness.
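    For context, the sketch below shows the kind of steady-state Markov reward computation that underlies such performability models, on a toy two-processor example with assumed rates; it does not implement the GSPN-based truncation techniques of the paper.

```python
import numpy as np

# Toy Markov reward model: two processors, each failing at rate lam,
# repaired one at a time at rate mu.  States = number of working
# processors (2, 1, 0); the reward rate is the processing capacity.
lam, mu = 0.01, 1.0
Q = np.array([
    [-2 * lam,  2 * lam,      0.0],
    [      mu, -(mu + lam),   lam],
    [     0.0,       mu,      -mu],
])
reward = np.array([2.0, 1.0, 0.0])   # capacity per state

# Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalization condition.
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

print("steady-state probabilities: ", pi)
print("steady-state performability:", pi @ reward)
```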

    Maintenance optimization for multi-component systems under condition monitoring


    Performance and reliability modelling of computing systems using spectral expansion

    This thesis is concerned with the analytical modelling of computing and other discrete event systems, for steady-state performance and dependability. This is carried out using a novel solution technique known as the spectral expansion method. The type of problems considered, and the systems analysed, are represented by certain two-dimensional Markov processes on finite or semi-infinite lattice strips. A subset of these Markov processes are the Quasi-Birth-and-Death processes. These models are important because they have wide-ranging applications in the design and analysis of modern communications, advanced computing systems, flexible manufacturing systems and in dependability modelling. Although the matrix-geometric method is presently the most popular method in this area, it suffers from certain drawbacks, as illustrated in one of the chapters. Spectral expansion clearly rises above those limitations, as is shown with the aid of examples. The contributions of this thesis can be divided into two categories:
    • The theoretical foundation of the spectral expansion method is laid. Stability analysis of these Markov processes is carried out. Efficient numerical solution algorithms are developed. A comparative study shows that the spectral expansion algorithm has an edge over the matrix-geometric method in computational efficiency, accuracy and ease of use.
    • The method is applied to several non-trivial and complicated modelling problems occurring in computer and communication systems. Performance measures are evaluated and optimisation issues are addressed.
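    The sketch below illustrates the core step of the spectral expansion method on a hypothetical two-phase Quasi-Birth-and-Death process: the quadratic characteristic equation of the repeating levels is linearized and its eigenvalues inside the unit disk are computed. The model blocks and rates are assumptions, and the boundary equations are omitted.

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical QBD: an M/M/1 queue modulated by a 2-phase environment.
# Level j = queue length, phase = environment state.  Blocks give the
# rates one level up (Aup), within a level (Aloc) and one level down
# (Adown) in the repeating portion of the process.
lam = np.array([2.0, 0.5])      # arrival rate in each phase (assumed)
mu, a, b = 3.0, 1.0, 1.5        # service and phase-switching rates (assumed)
Aup = np.diag(lam)
Adown = np.diag([mu, mu])
Aloc = np.array([[-(lam[0] + mu + a), a],
                 [b, -(lam[1] + mu + b)]])

# Spectral expansion seeks pi_j = psi * x**j with
#   psi (Aup + x*Aloc + x**2*Adown) = 0.
# Transpose to a right quadratic eigenvalue problem and linearize it
# into a generalized eigenvalue problem of twice the size.
m = 2
A = np.block([[np.zeros((m, m)), np.eye(m)],
              [-Aup.T, -Aloc.T]])
B = np.block([[np.eye(m), np.zeros((m, m))],
              [np.zeros((m, m)), Adown.T]])
vals, vecs = eig(A, B)

inside = np.abs(vals) < 1 - 1e-9            # eigenvalues inside the unit disk
for x, v in zip(vals[inside], vecs[:, inside].T):
    print("x =", np.round(x, 4), " psi =", np.round(v[:m], 4))
# In the repeating region the stationary vectors are linear combinations
# pi_j = sum_k c_k * x_k**j * psi_k, with c_k fixed by the boundary
# equations (not solved in this sketch).
```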

    Semiparametric estimate of the efficiency of imperfect maintenance actions for a gamma deteriorating system

    A system is considered which deteriorates over time according to a non-homogeneous gamma process with unknown parameters. The system is subject to periodic and instantaneous imperfect maintenance actions (repairs). Each imperfect repair removes a proportion ρ of the degradation accumulated since the previous repair. The parameter ρ hence appears as a measure of the maintenance efficiency. This model is called arithmetic reduction of degradation of order 1. The system is inspected right before each maintenance action, thus providing a multivariate measurement of the successively observed deterioration levels. Based on these data, a semiparametric estimator of ρ is proposed, considering the parameters of the underlying gamma process as nuisance parameters. This estimator is mainly based on the range of admissible values of ρ, which depends on the data. Under technical assumptions, consistency results are obtained, with surprisingly high convergence rates (up to exponential). The case where several i.i.d. systems are observed is considered next. Consistency results are obtained for the efficiency estimator as the number of systems tends to infinity, with a convergence rate that can be higher or lower than the classical square-root rate. Finally, the performance of the estimators is illustrated on a few numerical examples.
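    A minimal simulation sketch of the arithmetic reduction of degradation of order 1 (ARD1) model described above is given below; the power-law shape function and all parameter values are assumptions chosen for illustration, and the semiparametric estimator itself is not implemented.

```python
import numpy as np

def simulate_ard1(rho=0.4, alpha=1.0, beta=1.2, scale=0.5,
                  tau=1.0, n_repairs=20, seed=1):
    """Simulate a gamma-deteriorating system under ARD1 imperfect repairs.
    Degradation increments over [s, t] are Gamma with shape
    alpha*(t**beta - s**beta) and the given scale (a hypothetical
    parameterisation of the non-homogeneous gamma process); each repair,
    every tau time units, removes a proportion rho of the degradation
    accumulated since the previous repair."""
    rng = np.random.default_rng(seed)
    level, observed = 0.0, []
    for k in range(n_repairs):
        s, t = k * tau, (k + 1) * tau
        shape = alpha * (t ** beta - s ** beta)
        increment = rng.gamma(shape, scale)
        level += increment
        observed.append(level)          # inspection just before the repair
        level -= rho * increment        # ARD1: remove rho of the last increment
    return np.array(observed)

print(simulate_ard1()[:5].round(3))
```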

    Product forms for availability

    This paper shows and illustrates that product-form expressions for the steady-state distribution, as known for queueing networks, can also be extended to a class of availability models. This class allows breakdown and repair rates of one component to depend on the status of other components; common resource capacities and repair priorities, for example, are included. Conditions for the models to have a product form are stated explicitly. This product form is shown to be insensitive to the distributions of the underlying random variables, i.e. to depend only on their means. It is further briefly indicated how queueing for repair can be incorporated. Novel product-form examples are presented for a simple series/parallel configuration, a fault-tolerant database system and a multi-stage interconnection network.
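    The sketch below illustrates the product-form idea on the simplest possible case, independent components with constant breakdown and repair rates, and evaluates a series/parallel configuration; the rates are assumed values, and the dependent-rate class treated in the paper is not covered.

```python
import numpy as np
from itertools import product

# Independent components: component i fails at rate lam[i] and is
# repaired at rate mu[i], so its marginal availability is
# mu[i] / (lam[i] + mu[i]).  This only shows the product-form idea on
# the simplest case, not the paper's broader class.
lam = np.array([0.02, 0.05, 0.05])       # failure rates (assumed values)
mu = np.array([1.0, 0.5, 0.5])           # repair rates  (assumed values)
avail = mu / (lam + mu)

def config_prob(state):
    """Product-form probability of an up/down configuration
    (state[i] = 1 if component i is up)."""
    terms = np.where(np.array(state) == 1, avail, 1.0 - avail)
    return terms.prod()

# Probabilities over all configurations sum to one.
assert abs(sum(config_prob(s) for s in product([0, 1], repeat=3)) - 1) < 1e-12

# System availability for component 0 in series with components 1 and 2
# in parallel: sum the product-form probabilities of the working states.
sys_avail = sum(config_prob(s) for s in product([0, 1], repeat=3)
                if s[0] == 1 and (s[1] == 1 or s[2] == 1))
print("system availability:", round(sys_avail, 6))
```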