
    Practical Design of Generalized Likelihood Ratio Control Charts for Autocorrelated Data

    Control charts based on Generalized Likelihood Ratio (GLR) tests are attractive from both a theoretical and a practical point of view. In particular, in the case of an autocorrelated process, the GLR test uses the information contained in the time-varying response after a change and, as shown by Apley and Shi, is able to outperform traditional control charts applied to residuals. In addition, a GLR chart provides estimates of the magnitude and the time of occurrence of the change. In this paper, we present a practical approach to the implementation of GLR charts for monitoring an autoregressive moving average process, assuming that only a Phase I sample is available. The proposed approach, based on automatic time series identification, estimates the GLR control limits via stochastic approximation using bootstrap resampling and is thus able to take into account the uncertainty about the underlying model. A Monte Carlo study shows that our methodology can be used to design, in a semi-automatic fashion, a GLR chart with a prescribed rate of false alarms when as few as 50 Phase I observations are available. A real example is used to illustrate the design procedure.
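    As a rough illustration of this kind of design procedure, the sketch below fits an ARMA model to a small Phase I sample, bootstraps the residuals, and chooses a residual-based GLR control limit from simulated run maxima. The ARMA(1,1) order, the 20-observation window, and the fixed-horizon false-alarm criterion are illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch (not the authors' algorithm): estimating a residual-based
# GLR control limit from a Phase I sample by bootstrap resampling.
# The ARMA(1,1) order, window length, and fixed-horizon false-alarm
# criterion are illustrative assumptions.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
phase1 = rng.normal(size=50)            # stand-in for 50 Phase I observations

fit = ARIMA(phase1, order=(1, 0, 1)).fit()
resid = fit.resid
sigma2 = resid.var(ddof=1)

def glr_max(e, window=20):
    """Max GLR statistic for a sustained mean shift in the residual series e."""
    best_overall = 0.0
    for t in range(1, len(e) + 1):
        lo = max(0, t - window)
        for tau in range(lo, t):
            s = e[tau:t].sum()
            stat = s * s / (2.0 * sigma2 * (t - tau))
            best_overall = max(best_overall, stat)
    return best_overall

# Bootstrap in-control residual sequences and take a high quantile of the
# run maxima as the control limit for a target false-alarm rate alpha.
alpha, horizon, B = 0.05, 200, 500
maxima = [glr_max(rng.choice(resid, size=horizon, replace=True)) for _ in range(B)]
h = np.quantile(maxima, 1 - alpha)
print(f"estimated GLR control limit h = {h:.2f}")
```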

    Critical Fault-Detecting Time Evaluation in Software with Discrete Compound Poisson Models

    Software developers predict their product’s failure rate using reliability growth models that are typically based on nonhomogeneous Poisson processes (NHPP). In this article, we extend that practice to a nonhomogeneous discrete compound Poisson process that allows for multiple faults of a system at the same time point. Along with traditional reliability metrics such as the average number of failures in a time interval, we propose an alternative reliability index called the critical fault-detecting time, which provides more information for software managers making software quality evaluations and critical market policy decisions. We illustrate the significant potential for improved analysis using wireless failure data as well as simulated data.
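    The sketch below simulates a nonhomogeneous compound Poisson fault process of the general kind described above. The decaying intensity function, the geometric batch-size distribution, and the 90% threshold used as an illustrative "critical" time are assumptions for demonstration, not the paper's model or index.

```python
# A minimal sketch (illustrative, not the paper's exact model): simulating a
# nonhomogeneous compound Poisson fault process.
import numpy as np

rng = np.random.default_rng(1)

def intensity(t, a=100.0, b=0.05):
    # Decaying detection rate (assumed functional form)
    return a * b * np.exp(-b * t)

T, lam_max = 100.0, intensity(0.0)

# Thinning: generate candidate event times at rate lam_max, keep each with
# probability intensity(t) / lam_max.
times, t = [], 0.0
while True:
    t += rng.exponential(1.0 / lam_max)
    if t > T:
        break
    if rng.random() < intensity(t) / lam_max:
        times.append(t)

# Compound part: each detection event uncovers a geometric number of faults,
# allowing multiple faults at the same time point.
batches = rng.geometric(p=0.6, size=len(times))
cum_faults = np.cumsum(batches)

# Illustrative "critical" time: first time 90% of the faults seen by T are found.
threshold = 0.9 * cum_faults[-1]
critical_time = times[int(np.searchsorted(cum_faults, threshold))]
print(f"{len(times)} detection events, {cum_faults[-1]} faults, "
      f"critical time ~ {critical_time:.1f}")
```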

    Evaluating the Differences of Gridding Techniques for Digital Elevation Models Generation and Their Influence on the Modeling of Stony Debris Flows Routing: A Case Study From Rovina di Cancia Basin (North-Eastern Italian Alps)

    Debris flows are among the most hazardous phenomena in mountain areas. To cope with debris flow hazard, it is common to delineate the risk-prone areas through routing models. The most important input to debris flow routing models is the topographic data, usually in the form of Digital Elevation Models (DEMs). The quality of DEMs depends on the accuracy, density, and spatial distribution of the sampled points; on the characteristics of the surface; and on the applied gridding methodology. Therefore, the choice of the interpolation method affects the realistic representation of the channel and fan morphology, and thus potentially the debris flow routing modeling outcomes. In this paper, we initially investigate the performance of common interpolation methods (i.e., linear triangulation, natural neighbor, nearest neighbor, Inverse Distance to a Power, ANUDEM, Radial Basis Functions, and ordinary kriging) in building DEMs of the complex topography of a debris flow channel located in the Venetian Dolomites (North-eastern Italian Alps), using small-footprint full-waveform Light Detection And Ranging (LiDAR) data. The investigation is carried out through a combination of statistical analysis of vertical accuracy, algorithm robustness, spatial clustering of vertical errors, and multi-criteria shape reliability assessment. After that, we examine the influence of the tested interpolation algorithms on the performance of a Geographic Information System (GIS)-based cell model for simulating stony debris flow routing. In detail, we investigate both the correlation between the uncertainty in DEM heights resulting from the gridding procedure and the uncertainty in the corresponding simulated erosion/deposition depths, and the effect of the interpolation algorithms on the simulated areas, erosion and deposition volumes, solid-liquid discharges, and channel morphology after the event. The comparison among the tested interpolation methods highlights that the ANUDEM and ordinary kriging algorithms are not suitable for building DEMs with complex topography. Conversely, linear triangulation, the natural neighbor algorithm, and the thin-plate spline plus tension and completely regularized spline functions ensure the best trade-off between accuracy and shape reliability. Nevertheless, the evaluation of the effects of gridding techniques on debris flow routing modeling reveals that the choice of the interpolation algorithm does not significantly affect the model outcomes.
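    To reproduce the flavor of such a comparison, the sketch below grids a synthetic scattered-point surface with a few of the listed methods (linear triangulation, nearest neighbor, and a thin-plate spline via radial basis functions) and scores each by held-out RMSE. The synthetic surface and parameter choices are assumptions and do not reflect the study's LiDAR workflow.

```python
# A minimal sketch (assumed setup, not the study's workflow): comparing
# scattered-point gridding methods on a synthetic surface and scoring their
# vertical accuracy with a held-out RMSE.
import numpy as np
from scipy.interpolate import griddata, RBFInterpolator

rng = np.random.default_rng(2)

def surface(x, y):                       # synthetic "terrain" (assumption)
    return np.sin(3 * x) * np.cos(2 * y) + 0.5 * x

pts = rng.uniform(0, 1, size=(2000, 2))
z = surface(pts[:, 0], pts[:, 1]) + rng.normal(scale=0.02, size=len(pts))

train, test = pts[:1500], pts[1500:]
z_train, z_test = z[:1500], z[1500:]

methods = {
    "linear triangulation": lambda q: griddata(train, z_train, q, method="linear"),
    "nearest neighbor":     lambda q: griddata(train, z_train, q, method="nearest"),
    "thin-plate spline":    lambda q: RBFInterpolator(
        train, z_train, kernel="thin_plate_spline", smoothing=1e-3)(q),
}

for name, interp in methods.items():
    z_hat = interp(test)
    rmse = np.sqrt(np.nanmean((z_hat - z_test) ** 2))  # NaN outside convex hull
    print(f"{name:22s} RMSE = {rmse:.4f}")
```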

    Performance evaluation of conventional exponentially weighted moving average (EWMA) and p-value cumulative sum (CUSUM) control chart

    This paper compares the performance of the conventional Exponentially Weighted Moving Average (EWMA) and the p-value Cumulative Sum (CUSUM) control charts. These charts were applied to monitoring the outbreak of pulmonary tuberculosis at Delta State University Teaching Hospital (DELSUTH), Oghara, over a period of eighty-four (84) calendar months. A line chart and a histogram were plotted to test for stationarity and normality of the data, and an autocorrelation plot was used to study the randomness of the data. The results show that the conventional EWMA chart detects shifts in the process mean faster than the p-value CUSUM control chart. Keywords and phrases: Exponentially Weighted Moving Average (EWMA), p-value, Cumulative Sum (CUSUM), Autocorrelation, Randomness.
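    For reference, the sketch below computes the textbook EWMA and one-sided tabular CUSUM statistics on a simulated 84-point series with an injected shift. The smoothing constant, reference value, and control limits are generic illustrative choices, not the designs evaluated in the paper.

```python
# A minimal sketch (generic textbook formulas, not the paper's exact design):
# EWMA and one-sided upper CUSUM statistics for a monitored series.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(loc=0.0, scale=1.0, size=84)   # stand-in for 84 monthly values
x[60:] += 1.5                                  # assumed upward shift

mu0, sigma = 0.0, 1.0

# EWMA: z_t = lambda * x_t + (1 - lambda) * z_{t-1}
lam, L = 0.2, 3.0
z = np.empty_like(x)
prev = mu0
for t, xt in enumerate(x):
    prev = lam * xt + (1 - lam) * prev
    z[t] = prev
ewma_limit = L * sigma * np.sqrt(lam / (2 - lam))   # asymptotic control limit
ewma_signal = np.argmax(np.abs(z - mu0) > ewma_limit)  # first exceedance (0 if none)

# One-sided upper CUSUM: C_t = max(0, C_{t-1} + (x_t - mu0 - k)), signal when C_t > h
k, h = 0.5 * sigma, 5.0 * sigma
c, cusum_signal = 0.0, None
for t, xt in enumerate(x):
    c = max(0.0, c + (xt - mu0 - k))
    if cusum_signal is None and c > h:
        cusum_signal = t

print(f"EWMA signals at t = {ewma_signal}, CUSUM signals at t = {cusum_signal}")
```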

    Vol. 13, No. 2 (Full Issue)


    Assessing the Impact of Game Day Schedule and Opponents on Travel Patterns and Route Choice using Big Data Analytics

    The transportation system is crucial for transferring people and goods from point A to point B. However, its reliability can be decreased by unanticipated congestion resulting from planned special events. For example, sporting events collect large crowds of people at specific venues on game days and disrupt normal traffic patterns. The goal of this study was to understand issues related to road traffic management during major sporting events by using widely available INRIX data to compare travel patterns and behaviors on game days against those on normal days. A comprehensive analysis was conducted on the impact of all Nebraska Cornhuskers football games over five years on traffic congestion on five major routes in Nebraska. We attempted to identify hotspots: unusually high-risk spatiotemporal zones of traffic congestion that occur on almost all game days. For hotspot detection, we utilized a method called Multi-EigenSpot, which is able to detect multiple hotspots in a spatiotemporal space. With this algorithm, we were able to detect traffic hotspot clusters on the five chosen routes in Nebraska. After detecting the hotspots, we identified the factors affecting the sizes of hotspots and other parameters. The start time of the game and the Cornhuskers’ opponent for a given game are two important factors affecting the number of people coming to Lincoln, Nebraska, on game days. Finally, the Dynamic Bayesian Networks (DBN) approach was applied to forecast the start times and locations of hotspot clusters in 2018 with a weighted mean absolute percentage error (WMAPE) of 13.8%.
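    As a small worked example of the error measure quoted above, the sketch below computes WMAPE for a set of hypothetical hotspot start-time forecasts; the numbers are made up for illustration and are unrelated to the study's DBN pipeline.

```python
# A minimal sketch (assumed data): the weighted mean absolute percentage error
# (WMAPE) used above to score forecasts.
import numpy as np

def wmape(actual, forecast):
    """WMAPE = sum(|actual - forecast|) / sum(|actual|)."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.abs(actual - forecast).sum() / np.abs(actual).sum()

# Hypothetical hotspot start times (minutes after kickoff) and their forecasts.
actual   = [30, 45, 60, 90, 120]
forecast = [35, 40, 70, 85, 110]
print(f"WMAPE = {wmape(actual, forecast):.1%}")
```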