149 research outputs found

    Computational applications in stochastic operations research

    Several computational applications in stochastic operations research are presented. For each application, a computational engine is used to achieve results that would otherwise be overly tedious by hand calculation, or in some cases mathematically intractable. Algorithms and code are developed and implemented with specific emphasis placed on achieving exact results, and are substantiated via Monte Carlo simulation. The code for each application is provided in the software language utilized, and the algorithms are available for coding in another environment. The topics include univariate and bivariate nonparametric random variate generation using a piecewise-linear cumulative distribution function, deriving exact statistical process control chart constants for non-normal sampling, testing probability distribution conformance to Benford's law, and transient analysis of M/M/s queueing systems. The nonparametric random variate generation chapters provide the modeler with a method of generating univariate and bivariate samples when only observed data are available. The method is completely nonparametric and is capable of mimicking multimodal joint distributions. The algorithm is black-box: no decisions are required from the modeler in generating variates for simulation. The statistical process control chart constant chapter develops constants for select non-normal distributions and provides tabulated results for researchers who have identified a given process as non-normal. The constants derived are bias correction factors for the sample range and sample standard deviation. The Benford conformance testing chapter offers the Kolmogorov-Smirnov test as an alternative to the standard chi-square goodness-of-fit test when testing whether the leading digits of a data set are distributed according to Benford's law.
The alternative test has the advantage of being an exact test for all sample sizes, removing the usual sample size restriction involved with the chi-square goodness-of-fit test. The transient queueing analysis chapter develops and automates the construction of the sojourn time distribution for the nth customer in an M/M/s queue with k customers initially present at time 0 (k ≥ 0), without the usual limit on traffic intensity, ρ < 1, providing an avenue for transient analysis of various measures of performance for a given initial number of customers in the system. It also develops and automates the construction of the joint probability distribution function of sojourn times for pairs of customers, allowing calculation of the exact covariance between customer sojourn times.
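As a rough illustration of the univariate case, one can fit a piecewise-linear CDF to the order statistics of an observed sample and draw variates by inverting it. This is a minimal sketch under that assumption, not the dissertation's actual algorithm or code:

```python
import bisect
import random

def piecewise_linear_cdf_sampler(data):
    """Return a sampler that draws variates by inverting a
    piecewise-linear CDF through the order statistics: the CDF
    value at the i-th sorted observation (0-indexed) is taken
    to be i / (n - 1), with linear interpolation in between."""
    xs = sorted(data)
    n = len(xs)
    ps = [i / (n - 1) for i in range(n)]  # CDF knots

    def sample():
        u = random.random()
        # locate the segment containing u and interpolate linearly
        j = min(bisect.bisect_right(ps, u) - 1, n - 2)
        t = (u - ps[j]) / (ps[j + 1] - ps[j])
        return xs[j] + t * (xs[j + 1] - xs[j])

    return sample

# usage: generate variates that mimic a small observed sample
random.seed(1)
obs = [1.2, 0.7, 3.1, 2.2, 1.9, 0.4]
draw = piecewise_linear_cdf_sampler(obs)
variates = [draw() for _ in range(1000)]
```

Because the fitted CDF is supported on the range of the data, every generated variate falls between the sample minimum and maximum; the black-box character described above comes from the fact that no distributional choice is asked of the modeler.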

    Job-shop scheduling with approximate methods


    actuar: An R Package for Actuarial Science

    actuar is a package providing additional actuarial science functionality for the R statistical system. The project was launched in 2005, and the package has been available on the Comprehensive R Archive Network since February 2006. The current version of the package contains functions for use in the fields of loss distribution modeling, risk theory (including ruin theory), simulation of compound hierarchical models, and credibility theory. This paper presents in detail, but with few technical terms, the most recent version of the package.

    Threshold Regression for Survival Analysis: Modeling Event Times by a Stochastic Process Reaching a Boundary

    Many researchers have investigated first hitting times as models for survival data. First hitting times arise naturally in many types of stochastic processes, ranging from Wiener processes to Markov chains. In a survival context, the state of the underlying process represents the strength of an item or the health of an individual. The item fails or the individual experiences a clinical endpoint when the process reaches an adverse threshold state for the first time. The time scale can be calendar time or some other operational measure of degradation or disease progression. In many applications, the process is latent (i.e., unobservable). Threshold regression refers to first-hitting-time models with regression structures that accommodate covariate data. The parameters of the process, threshold state, and time scale may depend on the covariates. This paper reviews aspects of this topic and discusses fruitful avenues for future research. Published at http://dx.doi.org/10.1214/088342306000000330 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
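The latent-process idea can be sketched by simulation. Assuming (hypothetically, for illustration) a Wiener health process with negative drift starting at level x0, with failure when it first reaches the threshold 0, the first hitting time follows an inverse Gaussian distribution with mean x0/|drift|:

```python
import random

def first_hitting_time(x0=5.0, drift=-1.0, sigma=1.0, dt=0.01, t_max=100.0):
    """Simulate a Brownian-motion-with-drift health process starting
    at x0 on a discrete time grid; return the first time it crosses
    the adverse threshold 0, or None if it survives past t_max
    (a right-censored observation)."""
    x, t = x0, 0.0
    step_sd = sigma * dt ** 0.5
    while t < t_max:
        x += drift * dt + random.gauss(0.0, step_sd)
        t += dt
        if x <= 0.0:
            return t  # event time: process hit the threshold
    return None  # censored

random.seed(0)
times = [first_hitting_time() for _ in range(2000)]
events = [t for t in times if t is not None]
# theory: inverse Gaussian hitting time with mean x0/|drift| = 5
mean_t = sum(events) / len(events)
```

In threshold regression, the covariates would enter through x0, the drift, or the time scale; the sketch above fixes them only to show the event-time mechanism.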

    Statistical characterization for 3-D two-component tissue models using an extended microbeam technique

    None provided

    Design and Analysis of Monte Carlo Experiments

    Keywords: Monte Carlo experiments; simulation models; mathematical analysis; sensitivity analysis; experimental design

    Investigation, modelling and planning of stochastic concrete placing operations


    An analysis of connected replenishment operational data: the distribution of CONREP service time

    http://www.archive.org/details/analysisofconnec00bese
    Lieutenant Commander, United States Navy. Approved for public release; distribution is unlimited.

    Knowledge Discovery from Complex Event Time Data with Covariates

    In certain engineering applications, such as reliability engineering, complex types of data are encountered that require novel methods of statistical analysis. Handling covariates properly while managing missing values is a challenging task, and such issues arise frequently in reliability data analysis. Specifically, accelerated life tests (ALT) are usually conducted by exposing test units of a product to harsher-than-normal conditions to expedite the failure process. The resulting lifetime and/or censoring data are often modeled by a probability distribution along with a life-stress relationship. However, if the selected probability distribution and life-stress relationship cannot adequately describe the underlying failure process, the resulting reliability prediction will be misleading. In seeking new mathematical and statistical tools to facilitate the modeling of such data, a critical question to ask is: can we find a family of versatile probability distributions, along with a general life-stress relationship, to model complex lifetime data with covariates? In this dissertation, a more general method is proposed for modeling lifetime data with covariates. Reliability estimation based on complete failure-time data, or on failure-time data with certain types of censoring, has been extensively studied in statistics and engineering. However, the actual failure times of individual components are unavailable in many applications. Instead, only aggregate failure-time data are collected by actual users, due to technical and/or economic reasons. When dealing with such data for reliability estimation, practitioners often face challenges in selecting the underlying failure-time distributions and the corresponding statistical inference methods.
So far, only the exponential, normal, gamma, and inverse Gaussian (IG) distributions have been used in analyzing aggregate failure-time data, because these distributions have closed-form expressions for such data. However, this limited choice of probability distributions cannot satisfy the extensive needs of a variety of engineering applications. Phase-type (PH) distributions are robust and flexible in modeling failure-time data, as they can mimic a large collection of probability distributions of nonnegative random variables arbitrarily closely by adjusting their model structures. In this paper, PH distributions are utilized, for the first time, in reliability estimation based on aggregate failure-time data. To this end, a maximum likelihood estimation (MLE) method and a Bayesian alternative are developed. For the MLE method, an expectation-maximization (EM) algorithm is developed to estimate the model parameters, and the corresponding Fisher information is used to construct confidence intervals for the quantities of interest. For the Bayesian method, a procedure for performing point and interval estimation is also introduced. Several numerical examples show that the proposed PH-based reliability estimation methods are quite flexible and alleviate the burden of selecting a probability distribution when the underlying failure-time distribution is general or even unknown.
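A phase-type distribution is the time to absorption of a finite-state continuous-time Markov chain. The following sketch simulates that mechanism for an assumed 2-phase example (an Erlang(2) written in phase-type form); it illustrates only what a PH random variable is, not the EM or Bayesian estimation machinery developed in the dissertation:

```python
import random

def sample_phase_type(alpha, rates, trans):
    """Draw one absorption time from a continuous-time Markov chain.
    alpha  -- initial probabilities over the transient phases
    rates  -- total exit rate of each phase
    trans  -- trans[i][j]: probability that a jump from phase i goes
              to phase j; any remaining mass goes to absorption."""
    # choose the starting phase from alpha
    u, phase, acc = random.random(), 0, alpha[0]
    while u > acc:
        phase += 1
        acc += alpha[phase]
    t = 0.0
    while phase is not None:
        t += random.expovariate(rates[phase])  # sojourn in this phase
        u, nxt, acc = random.random(), None, 0.0
        for j, p in enumerate(trans[phase]):
            acc += p
            if u <= acc:
                nxt = j
                break
        phase = nxt  # None means the jump went to the absorbing state
    return t

# hypothetical example: phase 0 -> phase 1 -> absorbed, each at rate 2,
# i.e. an Erlang(2, 2) distribution with mean 2 / 2.0 = 1.0
random.seed(0)
alpha = [1.0, 0.0]
rates = [2.0, 2.0]
trans = [[0.0, 1.0], [0.0, 0.0]]
times = [sample_phase_type(alpha, rates, trans) for _ in range(5000)]
mean_t = sum(times) / len(times)
```

The flexibility claimed above comes from enlarging the phase space: with enough phases and a suitable subgenerator, such absorption times can approximate an arbitrary nonnegative distribution.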