
    Computational applications in stochastic operations research

    Several computational applications in stochastic operations research are presented, where, for each application, a computational engine is used to achieve results that are otherwise overly tedious by hand calculation, or in some cases mathematically intractable. Algorithms and code are developed and implemented, with specific emphasis placed on achieving exact results, and substantiated via Monte Carlo simulation. The code for each application is provided in the software language utilized, and the algorithms are available for coding in other environments. The topics include univariate and bivariate nonparametric random variate generation using a piecewise-linear cumulative distribution function, deriving exact statistical process control chart constants for non-normal sampling, testing probability distribution conformance to Benford's law, and transient analysis of M/M/s queueing systems. The nonparametric random variate generation chapters provide the modeler with a method of generating univariate and bivariate samples when only observed data are available. The method is completely nonparametric and is capable of mimicking multimodal joint distributions. The algorithm is black-box, in that no decisions are required from the modeler in generating variates for simulation. The statistical process control chart constant chapter develops constants for select non-normal distributions and provides tabulated results for researchers who have identified a given process as non-normal. The constants derived are bias correction factors for the sample range and sample standard deviation. The Benford conformance testing chapter offers the Kolmogorov-Smirnov test as an alternative to the standard chi-square goodness-of-fit test when testing whether the leading digits of a data set are distributed according to Benford's law. The alternative test has the advantage of being an exact test for all sample sizes, removing the usual sample size restriction involved with the chi-square goodness-of-fit test. The transient queueing analysis chapter develops and automates the construction of the sojourn time distribution for the nth customer in an M/M/s queue with k customers initially present at time 0 (k ≥ 0), without the usual limit on traffic intensity, ρ < 1, providing an avenue to conduct transient analysis on various measures of performance for a given initial number of customers in the system. It also develops and automates the construction of the joint sojourn time probability distribution function for pairs of customers, allowing the calculation of the exact covariance between customer sojourn times.
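    As a concrete illustration of the univariate case, the sketch below inverts a piecewise-linear CDF fitted to observed data. It is a minimal Python rendering of the general idea, not the dissertation's exact algorithm; the function name and the equal-height knot placement are assumptions.

```python
import numpy as np

def piecewise_linear_cdf_sampler(data, rng=None):
    """Return a sampler that inverts a piecewise-linear CDF fitted to `data`.

    The empirical CDF is linearly interpolated between the sorted
    observations, so generated variates can fall anywhere in the observed
    range rather than only at the data points themselves.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.sort(np.asarray(data, dtype=float))
    # One simple knot placement: CDF heights 0, 1/(n-1), ..., 1 at the
    # sorted observations (an assumption; other placements are possible).
    p = np.linspace(0.0, 1.0, x.size)

    def sample(size):
        u = rng.uniform(size=size)
        # Inversion: push U(0,1) draws through the interpolated inverse CDF.
        return np.interp(u, p, x)

    return sample

# Usage: mimic a bimodal sample without choosing any parametric family.
rng = np.random.default_rng(1)
obs = np.concatenate([rng.normal(0, 1, 500), rng.normal(6, 0.5, 500)])
draws = piecewise_linear_cdf_sampler(obs, rng)(10_000)
```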

    Nonparametric Estimation of the Cumulative Intensity Function for a Nonhomogeneous Poisson Process from Overlapping Realizations

    A nonparametric technique for estimating the cumulative intensity function of a nonhomogeneous Poisson process from one or more realizations on an interval is extended here to include realizations that overlap. This technique does not require any arbitrary parameters from the modeler, and the estimated cumulative intensity function can be used to generate a point process for simulation by inversion.
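    A minimal sketch of the workflow, assuming fully overlapping realizations on a common interval and the standard piecewise-linear estimator from the non-overlapping setting; the paper's extension to partially overlapping realizations is not reproduced here.

```python
import numpy as np

def fit_cumulative_intensity(realizations, horizon):
    """Piecewise-linear estimate of the cumulative intensity Lambda(t) on
    [0, horizon], from k realizations assumed here to cover the whole
    interval (the paper's extension handles partially overlapping ones)."""
    times = np.sort(np.concatenate([np.asarray(r, float) for r in realizations]))
    k, n = len(realizations), times.size
    # Lambda rises to n/k (average events per realization) over the interval.
    knots_t = np.concatenate([[0.0], times, [horizon]])
    knots_L = np.concatenate([[0.0], np.arange(1, n + 1) * n / ((n + 1) * k), [n / k]])
    return knots_t, knots_L

def simulate_nhpp(knots_t, knots_L, rng=None):
    """Generate one realization by inversion: cumulative Exp(1) gaps form a
    unit-rate Poisson process, which is mapped through Lambda's inverse."""
    rng = np.random.default_rng() if rng is None else rng
    arrivals, s = [], rng.exponential()
    while s < knots_L[-1]:
        arrivals.append(s)
        s += rng.exponential()
    # Inverting a piecewise-linear function = swapping its coordinate axes.
    return np.interp(arrivals, knots_L, knots_t)

# Three overlapping realizations on [0, 10]; fit, then simulate a new one.
reals = [[1.1, 2.4, 7.9], [0.8, 2.1, 3.3, 8.5], [2.9, 6.2]]
new_times = simulate_nhpp(*fit_cumulative_intensity(reals, horizon=10.0))
```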

    Computationally Efficient Nonparametric Importance Sampling

    The variance reduction established by importance sampling strongly depends on the choice of the importance sampling distribution. A good choice is often hard to achieve, especially for high-dimensional integration problems. Nonparametric estimation of the optimal importance sampling distribution (known as nonparametric importance sampling) is a reasonable alternative to parametric approaches. In this article, nonparametric variants of both the self-normalized and the unnormalized importance sampling estimators are proposed and investigated. A common critique of nonparametric importance sampling is the increased computational burden compared to parametric methods. We solve this problem to a large degree by utilizing the linear blend frequency polygon estimator instead of a kernel estimator. Mean square error convergence properties are investigated, leading to recommendations for the efficient application of nonparametric importance sampling. In particular, we show that nonparametric importance sampling asymptotically attains the optimal importance sampling variance. The efficiency of nonparametric importance sampling algorithms relies heavily on the computational efficiency of the employed nonparametric estimator. The linear blend frequency polygon outperforms kernel estimators on criteria such as efficient sampling and evaluation. Furthermore, it is compatible with the inversion method for sample generation, which allows our algorithms to be combined with other variance reduction techniques such as stratified sampling. Empirical evidence for the usefulness of the suggested algorithms is obtained by means of three benchmark integration problems. As an application, we estimate the distribution of the queue length of a spam filter queueing system based on real data.
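    The sketch below illustrates the two-stage idea in one dimension, substituting a plain weighted histogram for the linear blend frequency polygon the article actually uses; the target integrand, bin count, and sample sizes are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Target: mu = E_f[h(X)] with f = N(0, 1) and h(x) = x^2 (true value 1).
f = stats.norm(0.0, 1.0)
h = lambda x: x**2

# Stage 1: pilot sample from a wide instrumental density q, weighted so
# the weights are proportional to the optimal g* ~ |h(x)| f(x).
q = stats.norm(0.0, 3.0)
xp = q.rvs(2000, random_state=rng)
w = np.abs(h(xp)) * f.pdf(xp) / q.pdf(xp)

# Nonparametric estimate of g*: a weighted histogram stands in for the
# linear blend frequency polygon used in the article.
counts, edges = np.histogram(xp, bins=50, weights=w, density=True)

# Stage 2: sample from the estimated density by CDF inversion -- the
# property that also makes the method compatible with stratified sampling.
cdf = np.concatenate([[0.0], np.cumsum(counts * np.diff(edges))])
cdf /= cdf[-1]
x = np.interp(rng.uniform(size=50_000), cdf, edges)

# Unnormalized importance sampling estimator with density g_hat.
g_hat = counts[np.clip(np.searchsorted(edges, x) - 1, 0, counts.size - 1)]
print(np.mean(h(x) * f.pdf(x) / g_hat))   # close to 1
```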

    Models beyond the Dirichlet process

    Bayesian nonparametric inference is a relatively young area of research that has recently undergone strong development. Much of its success can be explained by the considerable degree of flexibility it ensures in statistical modelling compared to parametric alternatives, and by the emergence of new and efficient simulation techniques that make nonparametric models amenable to concrete use in a number of applied statistical problems. Since its introduction in 1973 by T.S. Ferguson, the Dirichlet process has emerged as a cornerstone in Bayesian nonparametrics. Nonetheless, in some cases of interest for statistical applications the Dirichlet process is not an adequate prior choice and alternative nonparametric models need to be devised. In this paper we provide a review of Bayesian nonparametric models that go beyond the Dirichlet process.
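    For readers new to the area, the baseline construction that the reviewed models generalize is Sethuraman's stick-breaking representation of the Dirichlet process. A minimal truncated sketch follows; the truncation level and base measure are illustrative choices, and none of the beyond-DP models reviewed in the paper are implemented here.

```python
import numpy as np

def dp_stick_breaking(alpha, base_sampler, n_atoms, rng=None):
    """Truncated stick-breaking draw from DP(alpha, G0): atom locations come
    from the base measure G0, and weights from Beta(1, alpha) stick breaks.
    Truncation at n_atoms approximates the infinite discrete measure."""
    rng = np.random.default_rng() if rng is None else rng
    betas = rng.beta(1.0, alpha, size=n_atoms)
    # w_k = beta_k * prod_{j<k} (1 - beta_j): what is left of the stick.
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    weights = betas * remaining
    atoms = base_sampler(n_atoms, rng)
    return atoms, weights

# One draw from DP(5, N(0, 1)): a discrete random probability measure.
atoms, weights = dp_stick_breaking(
    alpha=5.0, base_sampler=lambda n, r: r.normal(0.0, 1.0, n), n_atoms=200)
```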

    Building Loss Models

    This paper is intended as a guide to building insurance risk (loss) models. A typical model for insurance risk, the so-called collective risk model, treats the aggregate loss as having a compound distribution with two main components: one characterizing the arrival of claims and another describing the severity (or size) of the loss resulting from the occurrence of a claim. In this paper we first present efficient simulation algorithms for several classes of claim arrival processes. We then review a collection of loss distributions and present methods that can be used to assess the goodness-of-fit of the claim size distribution. The collective risk model is often used in health insurance and in general insurance, whenever the main risk components are the number of insurance claims and the amount of the claims. It can also be used for modeling other, non-insurance product risks, such as credit and operational risk.
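    A minimal Monte Carlo sketch of the collective risk model: it assumes homogeneous Poisson claim arrivals and lognormal severities purely for illustration, whereas the paper covers several arrival processes and a whole catalogue of loss distributions.

```python
import numpy as np

def simulate_aggregate_losses(n_years, lam, mu, sigma, rng=None):
    """Monte Carlo for the collective risk model S = X_1 + ... + X_N:
    claim count N ~ Poisson(lam), severities X_i ~ LogNormal(mu, sigma)."""
    rng = np.random.default_rng() if rng is None else rng
    counts = rng.poisson(lam, size=n_years)
    # One aggregate loss per simulated year.
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

# 100,000 simulated years: mean aggregate loss and a 99.5% quantile,
# the kind of tail measure used for solvency capital.
S = simulate_aggregate_losses(100_000, lam=120, mu=8.0, sigma=1.2)
print(S.mean(), np.quantile(S, 0.995))
```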

    A semiparametric Bayesian proportional hazards model for interval censored data with frailty effects

    Background: Multivariate analysis of interval-censored event data based on classical likelihood methods is notoriously cumbersome, and likelihood inference for models that additionally include random effects is not available at all. Existing algorithms pose practical problems such as costly matrix inversion, slow convergence, and no assessment of statistical uncertainty.
    Methods: MCMC procedures combined with imputation are used to implement hierarchical models for interval-censored data within a Bayesian framework.
    Results: Two examples from clinical practice demonstrate the handling of clustered interval-censored event times as well as multilayer random effects for inter-institutional quality assessment. The software developed is called survBayes and is freely available at CRAN.
    Conclusion: The proposed software supports the solution of complex analyses in many fields of clinical epidemiology as well as health services research.
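    The survBayes package itself is written in R; purely to illustrate the imputation-within-MCMC idea on the simplest possible case, the Python sketch below runs a data-augmentation Gibbs sampler for interval-censored times under an exponential lifetime model with a conjugate Gamma prior. It has no covariates or frailty effects, which the paper's actual model includes.

```python
import numpy as np

def gibbs_interval_censored_exp(L, R, n_iter=5000, a=1.0, b=1.0, rng=None):
    """Data-augmentation Gibbs sampler for interval-censored event times
    under an exponential lifetime model with a Gamma(a, b) prior on the
    rate lam. Each sweep (1) imputes latent times from an Exp(lam)
    truncated to [L_i, R_i], then (2) updates lam from its conjugate
    Gamma(a + n, b + sum t_i) full conditional."""
    rng = np.random.default_rng() if rng is None else rng
    L, R = np.asarray(L, float), np.asarray(R, float)
    n, lam = L.size, 1.0
    draws = np.empty(n_iter)
    for it in range(n_iter):
        # Inversion sampling from the truncated exponential on [L_i, R_i].
        u = rng.uniform(size=n)
        cL, cR = np.exp(-lam * L), np.exp(-lam * R)
        t = -np.log(cL - u * (cL - cR)) / lam
        # Conjugate Gamma update (numpy parameterizes by scale = 1/rate).
        lam = rng.gamma(a + n, 1.0 / (b + t.sum()))
        draws[it] = lam
    return draws

# Each event is only known to lie inside its bracketing interval.
post = gibbs_interval_censored_exp(L=[0.5, 1.0, 2.0], R=[1.5, 3.0, 4.0])
print(post[1000:].mean())   # posterior mean of the rate after burn-in
```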