
    Parametric versus Nonparametric Treatment of Unobserved Heterogeneity in Multivariate Failure Times

    Two contrasting methods for the estimation of a frailty model of multivariate failure times are presented. The assumed Accelerated Failure Time Model includes censored data, observed covariates and unobserved heterogeneity. The parametric estimator maximizes the marginal likelihood, whereas the method that does not require distributional assumptions combines the GEE approach (Liang and Zeger, 1986) with the Buckley-James (1979) estimator for censored data. Monte Carlo experiments are conducted to compare the methods under various conditions with regard to bias and efficiency. The ML estimator is found to be rather robust against some misspecifications, and both methods appear to be viable alternatives in uncertain circumstances for which no exact solution exists. The methods are applied to data on recurrent purchases of yogurt brands.
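
    A minimal sketch of the kind of Monte Carlo experiment described above, assuming a shared-frailty AFT data-generating process with right censoring. The estimator shown is a naive complete-case least-squares stand-in (the paper's marginal-likelihood and GEE/Buckley-James estimators are not reproduced here); only the bias/efficiency bookkeeping of the experiment is illustrated, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clusters(n_clusters=200, cluster_size=4, beta=1.0, frailty_sd=0.5):
    """Clustered AFT data: log T = beta*x + shared frailty + noise,
    with independent log-normal right censoring (all values hypothetical)."""
    x = rng.normal(size=(n_clusters, cluster_size))
    frailty = rng.normal(scale=frailty_sd, size=(n_clusters, 1))
    log_t = beta * x + frailty + rng.normal(size=x.shape)
    log_c = rng.normal(loc=1.0, scale=1.0, size=x.shape)   # censoring times
    return x, np.minimum(log_t, log_c), log_t <= log_c

def complete_case_slope(x, y, uncensored):
    """Naive stand-in estimator: least squares on uncensored records only."""
    xf, yf = x[uncensored], y[uncensored]
    return float(np.sum(xf * yf) / np.sum(xf * xf))

beta_true, n_rep = 1.0, 500
est = np.array([complete_case_slope(*simulate_clusters(beta=beta_true))
                for _ in range(n_rep)])
print(f"bias = {est.mean() - beta_true:+.3f}, sd = {est.std(ddof=1):.3f}")
```

    The naive estimator is biased because censored records are simply dropped; the study's experiments quantify analogous finite-sample bias and efficiency for the actual ML and GEE/Buckley-James estimators.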

    Monte Carlo Techniques in Computational Stochastic Mechanics

    A state-of-the-art review of simulation methods in stochastic structural analysis is presented. The purpose of the paper is to review some of the different methods available for analysing the effects of randomness of models and data in structural analysis. While most of these techniques can be grouped under the general name of Monte Carlo methods, the various published algorithms are each more suitable to some objectives of analysis than to others. These objectives have been classified into the following categories: (1) the statistical description of the structural scattering, a primary analysis in which the uncertain parameters are treated as random variables; (2) the consideration of the spatial variability of the random parameters, which must then be modelled as Random Fields (Stochastic Finite Elements); (3) advanced Monte Carlo methods for calculating the usually very low failure probabilities (Reliability Analysis); and (4) a deterministic technique that departs from the random nature of the above methods but can be linked with them in some cases, known as the Response Surface Method. All of these techniques are critically examined and discussed. The concluding remarks point out some research needs in the field from the authors' point of view.
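
    As an illustration of the reliability-analysis objective mentioned under (3), the sketch below estimates a failure probability by crude Monte Carlo for a hypothetical two-variable limit state (resistance minus load). The distributions and parameters are invented for the example; the advanced variance-reduction schemes the review surveys are what one would turn to for genuinely small failure probabilities.

```python
import numpy as np

rng = np.random.default_rng(42)

def limit_state(x):
    """Hypothetical limit-state function g(X): failure occurs when g(X) < 0.
    Here g = R - S, resistance minus load, for a simple two-variable example."""
    r, s = x[:, 0], x[:, 1]
    return r - s

n = 10**6
# hypothetical random variables: resistance ~ N(8, 1), load ~ N(5, 1.5)
samples = np.column_stack([rng.normal(8.0, 1.0, n), rng.normal(5.0, 1.5, n)])
failures = limit_state(samples) < 0.0
p_f = failures.mean()
# coefficient of variation of the crude MC estimator: sqrt((1 - p_f) / (n * p_f))
cov = np.sqrt((1.0 - p_f) / (n * p_f))
print(f"estimated failure probability: {p_f:.2e} (c.o.v. {cov:.2f})")
```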

    Comparison of data-driven uncertainty quantification methods for a carbon dioxide storage benchmark scenario

    A variety of methods is available to quantify uncertainties arising within the modeling of flow and transport in carbon dioxide storage, but there is a lack of thorough comparisons. Usually, raw data from such storage sites can hardly be described by theoretical statistical distributions, since only very limited data are available. Hence, exact information on distribution shapes for all uncertain parameters is very rare in realistic applications. We discuss and compare four different methods tested for data-driven uncertainty quantification based on a benchmark scenario of carbon dioxide storage. In the benchmark, for which we provide data and code, carbon dioxide is injected into a saline aquifer modeled by the nonlinear capillarity-free fractional flow formulation for two incompressible fluid phases, namely carbon dioxide and brine. To cover different aspects of uncertainty quantification, we incorporate various sources of uncertainty such as uncertainty of boundary conditions, of conceptual model definitions and of material properties. We consider recent versions of the following non-intrusive and intrusive uncertainty quantification methods: arbitrary polynomial chaos, spatially adaptive sparse grids, kernel-based greedy interpolation and hybrid stochastic Galerkin. The performance of each approach is demonstrated by assessing the expectation value and standard deviation of the carbon dioxide saturation against a reference statistic based on Monte Carlo sampling. We compare the convergence of all methods, reporting on accuracy with respect to the number of model runs and resolution. Finally, we offer suggestions about the methods' advantages and disadvantages that can guide the modeler for uncertainty quantification in carbon dioxide storage and beyond.
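
    A minimal sketch of the Monte Carlo reference statistic used for such a comparison: sample the uncertain inputs, run the model, and report the mean and standard deviation of the output. The model below is an arbitrary placeholder for the CO2/brine fractional-flow simulator and the input distributions are invented; in the benchmark itself both are prescribed by the provided data and code.

```python
import numpy as np

rng = np.random.default_rng(1)

def saturation_model(boundary_pressure, permeability, porosity):
    """Placeholder for the benchmark's flow model: in the real study this would
    be a run of the fractional-flow CO2/brine simulator returning the CO2
    saturation at a point of interest; here an arbitrary smooth function stands
    in so that the sampling workflow is runnable."""
    return 1.0 / (1.0 + np.exp(-(boundary_pressure + permeability - porosity)))

n = 5000
# hypothetical input uncertainties (the benchmark uses data-driven distributions)
bp = rng.uniform(-1.0, 1.0, n)
perm = rng.lognormal(mean=0.0, sigma=0.3, size=n)
poro = rng.normal(loc=0.2, scale=0.02, size=n)

sat = saturation_model(bp, perm, poro)
print("E[S_CO2] =", sat.mean(), " Std[S_CO2] =", sat.std(ddof=1))
```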

    Calculating partial expected value of perfect information via Monte Carlo sampling algorithms

    Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition involves two nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample the remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed, and the mathematical conditions for their use are considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate (1) the bias due to maximization and the inaccuracy of shortcut algorithms, (2) the case when correlated variables are present, and (3) the case when there is nonlinearity in net benefit functions. Even relatively small correlation or nonlinearity can make the shortcut algorithm substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level could be efficient, and that relatively small numbers of samples can sometimes be used. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended, both for greater understanding of decision uncertainty and for analyzing research priorities.
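
    A minimal sketch of the nested (two-level) Monte Carlo estimator described above, assuming a made-up net-benefit function, two decision options, and independence between the parameters of interest and the remaining parameters (the general algorithm samples the latter conditionally on the former). All names, distributions and sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def net_benefit(decision, theta_i, theta_c):
    """Hypothetical net-benefit function of a decision and two groups of
    uncertain parameters (theta_i: parameters of interest, theta_c: the rest)."""
    if decision == 0:
        return 10.0 + theta_c
    return 8.0 + 3.0 * theta_i - 0.5 * theta_c

decisions = (0, 1)
n_outer, n_inner = 1000, 1000

# baseline: expected net benefit under current information, maximised over decisions
theta_i_all = rng.normal(1.0, 1.0, 50_000)
theta_c_all = rng.normal(0.0, 2.0, 50_000)
ev_current = max(np.mean(net_benefit(d, theta_i_all, theta_c_all))
                 for d in decisions)

# nested loops: outer over theta_i (perfect information on it),
# inner over the remaining parameters theta_c
outer_values = []
for ti in rng.normal(1.0, 1.0, n_outer):
    theta_c = rng.normal(0.0, 2.0, n_inner)
    best = max(np.mean(net_benefit(d, ti, theta_c)) for d in decisions)
    outer_values.append(best)

partial_evpi = np.mean(outer_values) - ev_current
print("partial EVPI estimate:", partial_evpi)
```

    Because each inner-loop mean is noisy when n_inner is small, taking the maximum over decisions inflates the outer average, which is the upward bias in the EVPI estimate that the article attributes to small sample sizes.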