
    Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM modeling good research practices task force working group - 6

    A model’s purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished both from parameter (second-order) uncertainty and from heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and of uncertainty in parameters is part of a single process, and explores the link from parameter uncertainty through to decision uncertainty and the relationship to value-of-information analysis. The article also makes extensive recommendations around the reporting of uncertainty, both in terms of deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis.
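
    As a rough illustration of how these quantities are typically obtained from a probabilistic analysis (a generic sketch, not the task force's own code), the following computes one point of a cost-effectiveness acceptability curve and the expected value of perfect information from simulated net-benefit samples; the two strategies and all numerical values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical probabilistic-analysis output: simulated costs and QALYs for two strategies
n_sims = 10_000
cost = {"A": rng.normal(10_000, 1_500, n_sims), "B": rng.normal(12_500, 2_000, n_sims)}
qaly = {"A": rng.normal(5.0, 0.4, n_sims), "B": rng.normal(5.3, 0.5, n_sims)}

def net_benefit(wtp):
    """Net monetary benefit of each strategy at willingness to pay `wtp` per QALY."""
    return np.column_stack([wtp * qaly[s] - cost[s] for s in cost])

def ceac_point(wtp):
    """Probability that each strategy has the highest net benefit (one point on the CEAC)."""
    best = net_benefit(wtp).argmax(axis=1)
    return {s: float(np.mean(best == i)) for i, s in enumerate(cost)}

def evpi(wtp):
    """EVPI = E[max over strategies] - max over strategies of E[net benefit]."""
    nb = net_benefit(wtp)
    return nb.max(axis=1).mean() - nb.mean(axis=0).max()

for wtp in (20_000, 30_000):
    print(wtp, ceac_point(wtp), round(evpi(wtp), 1))
```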

    The value of myocardial perfusion scintigraphy in the diagnosis and management of angina and myocardial infarction : a probabilistic analysis

    Background and Aim. Coronary heart disease (CHD) is the most common cause of death in the United Kingdom, accounting for more than 120,000 deaths in 2001, among the highest rates in the world. This study reports an economic evaluation of single photon emission computed tomography myocardial perfusion scintigraphy (SPECT) for the diagnosis and management of coronary artery disease (CAD). Methods. Strategies involving SPECT with and without stress electrocardiography (ECG) and coronary angiography (CA) were compared to diagnostic strategies not involving SPECT. The diagnosis decision was modelled with a decision tree and the long-term costs and consequences with a Markov model. Data to populate the models were obtained from a series of systematic reviews. Unlike earlier evaluations, a probabilistic analysis was included to assess the statistical imprecision of the results. The results are presented in terms of incremental cost per quality-adjusted life year (QALY). Results. At a CAD prevalence of 10.5%, SPECT-based strategies are cost-effective; ECG-CA is highly unlikely to be optimal. At a ceiling ratio of £20,000 per QALY, SPECT-CA has a 90% likelihood of being optimal. Beyond this threshold, this strategy becomes less likely to be cost-effective. At more than £75,000 per QALY, coronary angiography is most likely to be optimal. For higher levels of prevalence (around 50%) and a threshold of more than £10,000 per QALY, coronary angiography is the optimal decision. Conclusions. SPECT-based strategies are likely to be cost-effective when the risk of CAD is modest (10.5%). Sensitivity analyses show these strategies dominated non-SPECT-based strategies for risk of CAD up to 4%. At higher levels of prevalence, invasive strategies may become worthwhile. Finally, sensitivity analyses show stress echocardiography as a potentially cost-effective option, and further research to assess the relative cost-effectiveness of echocardiography should also be performed. This article was developed from a Technology Assessment Review conducted on behalf of the National Institute for Clinical Excellence (NICE) and was funded by the Department of Health on a grant administered by the National Coordinating Centre for Health Technology Assessment. The Health Economics Research Unit and the Health Services Research Unit are core funded by the Chief Scientist Office of the Scottish Executive Health Department.
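
    The long-term component of evaluations like this one is commonly a Markov cohort model. The sketch below is a generic, hypothetical three-state example (well, post-event, dead) showing how discounted costs and QALYs accumulate over annual cycles; the transition probabilities, costs, and utilities are illustrative and are not taken from the SPECT evaluation.

```python
import numpy as np

# Hypothetical annual transition matrix over states: well, post-event, dead
P = np.array([
    [0.90, 0.07, 0.03],
    [0.00, 0.85, 0.15],
    [0.00, 0.00, 1.00],
])
state_cost = np.array([200.0, 1500.0, 0.0])     # annual cost per state (illustrative)
state_utility = np.array([0.95, 0.70, 0.0])     # annual QALY weight per state (illustrative)

def run_cohort(cycles=40, discount=0.035):
    """Trace a cohort through the Markov model and return discounted cost and QALYs."""
    dist = np.array([1.0, 0.0, 0.0])            # everyone starts in the 'well' state
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = 1.0 / (1.0 + discount) ** t        # discount factor for cycle t
        total_cost += df * dist @ state_cost
        total_qaly += df * dist @ state_utility
        dist = dist @ P                         # advance the cohort one annual cycle
    return total_cost, total_qaly

cost, qaly = run_cohort()
print(f"discounted cost {cost:.0f}, discounted QALYs {qaly:.2f}")
```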

    Calculating the Expected Value of Sample Information in Practice: Considerations from Three Case Studies

    Full text link
    Investing efficiently in future research to improve policy decisions is an important goal. Expected Value of Sample Information (EVSI) can be used to select the specific design and sample size of a proposed study by assessing the benefit of a range of different studies. Estimating EVSI with the standard nested Monte Carlo algorithm has a notoriously high computational burden, especially when using a complex decision model or when optimizing over study sample sizes and designs. Therefore, a number of more efficient EVSI approximation methods have been developed. However, these approximation methods have not been compared and therefore their relative advantages and disadvantages are not clear. A consortium of EVSI researchers, including the developers of several approximation methods, compared four EVSI methods using three previously published health economic models. The examples were chosen to represent a range of real-world contexts, including situations with multiple study outcomes, missing data, and data from an observational rather than a randomized study. The computational speed and accuracy of each method were compared, and the relative advantages and implementation challenges of the methods were highlighted. In each example, the approximation methods took minutes or hours to achieve reasonably accurate EVSI estimates, whereas the traditional Monte Carlo method took weeks. Specific methods are particularly suited to problems where we wish to compare multiple proposed sample sizes, when the proposed sample size is large, or when the health economic model is computationally expensive. All the evaluated methods gave estimates similar to those given by traditional Monte Carlo, suggesting that EVSI can now be efficiently computed with confidence in realistic examples.
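
    To make the computational burden concrete, the sketch below implements the standard nested (two-level) Monte Carlo EVSI estimator for a deliberately simple, conjugate decision problem; the prior, net-benefit function, and proposed study design are hypothetical stand-ins for the far more expensive health economic models used in the case studies.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical decision: adopt a new treatment (net benefit depends on effect theta) or not.
prior_mean, prior_sd = 0.10, 0.08        # prior on treatment effect theta (illustrative)
wtp, extra_cost = 20_000.0, 1_000.0      # willingness to pay per QALY, added treatment cost

def net_benefit(theta):
    """Incremental net benefit of adopting versus standard care (hypothetical model)."""
    return wtp * theta - extra_cost

def evsi_nested(n_study, sd_individual=0.3, n_outer=2_000, n_inner=2_000):
    """Standard two-level Monte Carlo EVSI estimator for a proposed study of size n_study."""
    se = sd_individual / np.sqrt(n_study)
    # Value of a decision made with current information (standard care has net benefit 0)
    current_best = max(net_benefit(rng.normal(prior_mean, prior_sd, 100_000)).mean(), 0.0)

    preposterior_best = np.empty(n_outer)
    for i in range(n_outer):
        theta_true = rng.normal(prior_mean, prior_sd)    # outer loop: draw a 'true' effect...
        xbar = rng.normal(theta_true, se)                # ...and a simulated study result
        # Conjugate normal-normal update of the prior given the simulated data
        post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
        post_mean = post_var * (prior_mean / prior_sd**2 + xbar / se**2)
        theta_post = rng.normal(post_mean, np.sqrt(post_var), n_inner)   # inner loop
        preposterior_best[i] = max(net_benefit(theta_post).mean(), 0.0)

    return preposterior_best.mean() - current_best

print(round(evsi_nested(n_study=200), 1))
```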

    Seismic vulnerability assessment on a territorial scale based on a Bayesian approach

    Italian historical centres are mostly characterized by aggregate buildings. As required by the Italian codes (Norme Tecniche per le Costruzioni 2008 and Circolare n. 617), the analysis of the most representative local collapse mechanisms must be performed in order to assess their vulnerability. In this article, the analysis of out-of-plane local collapse mechanisms is implemented using a new method based on a probabilistic approach. The information needed to implement the local mechanism analyses is usually uncertain or missing, so in many cases it can only be hypothesized on the basis of data collected for other buildings during the on-site survey. In this context, a Bayesian approach makes it possible to infer missing building information (e.g. wall thickness and interstorey height) from data collected with certainty (e.g. facade height). The historical centre of Timisoara (Romania) is selected as the case study for this new method of analysis, given the extent of the on-site survey already carried out in the area (information on more than 200 structural units has been collected) and the seismic vulnerability assessment on an urban scale already performed with a traditional method. Results obtained with the two approaches are then compared, and the new method is validated and calibrated.
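
    The sketch below illustrates the general idea of conditioning on surveyed quantities to infer missing ones, not the authors' actual model: assuming facade height and wall thickness are jointly Gaussian with parameters estimated from the surveyed units, the missing thickness of a building is predicted from its measured height via the conditional Gaussian; all data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic survey data: facade height (m) and wall thickness (m) for surveyed units
height = rng.normal(9.0, 2.0, 200)
thickness = 0.03 * height + rng.normal(0.25, 0.05, 200)

# Fit a bivariate Gaussian to the surveyed (height, thickness) pairs
mu = np.array([height.mean(), thickness.mean()])
cov = np.cov(np.vstack([height, thickness]))

def infer_thickness(observed_height):
    """Conditional Gaussian: distribution of wall thickness given a measured facade height."""
    mean = mu[1] + cov[1, 0] / cov[0, 0] * (observed_height - mu[0])
    var = cov[1, 1] - cov[1, 0] ** 2 / cov[0, 0]
    return mean, np.sqrt(var)

m, s = infer_thickness(12.0)
print(f"thickness | height = 12 m: mean {m:.2f} m, sd {s:.2f} m")
```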

    The MaxEnt method for probabilistic structural fire engineering : performance for multi-modal outputs

    Probabilistic Risk Assessment (PRA) methodologies are gaining traction in fire engineering practice as a (necessary) means to demonstrate adequate safety for uncommon buildings. Further, an increasing number of applications of PRA-based methodologies in structural fire engineering can be found in the contemporary literature. However, to date, the combination of probabilistic methods and advanced numerical fire engineering tools has been limited by the absence of a methodology which is both efficient (i.e. requires a limited number of model evaluations) and unbiased (i.e. makes no prior assumptions about the output distribution type). An uncertainty quantification methodology (termed herein MaxEnt) has recently been presented, targeted at an unbiased assessment of the model output probability density function (PDF) using only a limited number of model evaluations. The MaxEnt method has been applied to structural fire engineering problems, with some applications benchmarked against Monte Carlo simulations (MCS), showing excellent agreement for single-modal distributions. However, the power of the method lies in applications where ‘validation’ is not computationally practical, e.g. uncertainty quantification for problems reliant upon complex models (such as FEA or CFD). A recent study by Gernay et al. applied the MaxEnt method to determine the PDF of the maximum permissible applied load supportable by a steel-composite slab panel undergoing tensile membrane action (TMA) when subject to realistic (parametric) fire exposures. The study incorporated uncertainties in both the manifestation of the fire and the mechanical material parameters. The output PDF of maximum permissible load was found to be bi-modal, highlighting different failure modes depending upon the combination of stochastic parameters. Whilst this outcome highlighted the importance of an unbiased approximation of the output PDF, in the absence of an MCS benchmark the study concluded that additional studies are warranted to give users confidence and guidance when applying the MaxEnt method in such situations. This paper summarises one such further study, building upon Case C as presented in Gernay et al.
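
    For readers unfamiliar with the approach, the sketch below shows the MaxEnt idea in its simplest form (not the specific implementation used in the cited work): given a handful of moments estimated from a limited set of model evaluations, the maximum-entropy density on an assumed bounded support is recovered by minimizing the dual objective over the Lagrange multipliers; the sample data and support are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
# Hypothetical (bi-modal) model output samples; in practice only a limited number
# of model evaluations would be available.
samples = np.concatenate([rng.normal(0.3, 0.05, 40), rng.normal(0.7, 0.08, 40)])

x = np.linspace(0.0, 1.0, 1001)                  # assumed support of the output
dx = x[1] - x[0]
n_moments = 6
moments = np.array([np.mean(samples ** k) for k in range(1, n_moments + 1)])
powers = np.vstack([x ** k for k in range(1, n_moments + 1)])

def dual(lmbda):
    """Dual objective log Z(lambda) + lambda . mu; its minimizer gives the MaxEnt density."""
    z = np.sum(np.exp(-lmbda @ powers)) * dx
    return np.log(z) + lmbda @ moments

res = minimize(dual, np.zeros(n_moments), method="Nelder-Mead",
               options={"maxiter": 20_000, "xatol": 1e-8, "fatol": 1e-10})
pdf = np.exp(-res.x @ powers)
pdf /= np.sum(pdf) * dx                          # normalised MaxEnt estimate of the output PDF
print("MaxEnt mean estimate:", np.sum(x * pdf) * dx)
```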

    A spatially distributed model for foreground segmentation

    Foreground segmentation is a fundamental first processing stage for vision systems which monitor real-world activity. In this paper we consider the problem of achieving robust segmentation in scenes where the appearance of the background varies unpredictably over time. Variations may be caused by processes such as moving water, or foliage moved by wind, and typically degrade the performance of standard per-pixel background models. Our proposed approach addresses this problem by modeling homogeneous regions of scene pixels as an adaptive mixture of Gaussians in color and space. Model components are used to represent both the scene background and moving foreground objects. Newly observed pixel values are probabilistically classified, such that the spatial variance of the model components supports correct classification even when the background appearance is significantly distorted. We evaluate our method over several challenging video sequences, and compare our results with both per-pixel and Markov Random Field based models. Our results show the effectiveness of our approach in reducing incorrect classifications.
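
    A minimal sketch of the classification step only, with fixed, hypothetical components rather than the paper's adaptive, learned model: each background component is a Gaussian over a joint (x, y, R, G, B) feature, and a new pixel is labelled foreground when no component explains it within a distance threshold.

```python
import numpy as np

# Hypothetical background model: a few Gaussian components over joint (x, y, R, G, B) features.
# In the paper the components are learned and adapted online; here they are fixed for illustration.
means = np.array([
    [ 40.0,  60.0, 120.0, 150.0, 200.0],   # e.g. a patch of sky
    [200.0, 180.0,  30.0, 120.0,  40.0],   # e.g. a patch of foliage
])
# Diagonal covariances: large spatial variance lets one component explain a whole region.
variances = np.array([
    [900.0, 900.0, 150.0, 150.0, 150.0],
    [400.0, 400.0, 400.0, 400.0, 400.0],
])

def is_foreground(feature, threshold=3.0):
    """Label a pixel foreground if no background component matches it within `threshold`
    standard deviations (a squared Mahalanobis distance test per component)."""
    d2 = np.sum((feature - means) ** 2 / variances, axis=1)   # per-component distance
    return np.min(d2) > threshold ** 2 * means.shape[1]

pixel = np.array([45.0, 62.0, 118.0, 148.0, 205.0])           # (x, y, R, G, B)
print(is_foreground(pixel))   # close to the 'sky' component, so classified as background
```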

    Beyond English text: Multilingual and multimedia information retrieval.
