
    Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM modeling good research practices task force working group - 6

    A model's purpose is to inform medical decisions and health care resource allocation. Modelers employ quantitative methods to structure the clinical, epidemiological, and economic evidence base and gain qualitative insight to assist decision makers in making better decisions. From a policy perspective, the value of a model-based analysis lies not simply in its ability to generate a precise point estimate for a specific outcome but also in the systematic examination and responsible reporting of uncertainty surrounding this outcome and the ultimate decision being addressed. Different concepts relating to uncertainty in decision modeling are explored. Stochastic (first-order) uncertainty is distinguished from both parameter (second-order) uncertainty and from heterogeneity, with structural uncertainty relating to the model itself forming another level of uncertainty to consider. The article argues that the estimation of point estimates and uncertainty in parameters is part of a single process and explores the link from parameter uncertainty through to decision uncertainty and the relationship to value-of-information analysis. The article also makes extensive recommendations around the reporting of uncertainty, both in terms of deterministic sensitivity analysis techniques and probabilistic methods. Expected value of perfect information is argued to be the most appropriate presentational technique, alongside cost-effectiveness acceptability curves, for representing decision uncertainty from probabilistic analysis.
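    To make the link from parameter uncertainty to decision uncertainty concrete, the sketch below computes the expected value of perfect information (EVPI) and one cost-effectiveness acceptability point from a probabilistic sensitivity analysis; the simulated net-benefit draws, the three strategies, and the fixed willingness-to-pay threshold are hypothetical stand-ins for the output of an actual decision model.

```python
import numpy as np

# Hypothetical probabilistic sensitivity analysis (PSA) output: net monetary
# benefit for each of D strategies across S parameter draws, at one fixed
# willingness-to-pay threshold. In practice these come from the decision model.
rng = np.random.default_rng(0)
S, D = 10_000, 3
net_benefit = rng.normal(loc=[20_000, 21_000, 20_500], scale=4_000, size=(S, D))

# EVPI: expected gain from resolving parameter (second-order) uncertainty
# before choosing a strategy.
evpi = net_benefit.max(axis=1).mean() - net_benefit.mean(axis=0).max()

# One point of a cost-effectiveness acceptability curve at this threshold:
# probability that each strategy yields the highest net benefit.
p_optimal = np.bincount(net_benefit.argmax(axis=1), minlength=D) / S

print(f"EVPI per decision: {evpi:,.0f}")
print("P(strategy is optimal):", np.round(p_optimal, 3))
```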

    Non-Parametric Calibration of Probabilistic Regression

    The task of calibration is to retrospectively adjust the outputs from a machine learning model to provide better probability estimates on the target variable. While calibration has been investigated thoroughly in classification, it has not yet been well established for regression tasks. This paper considers the problem of calibrating a probabilistic regression model to improve the estimated probability densities over the real-valued targets. We propose to calibrate a regression model through the cumulative probability density, which can be derived from calibrating a multi-class classifier. We provide three non-parametric approaches to solve the problem, two of which provide empirical estimates and the third of which provides smooth density estimates. The proposed approaches are experimentally evaluated to show their ability to improve the performance of regression models on the predictive likelihood.
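    The paper's three estimators are not reproduced here; as a minimal sketch of the general idea of calibrating through the cumulative distribution, the snippet below recalibrates the predictive CDF values of a deliberately mis-specified Gaussian model with isotonic regression on a held-out set. The data-generating process, the Gaussian predictive model, and the split are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

# Synthetic calibration set: the noise has heavier tails (Student-t) than the
# Gaussian predictive model assumes, so the raw predictive CDF is miscalibrated.
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=2000)
y = np.sin(x) + 0.3 * rng.standard_t(df=3, size=x.size)

mu, sigma = np.sin(x), 0.3                     # (mis-specified) predictive model
pit = norm.cdf(y, loc=mu, scale=sigma)         # predictive CDF evaluated at targets

# Empirical frequency of each predicted CDF level; identity means calibration.
order = np.argsort(pit)
empirical = np.arange(1, pit.size + 1) / pit.size

# Monotone map from predicted to recalibrated probability.
recal = IsotonicRegression(out_of_bounds="clip").fit(pit[order], empirical)

# A calibrated CDF for a new input is recal.predict(norm.cdf(y_grid, mu_new, sigma_new)).
print("fraction of targets below the model's 0.9 quantile:", (pit < 0.9).mean())
print("recalibrated probability for predicted 0.9:", recal.predict([0.9])[0])
```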

    The Effects of Framing, Risk, and Uncertainty on Contributions toward a Public Account: Experimental Evidence

    This paper uses laboratory evidence from four strategically equivalent voluntary contribution games to evaluate differences in contributions toward a public account due to framing, risk, and uncertainty. I test four hypotheses. (1) Individuals contribute more to a public account when the dilemma is framed as the mitigation of a public loss than the provision of a public good. (2) Individuals contribute more to a public account when the loss is certain than when faced with the risk of a loss. (3) Individuals contribute more to a public account when the loss is certain than when environmental uncertainty is associated with the public loss. (4) Individuals contribute more to a public account when the probability of loss is known than when the probability of loss is unknown. I find that contributions are greatest when the dilemma is framed as the mitigation of a certain public loss. Contributions diminish when environmental risk and uncertainty are introduced, but remain higher than for public good provision. Preliminary laboratory evidence suggests that government intervention may be more necessary in the provision of a public good than in the mitigation of a public bad. Furthermore, much of the debate surrounding optimal allocations of insurance and infrastructure investment seems to be the result of environmental uncertainty as opposed to strategic uncertainty.

    Bureaucratic minimal squawk behaviour: theory and evidence from US regulatory policy

    Regulators appointed on finite contracts have an incentive to signal their worth to the job market. This paper shows that, if contracts are sufficiently short, this can result in 'minimal squawk' behaviour. That is, regulated firms publicise the quality of unfavourable decisions, aware that regulators then set favourable policies more often to keep their professional reputation intact. Terms of office vary across US states, prompting an empirical test using firm-level data from the regulation of the US electric industry. Consistent with the theory, we find that shorter terms are associated with fewer rate of return reviews and higher residential electricity prices.

    Doubly Optimized Calibrated Support Vector Machine (DOC-SVM): an algorithm for joint optimization of discrimination and calibration.

    Historically, probabilistic models for decision support have focused on discrimination, e.g., minimizing the ranking error of predicted outcomes. Unfortunately, these models ignore another important aspect, calibration, which indicates the magnitude of correctness of model predictions. Using discrimination and calibration simultaneously can be helpful for many clinical decisions. We investigated tradeoffs between these goals, and developed a unified maximum-margin method to handle them jointly. Our approach, called Doubly Optimized Calibrated Support Vector Machine (DOC-SVM), concurrently optimizes two loss functions: the ridge regression loss and the hinge loss. Experiments using three breast cancer gene-expression datasets (i.e., GSE2034, GSE2990, and Chanrion's datasets) showed that our model generated more calibrated outputs when compared to other state-of-the-art models such as Support Vector Machine (p=0.03, p=0.13, and p<0.001) and Logistic Regression (p=0.006, p=0.008, and p<0.001). DOC-SVM also demonstrated better discrimination (i.e., higher AUCs) when compared to Support Vector Machine (p=0.38, p=0.29, and p=0.047) and Logistic Regression (p=0.38, p=0.04, and p<0.0001). DOC-SVM produced a model that was better calibrated without sacrificing discrimination, and hence may be helpful in clinical decision making.
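    The exact maximum-margin formulation and solver are in the paper; the sketch below only illustrates the underlying idea of jointly penalizing the hinge loss (discrimination) and a squared-error ridge-style loss (a calibration proxy) on a linear scorer, optimized with plain subgradient descent. The trade-off weight, learning rate, and synthetic data are assumptions for illustration, not the authors' settings.

```python
import numpy as np

def joint_hinge_ridge_loss(w, X, y, lam=1e-2, gamma=0.5):
    """Weighted sum of hinge loss (discrimination) and squared-error 'ridge'
    loss (calibration proxy) plus L2 regularization; y is in {-1, +1}."""
    scores = X @ w
    hinge = np.maximum(0.0, 1.0 - y * scores)
    ridge = (scores - y) ** 2
    return gamma * hinge.mean() + (1 - gamma) * ridge.mean() + lam * (w @ w)

def fit(X, y, lam=1e-2, gamma=0.5, lr=0.05, epochs=500):
    """Plain (sub)gradient descent on the joint objective."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        scores = X @ w
        active = (1.0 - y * scores) > 0                         # hinge active here
        grad_hinge = -(X * (y * active)[:, None]).mean(axis=0)  # subgradient
        grad_ridge = (2.0 * (scores - y)[:, None] * X).mean(axis=0)
        w -= lr * (gamma * grad_hinge + (1 - gamma) * grad_ridge + 2 * lam * w)
    return w

# Toy usage with synthetic data and labels in {-1, +1}.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.5, 0.0, 0.0]) + 0.3 * rng.normal(size=500))
w = fit(X, y)
print("training joint loss:", round(joint_hinge_ridge_loss(w, X, y), 4))
```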

    Binary Classifier Calibration using an Ensemble of Near Isotonic Regression Models

    Learning accurate probabilistic models from data is crucial in many practical tasks in data mining. In this paper we present a new non-parametric calibration method called ensemble of near isotonic regression (ENIR). The method can be considered an extension of BBQ, a recently proposed calibration method, as well as of the commonly used calibration method based on isotonic regression. ENIR is designed to address the key limitation of isotonic regression, which is the monotonicity assumption on the predictions. Similar to BBQ, the method post-processes the output of a binary classifier to obtain calibrated probabilities. Thus it can be combined with many existing classification models. We demonstrate the performance of ENIR on synthetic and real datasets for commonly used binary classification models. Experimental results show that the method outperforms several common binary classifier calibration methods. In particular, on the real data ENIR commonly performs statistically significantly better than the other methods, and never worse. It is able to improve the calibration power of classifiers while retaining their discrimination power. The method is also computationally tractable for large-scale datasets, as it runs in O(N log N) time, where N is the number of samples.
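    ENIR itself fits an ensemble along the near-isotonic regularization path, which is not sketched here; the snippet below shows the plain isotonic-regression post-processing baseline that the abstract says ENIR extends, applied on a held-out calibration split. The dataset, classifier, and split sizes are hypothetical choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

# Train a classifier, then post-process its scores with isotonic regression
# fit on a separate calibration split.
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
raw_cal = clf.predict_proba(X_cal)[:, 1]
raw_te = clf.predict_proba(X_te)[:, 1]

iso = IsotonicRegression(out_of_bounds="clip").fit(raw_cal, y_cal)
calibrated_te = iso.predict(raw_te)

# Brier score: lower is better calibrated; the ranking (discrimination) is
# preserved because the fitted isotonic map is monotone.
print("Brier raw:       ", round(np.mean((raw_te - y_te) ** 2), 4))
print("Brier calibrated:", round(np.mean((calibrated_te - y_te) ** 2), 4))
```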