    Improving Forecast Accuracy by Guided Manual Overwrite in Forecast Debiasing

    We present ongoing work on a model-driven decision support system (DSS) that guides forecasters in reflecting on and adjusting their judgmental forecasts. We consider judgmental cash-flow forecasts generated by local experts in numerous subsidiaries of an international corporation. Forecasts are produced in a decentralized, non-standardized fashion; corporate managers and controllers then aggregate them into consolidated, corporate-wide plans for managing liquidity and foreign-exchange risk. Judgmental predictions, however, are often biased, and statistical debiasing techniques can be applied to improve forecast accuracy. Yet even though debiasing improves average accuracy, many originally appropriate forecasts may be automatically corrected in the wrong direction, for instance when a forecaster has incorporated knowledge of future events that cannot be derived statistically from past time series. To prevent high-impact erroneous corrections, we propose to prompt a forecaster for action whenever a submitted forecast falls outside the confidence bounds of a benchmark forecast. The benchmark is derived from a statistical debiasing model that considers the forecaster's past error patterns; the bounds correspond to percentiles of the error distribution of the debiased forecast. We discuss the determination of the confidence bounds, the selection of suspicious judgmental forecasts, types of (statistical) feedback to forecasters, and the incorporation of forecasters' reactions (comments, revisions) into future debiasing strategies.
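The out-of-bounds check described above can be sketched as follows; the function name, the 5th/95th percentile choice, and the interface are illustrative assumptions, not details taken from the paper:

```python
from statistics import quantiles

def flag_suspicious_forecast(judgment, benchmark, past_errors):
    """Flag a judgmental forecast that falls outside confidence bounds
    around a debiased benchmark forecast.

    Sketch only: bounds are the 5th and 95th percentiles of the
    benchmark's historical error distribution, obtained as the first
    and last of 19 cut points (n=20 quantiles).
    """
    cuts = quantiles(past_errors, n=20)          # cut points at 5%, 10%, ..., 95%
    lo, hi = benchmark + cuts[0], benchmark + cuts[-1]
    return not (lo <= judgment <= hi)            # True -> prompt the forecaster
```

A forecast near the benchmark passes silently; one far outside the historical error range would trigger the prompt-for-action step described in the abstract.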

    Feeding-Back Error Patterns to Stimulate Self-Reflection versus Automated Debiasing of Judgments

    Automated debiasing, i.e., the automatic statistical correction of human estimates, can improve accuracy, but its benefits are limited by cases where experts derive accurate judgments that are then falsely "corrected". We present ongoing work on a feedback-based decision support system that learns a statistical model of the error patterns observed in an expert's judgments. Instead of being used for automatic debiasing, the model is mirrored back to the expert as feedback to stimulate self-reflection and the selective adjustment of further judgments. Our assumption is that experts are capable of incorporating this feedback wisely when making subsequent judgments, reducing overall error levels and mitigating the false-correction problem. To test this assumption, we present the design and results of a pilot experiment. The results indicate that subjects indeed use the feedback wisely and selectively to improve their judgments and overall accuracy.
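A minimal sketch of mirroring an error pattern back to the expert rather than auto-correcting; the summary statistic (mean signed error) and the feedback wording are illustrative assumptions, since the abstract does not specify the statistical model:

```python
from statistics import mean

def error_pattern_feedback(judgments, actuals):
    """Summarise an expert's systematic error pattern as textual feedback
    instead of silently auto-correcting new judgments.

    Sketch only: the 'pattern' is reduced to the mean signed error (bias);
    the system described in the abstract may learn a richer model.
    """
    bias = mean(j - a for j, a in zip(judgments, actuals))
    direction = "over" if bias > 0 else "under"
    return f"On your past forecasts you {direction}-estimated by {abs(bias):.1f} on average."
```

The point of the design is that the expert, not the system, decides whether the pattern applies to the next judgment.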

    How to Conduct Rigorous Supervised Machine Learning in Information Systems Research: The Supervised Machine Learning Reportcard [in press]

    Within the last decade, the application of supervised machine learning (SML) has become increasingly popular in the field of information systems (IS) research. Although the choices among different data preprocessing techniques, as well as different algorithms and their individual implementations, are fundamental building blocks of SML results, their documentation—and therefore reproducibility—is inconsistent across published IS research papers. This may be quite understandable, since the goals and motivations for SML applications vary and since the field has been rapidly evolving within IS. For the IS research community, however, this poses a major challenge, because even with full access to the data, neither a complete evaluation of the SML approaches nor a replication of the research results is possible. Therefore, this article aims to provide the IS community with guidelines for comprehensively and rigorously conducting, as well as documenting, SML research. First, we review the literature concerning steps and SML process frameworks to extract relevant problem characteristics and relevant choices to be made in the application of SML. Second, we integrate these into a comprehensive "Supervised Machine Learning Reportcard (SMLR)" as an artifact to be used in future SML endeavors. Third, we apply this reportcard to a set of 121 relevant articles published in renowned IS outlets between 2010 and 2018 and demonstrate how and where the documentation of current IS research articles can be improved. Thus, this work should contribute to a more complete and rigorous application and documentation of SML approaches, thereby enabling a deeper evaluation and reproducibility/replication of results in IS research.

    Bayesian hierarchical modelling for structured expert judgement

    Decision makers often approach experts to help them understand uncertainty when their problems cannot be analysed through empirical data alone. When formalised, this process is known as Structured Expert Judgement (SEJ). Although the fundamental premise of SEJ is the updating of belief, which is the core of Bayesian statistics, SEJ studies often do not consider the Bayesian paradigm. Most SEJ studies utilise techniques that essentially take a pragmatic view of probability (e.g. Cooke's Classical Model). Bayesian models have been proposed historically but are rarely used in practice. This thesis outlines a Bayesian framework for SEJ. The research details the structure of an SEJ study and notes the benefits and limitations of traditional expert aggregation techniques. A collection of recently proposed Bayesian models is highlighted, before a new model is presented that aims to combine and enhance the best of these existing frameworks. In particular, it clusters, calibrates and aggregates experts' judgements using a Supra-Bayesian parameter-updating approach combined with either agglomerative hierarchical clustering or an embedded Dirichlet process mixture model. The new approach is assessed by analysing data from existing studies in a variety of domains, including healthcare, climatology, volcanology and environmental management. These studies highlight significant overconfidence in expert assessments and, consequently, a wider range of uncertainty under the Bayesian approach. Cross-validation of over twenty studies demonstrates that the Bayesian approach generates higher statistical accuracy than performance weighting, but at the cost of lost information. Key process considerations when implementing a Bayesian model within a broader study facilitation protocol are outlined. A mechanism to embed the new Bayesian model into the popular IDEA protocol is proposed. A new tool, BEAM (Bayesian Expert Aggregation Model), is presented to allow easy deployment of Bayesian thinking within IDEA. Finally, some areas for further research are recommended.
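As a toy illustration of Supra-Bayesian updating, a deliberate simplification of the hierarchical model in the thesis: each expert's estimate is treated as an independent, normally distributed noisy observation of the unknown quantity, and a normal prior is updated by precision weighting. The function name and interface are assumptions for this sketch.

```python
def supra_bayesian_pool(expert_means, expert_vars, prior_mean=0.0, prior_var=1e6):
    """Pool expert estimates via conjugate normal updating.

    Each expert supplies a point estimate and a variance expressing their
    stated uncertainty; the (near-flat by default) prior is updated with
    each estimate as data. Returns the posterior mean and variance.
    """
    precision = 1.0 / prior_var
    weighted = prior_mean / prior_var
    for m, v in zip(expert_means, expert_vars):
        precision += 1.0 / v          # more confident experts carry more weight
        weighted += m / v
    post_var = 1.0 / precision
    return weighted * post_var, post_var
```

Note the behaviour the thesis examines: if experts are overconfident (variances understated), the pooled posterior will be too narrow, which is why the full model also calibrates experts before aggregating.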