
    Gradient Boosting in Motor Insurance Claim Frequency Modelling

    Modelling claim frequency and claim severity are topics of great interest in property-casualty insurance, supporting underwriting, ratemaking, and reserving decisions. This paper investigates the predictive performance of Gradient Boosting with decision trees as base learners for modelling claim frequency in motor insurance, using a large private cross-country insurance dataset. The Gradient Boosting algorithm combines many weak base learners to tackle conceptual uncertainty in empirical research. The findings show that the Gradient Boosting model is superior to the standard Generalised Linear Model in that it yields closer predictions of claim frequency. They also show that Gradient Boosting can capture the nonlinear relations between claim counts and the feature variables, as well as their complex interactions, making it a valuable tool for feature engineering and for developing a data-driven approach to risk management.
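    The paper's dataset and model settings are private, but the comparison it describes (gradient-boosted trees versus a Poisson GLM for claim frequency) can be sketched on synthetic data. The snippet below is a minimal, illustrative sketch only: the rating factors, the scikit-learn estimators (PoissonRegressor and HistGradientBoostingRegressor), and the Poisson-deviance comparison are assumptions standing in for the paper's actual models and evaluation.

        # Minimal sketch (not the paper's code): Poisson GLM vs. gradient boosting
        # on synthetic motor-insurance-style claim counts.
        import numpy as np
        from sklearn.ensemble import HistGradientBoostingRegressor
        from sklearn.linear_model import PoissonRegressor
        from sklearn.metrics import mean_poisson_deviance
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 20_000
        # Hypothetical rating factors: driver age, vehicle age, annual mileage (thousand km).
        X = np.column_stack([
            rng.uniform(18, 80, n),
            rng.uniform(0, 20, n),
            rng.uniform(1, 40, n),
        ])
        # Nonlinear ground-truth frequency with an interaction, unknown to both models.
        lam = 0.08 * np.exp(0.8 * (X[:, 0] < 25) + 0.02 * X[:, 2]
                            + 0.3 * (X[:, 0] < 25) * (X[:, 2] > 25))
        y = rng.poisson(lam)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        glm = PoissonRegressor(alpha=1e-4).fit(X_tr, y_tr)
        gbm = HistGradientBoostingRegressor(loss="poisson", max_iter=300,
                                            learning_rate=0.05).fit(X_tr, y_tr)

        for name, model in [("Poisson GLM", glm), ("Gradient boosting", gbm)]:
            pred = np.clip(model.predict(X_te), 1e-6, None)  # deviance needs y_pred > 0
            print(f"{name}: test Poisson deviance = {mean_poisson_deviance(y_te, pred):.4f}")

    On data generated with a nonlinearity and an interaction, as above, the boosted model will usually attain a lower Poisson deviance than the linear GLM, which is the kind of gap the abstract reports.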

    A survey of methods for explaining black box models

    In recent years, many accurate decision support systems have been constructed as black boxes, that is, as systems that hide their internal logic from the user. This lack of explanation constitutes both a practical and an ethical issue. The literature reports many approaches aimed at overcoming this crucial weakness, sometimes at the cost of sacrificing accuracy for interpretability. The applications in which black box decision systems can be used are varied, and each approach is typically developed to provide a solution for a specific problem; as a consequence, it explicitly or implicitly delineates its own definition of interpretability and explanation. The aim of this article is to provide a classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system. Given a problem definition, a black box type, and a desired explanation, this survey should help researchers find the proposals most useful for their own work. The proposed classification of approaches to opening black box models should also be useful for putting the many open research questions in perspective.
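    As one concrete instance of the post-hoc approaches such a survey classifies, the sketch below builds a global surrogate: an interpretable decision tree trained to mimic a black-box classifier's predictions, with fidelity measured as agreement with the black box. The dataset, model choices, and tree depth are illustrative assumptions, not taken from the article.

        # Minimal sketch (illustrative only): a global surrogate tree that mimics
        # a black-box classifier, one post-hoc explanation strategy.
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.tree import DecisionTreeClassifier, export_text

        X, y = load_breast_cancer(return_X_y=True, as_frame=True)

        black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

        # Train an interpretable surrogate on the black box's *predictions*, not the labels.
        surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
        surrogate.fit(X, black_box.predict(X))

        # Fidelity: how closely the surrogate reproduces the black-box behaviour.
        fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
        print(f"surrogate fidelity to black box: {fidelity:.3f}")
        print(export_text(surrogate, feature_names=list(X.columns)))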

    Communicating model uncertainty for natural hazards: A qualitative systematic thematic review

    Natural hazard models are vital for all phases of risk assessment and disaster management. However, the large number of uncertainties inherent to these models is highly challenging for crisis communication. Failing to communicate these uncertainties is problematic because interdependencies between them, especially for multi-model approaches and cascading hazards, can result in much larger deep uncertainties. The recent upsurge in research into uncertainty communication makes it important to identify key lessons, areas for future development, and areas for future research. We present a systematic thematic literature review to identify methods for effective communication of model uncertainty. Themes identified include a) the need for clear uncertainty typologies, b) the need for effective engagement with users to identify which uncertainties to focus on, c) managing ensembles, confidence, bias, consensus and dissensus, d) methods for communicating specific uncertainties (e.g., maps, graphs, and time), and e) the lack of evaluation of many approaches currently in use. Finally, we identify lessons and areas for future investigation, and propose a framework for managing the communication of model-related uncertainty with decision-makers by integrating typology components that help identify and prioritise uncertainties. We conclude that scientists must first understand decision-maker needs and then concentrate efforts on evaluating and communicating the decision-relevant uncertainties. Developing a shared uncertainty management scheme with users facilitates the management of different epistemological perspectives, accommodates the different values that underpin model assumptions and the judgements they prompt, and increases uncertainty tolerance. This is vital, as uncertainties will only increase as our model (and event) complexities increase.

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved a notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors across the field. For this to occur soon in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose we summarize previous efforts made to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material in order to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
    Funding: Basque Government; Consolidated Research Group MATHMODE, Department of Education of the Basque Government (IT1294-19); Spanish Government; European Commission (TIN2017-89517-P); BBVA Foundation, Ayudas Fundacion BBVA a Equipos de Investigacion Cientifica 2018 call (DeepSCOP project); European Commission (82561).
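    As a small illustration of the kind of Deep Learning explanation technique the second taxonomy covers, the sketch below computes an input-gradient saliency attribution for a toy network; the network, the input, and the reading of the result are assumptions made for this example, not methods proposed by the article.

        # Minimal sketch (not from the survey): input-gradient saliency for a toy model.
        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

        x = torch.randn(1, 10, requires_grad=True)  # one input to be explained
        score = model(x).sum()                      # scalar model output
        score.backward()                            # d(score)/d(x)

        saliency = x.grad.abs().squeeze()           # per-feature attribution
        print("most influential input feature:", int(saliency.argmax()))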

    Bayesian calibration of interatomic potentials for binary alloys

    Developing reliable interatomic potential models with quantified predictive accuracy is crucial for atomistic simulations. Commonly used potentials, such as those constructed through the embedded atom method (EAM), are derived from semi-empirical considerations and contain unknown parameters that must be fitted based on training data. In the present work, we investigate Bayesian calibration as a means of fitting EAM potentials for binary alloys. The Bayesian setting naturally assimilates probabilistic assertions about uncertain quantities. In this way, uncertainties about model parameters and model errors can be updated by conditioning on the training data and then carried through to prediction. We apply these techniques to investigate an EAM potential for a family of gold-copper systems in which the training data correspond to density-functional theory values for lattice parameters, mixing enthalpies, and various elastic constants. Through the use of predictive distributions, we demonstrate the limitations of the potential and highlight the importance of statistical formulations for model error.
    Comment: Preprint, 28 pages, 18 figures, accepted for publication in Computational Materials Science on 7/11/202
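    The EAM calibration itself needs an atomistic code and DFT data, but the Bayesian workflow the abstract describes (a prior over uncertain parameters plus a model-error term, updated by conditioning on training data and carried through to a predictive distribution) can be sketched on a toy problem. Everything below, from the stand-in simulator and synthetic data to the priors and sampler settings, is an assumption made for illustration rather than the paper's setup.

        # Minimal sketch (a toy, not an EAM potential): Bayesian calibration of one
        # model parameter with an explicit model-error scale, via random-walk Metropolis.
        import numpy as np

        rng = np.random.default_rng(0)

        def simulator(x, theta):
            """Stand-in for the physics model (e.g., a property predicted by a potential)."""
            return theta * np.sin(x)

        # Synthetic "training data" generated with an unknown parameter and model error.
        x_obs = np.linspace(0.1, 3.0, 15)
        y_obs = 2.5 * np.sin(x_obs) + rng.normal(0.0, 0.2, x_obs.size)

        def log_posterior(theta, sigma):
            if sigma <= 0:
                return -np.inf
            resid = y_obs - simulator(x_obs, theta)
            log_lik = -0.5 * np.sum(resid**2 / sigma**2 + np.log(2 * np.pi * sigma**2))
            log_prior = -0.5 * (theta / 10.0) ** 2 - np.log(sigma)  # weak priors
            return log_lik + log_prior

        # Random-walk Metropolis over (theta, sigma).
        samples, state = [], np.array([1.0, 1.0])
        logp = log_posterior(*state)
        for _ in range(20_000):
            prop = state + rng.normal(0, [0.05, 0.02])
            logp_prop = log_posterior(*prop)
            if np.log(rng.uniform()) < logp_prop - logp:
                state, logp = prop, logp_prop
            samples.append(state.copy())
        post = np.array(samples[5_000:])  # drop burn-in

        # The posterior predictive carries parameter and model-error uncertainty forward.
        x_new = 1.5
        pred = simulator(x_new, post[:, 0]) + rng.normal(0, post[:, 1])
        print(f"theta: {post[:, 0].mean():.2f} +/- {post[:, 0].std():.2f}")
        print(f"predictive at x=1.5: mean {pred.mean():.2f}, std {pred.std():.2f}")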
