
    Insurability Challenges Under Uncertainty: An Attempt to Use the Artificial Neural Network for the Prediction of Losses from Natural Disasters

    The main difficulty for natural disaster insurance derives from the uncertainty of an event's damages. Insurers cannot precisely gauge the weight of natural hazards because of risk dependencies. Insurability under uncertainty first requires an accurate assessment of the total damages. Insured and insurers both win when premiums reflect the risk properly; in such cases, coverage will be available and affordable. Using the artificial neural network, a technique rooted in artificial intelligence, insurers can predict annual natural disaster losses. There are many types of artificial neural network models. In this paper we use the multilayer perceptron neural network, the one best suited to the prediction task. Given the natural disaster explanatory variables as inputs, the developed neural network accurately estimates the potential annual losses for the studied country.
    Keywords: Natural disaster losses, Insurability, Uncertainty, Multilayer perceptron neural network, Prediction.
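
    A minimal sketch of the kind of multilayer perceptron regression the abstract describes, assuming hypothetical explanatory variables and synthetic data; the feature set, network size, and scikit-learn pipeline below are illustrative choices, not the paper's actual model.

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic country-year records: each row holds hypothetical explanatory
    # variables (e.g. event frequency, exposed value, population density).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = 2.0 * X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.1, size=200)  # synthetic annual losses

    # Multilayer perceptron regressor; inputs are standardised before training.
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
    )
    model.fit(X, y)

    # Predicted annual losses for new records with the same explanatory variables.
    predicted_annual_loss = model.predict(X[:5])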

    Specifying Weight Priors in Bayesian Deep Neural Networks with Empirical Bayes

    Stochastic variational inference for Bayesian deep neural networks (DNNs) requires specifying priors and approximate posterior distributions over the network weights. Specifying meaningful weight priors is a challenging problem, particularly when scaling variational inference to deeper architectures with high-dimensional weight spaces. We propose MOdel Priors with Empirical Bayes using DNN (MOPED), a method for choosing informed weight priors in Bayesian neural networks. We formulate a two-stage hierarchical model: first, find the maximum likelihood estimates of the weights with a deterministic DNN; then set the weight priors using an empirical Bayes approach and infer the posterior with variational inference. We empirically evaluate the proposed approach on real-world tasks including image classification, video activity recognition, and audio classification with a variety of complex neural network architectures. We also evaluate our approach on a diabetic retinopathy diagnosis task and benchmark it against state-of-the-art Bayesian deep learning techniques. We demonstrate that the MOPED method enables scalable variational inference and provides reliable uncertainty quantification.
    Comment: To be published at the AAAI 2020 conference.
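
    A minimal sketch of the two-stage idea described above, assuming a toy regression problem and a single Bayesian linear layer; the delta hyperparameter, layer shapes, and training loop are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BayesianLinear(nn.Module):
        """Mean-field Gaussian posterior over weights with a Gaussian prior."""
        def __init__(self, prior_mu, prior_sigma):
            super().__init__()
            self.mu = nn.Parameter(prior_mu.clone())                  # posterior mean, initialised at the prior mean
            self.rho = nn.Parameter(torch.full_like(prior_mu, -5.0))  # posterior std = softplus(rho)
            self.register_buffer("prior_mu", prior_mu)
            self.register_buffer("prior_sigma", prior_sigma)

        def forward(self, x):
            sigma = F.softplus(self.rho)
            w = self.mu + sigma * torch.randn_like(sigma)  # reparameterisation trick
            return x @ w.t()

        def kl(self):
            sigma = F.softplus(self.rho)
            # KL divergence between the diagonal Gaussian posterior and prior.
            return (torch.log(self.prior_sigma / sigma)
                    + (sigma ** 2 + (self.mu - self.prior_mu) ** 2) / (2 * self.prior_sigma ** 2)
                    - 0.5).sum()

    # Stage 1: deterministic maximum-likelihood fit.
    x = torch.randn(256, 4)
    y = x @ torch.randn(4, 1)
    det = nn.Linear(4, 1, bias=False)
    opt = torch.optim.Adam(det.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad(); F.mse_loss(det(x), y).backward(); opt.step()

    # Stage 2: empirical-Bayes prior centred at the MLE weights (delta is an assumed
    # hyperparameter), followed by mean-field variational inference.
    delta = 0.1
    w_mle = det.weight.detach()
    bayes = BayesianLinear(w_mle, delta * w_mle.abs() + 1e-6)
    opt = torch.optim.Adam(bayes.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = F.mse_loss(bayes(x), y) + bayes.kl() / x.shape[0]
        loss.backward(); opt.step()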

    Unified Probabilistic Neural Architecture and Weight Ensembling Improves Model Robustness

    Robust machine learning models with accurately calibrated uncertainties are crucial for safety-critical applications. Probabilistic machine learning, and especially the Bayesian formalism, provides a systematic framework to incorporate robustness through distributional estimates and to reason about uncertainty. Recent works have shown that approximate inference approaches that use the weight-space uncertainty of neural networks to generate ensemble predictions are the state of the art. However, architecture choices have mostly been ad hoc, which essentially ignores the epistemic uncertainty in the architecture space. To this end, we propose a Unified probabilistic architecture and weight ensembling Neural Architecture Search (UraeNAS) that leverages advances in probabilistic neural architecture search and approximate Bayesian inference to generate ensembles from the joint distribution of neural network architectures and weights. The proposed approach showed significant improvements on both in-distribution CIFAR-10 (0.86% in accuracy, 42% in ECE) and out-of-distribution CIFAR-10-C (2.43% in accuracy, 30% in ECE) compared to the baseline deterministic approach.
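
    A minimal sketch of the joint architecture-and-weight ensembling idea, under assumed stand-ins: build_model, sample_weights, and the categorical architecture distribution below are illustrative placeholders rather than the UraeNAS search procedure.

    import torch
    import torch.nn.functional as F

    def build_model(arch_index):
        # Hypothetical stand-in: map a sampled architecture choice to a small network.
        hidden = [16, 32, 64][arch_index]
        return torch.nn.Sequential(torch.nn.Linear(8, hidden), torch.nn.ReLU(),
                                   torch.nn.Linear(hidden, 3))

    def sample_weights(model, posterior_std=0.05):
        # Hypothetical stand-in for drawing weights from an approximate posterior:
        # perturb the weight means with Gaussian noise.
        with torch.no_grad():
            for p in model.parameters():
                p.add_(posterior_std * torch.randn_like(p))
        return model

    arch_logits = torch.tensor([0.2, 1.5, 0.3])  # assumed learned distribution over three candidate architectures
    x = torch.randn(5, 8)                        # a toy batch of inputs

    preds = []
    for _ in range(20):  # ensemble members drawn from the joint (architecture, weight) distribution
        a = torch.distributions.Categorical(logits=arch_logits).sample().item()
        member = sample_weights(build_model(a))
        preds.append(F.softmax(member(x), dim=-1))

    # Ensemble predictive distribution: average over the sampled members.
    p_ens = torch.stack(preds).mean(dim=0)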