
    Potentials and Limits of Bayesian Networks to Deal with Uncertainty in the Assessment of Climate Change Adaptation Policies

    Bayesian networks (BNs) have been increasingly applied to support management and decision-making processes under conditions of environmental variability and uncertainty. They provide logical and holistic reasoning in complex systems because they succinctly and effectively translate causal assertions between variables into patterns of probabilistic dependence. Through a theoretical assessment of the features and statistical rationale of BNs, and a review of specific applications to ecological modelling, natural resource management, and climate change policy, the present paper analyses the effectiveness of the BN model as a synthesis framework that allows the user to manage the uncertainty characterising the definition and implementation of climate change adaptation policies. The review brings out the model's potential to characterise, incorporate and communicate uncertainty, with the aim of providing efficient support for an informed and transparent decision-making process. Possible drawbacks arising from the implementation of BNs are also analysed, along with potential solutions to overcome them.
    Keywords: Adaptation to Climate Change, Bayesian Network, Uncertainty
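    The translation of causal assertions into probabilistic dependence that the abstract describes can be illustrated with a minimal two-node network. The variables, names and numbers below are invented for illustration and are not taken from the paper:

```python
# Minimal two-node Bayesian network sketch (all numbers hypothetical):
# a climate variable (heavy rain) is the parent of an impact variable (flood).
p_heavy_rain = 0.3                        # prior P(rain = heavy)
p_flood_given = {True: 0.60, False: 0.05} # CPT: P(flood | rain)

# Predictive reasoning: marginalise the parent out to get P(flood).
p_flood = (p_heavy_rain * p_flood_given[True]
           + (1 - p_heavy_rain) * p_flood_given[False])

# Diagnostic reasoning: Bayes' rule gives P(heavy rain | flood observed),
# i.e. the same network supports inference in both directions.
p_rain_given_flood = p_heavy_rain * p_flood_given[True] / p_flood
```

    Realistic adaptation-policy networks contain many more nodes, but inference still reduces to this pattern of marginalisation and Bayes' rule over conditional probability tables, which is what lets a BN propagate uncertainty from causes to effects and back.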

    Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions

    Deep neural networks are widely used for classification. These deep models often suffer from a lack of interpretability -- they are particularly difficult to understand because of their non-linear nature. As a result, neural networks are often treated as "black box" models, and in the past, have been trained purely to optimize the accuracy of predictions. In this work, we create a novel network architecture for deep learning that naturally explains its own reasoning for each prediction. This architecture contains an autoencoder and a special prototype layer, where each unit of that layer stores a weight vector that resembles an encoded training input. The encoder of the autoencoder allows us to do comparisons within the latent space, while the decoder allows us to visualize the learned prototypes. The training objective has four terms: an accuracy term, a term that encourages every prototype to be similar to at least one encoded input, a term that encourages every encoded input to be close to at least one prototype, and a term that encourages faithful reconstruction by the autoencoder. The distances computed in the prototype layer are used as part of the classification process. Since the prototypes are learned during training, the learned network naturally comes with explanations for each prediction, and the explanations are loyal to what the network actually computes.
    Comment: The first two authors contributed equally, 8 pages, accepted in AAAI 201
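    The four-term objective described in the abstract can be sketched compactly. The following NumPy sketch is one hypothetical reading of that description, not the authors' implementation: `encode`/`decode` stand in for the autoencoder, the classifier is a softmax over prototype distances, and the equal loss weights are assumptions:

```python
import numpy as np

def prototype_loss(x, y, encode, decode, prototypes, W, b,
                   lambdas=(1.0, 1.0, 1.0, 1.0)):
    """Hypothetical four-term objective: accuracy + prototype-to-input
    closeness + input-to-prototype closeness + reconstruction."""
    z = encode(x)                                   # latent codes, shape (n, d)
    # squared distance from each code to each prototype, shape (n, m)
    d2 = ((z[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = d2 @ W + b                             # classify from distances
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)
    ce = -np.log(p[np.arange(len(y)), y] + 1e-12).mean()  # accuracy term
    r1 = d2.min(axis=0).mean()  # every prototype near some encoded input
    r2 = d2.min(axis=1).mean()  # every encoded input near some prototype
    rec = ((decode(z) - x) ** 2).mean()             # faithful reconstruction
    l1, l2, l3, l4 = lambdas
    return l1 * ce + l2 * r1 + l3 * r2 + l4 * rec
```

    Minimizing r1 pulls each prototype toward real (encoded) training examples, which is what makes the decoded prototypes readable as case-based explanations rather than arbitrary latent points.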