269 research outputs found

    Learning Bounded Treewidth Bayesian Networks with Thousands of Variables

    We present a method for learning treewidth-bounded Bayesian networks from data sets containing thousands of variables. Bounding the treewidth of a Bayesian network greatly reduces the complexity of inference. Yet, being a global property of the graph, it considerably increases the difficulty of the learning process. We propose a novel algorithm for this task, able to scale to large domains and large treewidths. Our approach consistently outperforms the state of the art on data sets with up to ten thousand variables.
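The link between treewidth and inference cost can be illustrated with a small sketch: variable elimination on the moralized graph creates intermediate tables over each eliminated node and its current neighbours, so the induced width of the elimination order (an upper bound on treewidth) bounds the largest table. The graph and the min-degree heuristic below are illustrative, not the paper's learning algorithm.

```python
def elimination_width(adj):
    """Greedy min-degree elimination; returns the induced width, an upper
    bound on treewidth. `adj` maps node -> set of neighbour nodes."""
    adj = {v: set(nb) for v, nb in adj.items()}
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))  # min-degree heuristic
        nbrs = adj[v]
        width = max(width, len(nbrs))
        for a in nbrs:                           # add fill-in edges among neighbours
            adj[a] |= nbrs - {a}
            adj[a].discard(v)
        del adj[v]
    return width

# Moral graph of the chain A -> B -> C -> D: a path, whose treewidth is 1,
# so exact inference only ever touches tables over 2 variables at a time.
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(elimination_width(chain))  # → 1
```

A treewidth bound of k keeps every such table at most k+1 variables wide, which is why bounding it makes inference tractable even as the network grows.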

    Credal Model Averaging: dealing robustly with model uncertainty on small data sets

    Datasets of population dynamics are typically characterized by a short temporal extension. In this condition, several alternative models typically achieve similar accuracy while returning quite different predictions (model uncertainty). Bayesian model averaging (BMA) addresses this issue by averaging the predictions of the different models, using the posterior probabilities of the models as weights. However, an open problem of BMA is the choice of the prior probability of the models, which can largely impact the inferences, especially when data are scarce. We present Credal Model Averaging (CMA), which addresses this problem by simultaneously considering a set of prior probability distributions over the models. This allows representing very weak prior knowledge about the appropriateness of the different models, and it easily accommodates expert judgments, considering that in many cases the expert is not willing to commit to a single prior probability distribution. The predictions generated by CMA are intervals whose lengths show the sensitivity of the predictions to the choice of the prior over the models.
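The interval mechanism can be sketched numerically: compute the BMA prediction for every prior in a credal set and report the min and max. The three models, their marginal likelihoods, the point predictions, and the interval-constrained credal set below are all invented for illustration; they are not from the paper.

```python
import itertools
import numpy as np

# Hypothetical setting: 3 candidate models with marginal likelihoods P(D|m)
# and point predictions (illustrative numbers only).
likelihoods = np.array([0.3, 0.5, 0.2])   # P(D | m)
predictions = np.array([1.0, 2.0, 4.0])   # each model's prediction

def bma_prediction(prior):
    """Bayesian model averaging under a single prior over models."""
    posterior = prior * likelihoods
    posterior = posterior / posterior.sum()
    return float(posterior @ predictions)

# Credal set (assumed): all priors with each P(m) in [0.1, 0.8].
# Sweep a grid over the simplex and record the min/max BMA prediction.
grid = np.linspace(0.1, 0.8, 36)
preds = [bma_prediction(np.array([p1, p2, 1 - p1 - p2]))
         for p1, p2 in itertools.product(grid, grid)
         if 0.1 <= 1 - p1 - p2 <= 0.8]
interval = (min(preds), max(preds))
```

A wide interval signals that the conclusion depends heavily on the prior over models, which is exactly the sensitivity CMA is designed to expose on small data sets.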

    Efficient probabilistic reconciliation of forecasts for real-valued and count time series

    Hierarchical time series are common in several applied fields. Forecasts are required to be coherent, that is, to satisfy the constraints given by the hierarchy. The most popular technique to enforce coherence is called reconciliation, which adjusts the base forecasts computed for each time series. However, recent works on probabilistic reconciliation present several limitations. In this paper, we propose a new approach based on conditioning to reconcile any type of forecast distribution. We then introduce a new algorithm, called Bottom-Up Importance Sampling, to efficiently sample from the reconciled distribution. It can be used for any base forecast distribution: discrete, continuous, or in the form of samples, providing a major speedup compared to the current methods. Experiments on several temporal hierarchies show a significant improvement over base probabilistic forecasts. (Comment: 25 pages, 4 figures)
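The core idea of reconciliation by conditioning can be sketched on the smallest hierarchy, total = bottom1 + bottom2: draw samples of the bottom series from their base forecasts and weight each joint sample by the upper base forecast's density at the implied sum. The Gaussian base forecasts and their parameters below are illustrative assumptions, not the paper's experiments, and this is only a minimal importance-sampling sketch, not the authors' full algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-level hierarchy: total = bottom1 + bottom2.
# Base forecasts (assumed Gaussian for illustration):
mu_b = np.array([10.0, 20.0])   # bottom-series means
sd_b = np.array([2.0, 3.0])     # bottom-series std devs
mu_u, sd_u = 35.0, 4.0          # base forecast for the total (upper series)

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Draw bottom samples from their base forecasts, then weight each joint
# sample by the upper forecast's density at the bottom-up sum.
N = 100_000
samples = rng.normal(mu_b, sd_b, size=(N, 2))
weights = normal_pdf(samples.sum(axis=1), mu_u, sd_u)
weights /= weights.sum()

# Reconciled mean of the total: pulled between the bottom-up sum (30)
# and the upper base forecast (35).
recon_total = float(weights @ samples.sum(axis=1))
```

Resampling the joint draws with these weights yields coherent samples (the total is the sum of the bottoms by construction), which is what makes the scheme applicable to discrete, continuous, or sample-based base forecasts alike.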