
    Inference by Minimizing Size, Divergence, or their Sum

    We speed up marginal inference by ignoring factors that do not significantly contribute to overall accuracy. In order to pick a suitable subset of factors to ignore, we propose three schemes: minimizing the number of model factors under a bound on the KL divergence between pruned and full models; minimizing the KL divergence under a bound on factor count; and minimizing the weighted sum of KL divergence and factor count. All three problems are solved using an approximation of the KL divergence that can be calculated in terms of marginals computed on a simple seed graph. Applied to synthetic image denoising and to three different types of NLP parsing models, this technique performs marginal inference up to 11 times faster than loopy BP, with graph sizes reduced by up to 98%, at comparable error in marginals and parsing accuracy. We also show that minimizing the weighted sum of divergence and size is substantially faster than minimizing either of the other objectives based on the approximation to divergence presented here.
    Comment: Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010)
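
    The weighted-sum objective admits an especially simple solution if the divergence approximation decomposes additively over pruned factors, one plausible reading of the abstract. The Python sketch below illustrates that reading only; the per-factor estimates d_f are hypothetical stand-ins for the quantities the paper derives from seed-graph marginals, and all names are illustrative rather than taken from the paper.

        def prune_by_weighted_sum(divergence, lam):
            """Minimize lam * |kept factors| + sum of d_f over pruned factors.

            Under an additive KL approximation the objective decomposes per
            factor: keeping f costs lam, pruning f costs d_f, so the optimum
            keeps f exactly when d_f > lam.
            """
            kept = {f for f, d in divergence.items() if d > lam}
            pruned = set(divergence) - kept
            return kept, pruned

        # Hypothetical per-factor divergence estimates:
        d = {"f1": 0.50, "f2": 0.01, "f3": 0.20, "f4": 0.003}
        kept, pruned = prune_by_weighted_sum(d, lam=0.05)
        print(sorted(kept))    # ['f1', 'f3']  -- factors worth their size cost
        print(sorted(pruned))  # ['f2', 'f4']  -- ignored during inference

    Note how the trade-off parameter lam acts as a per-factor threshold, which is consistent with the claim that this objective is faster to optimize than either constrained variant: no search over subsets is needed.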

    Compatible and incompatible abstractions in Bayesian networks

    The graphical structure of a Bayesian network (BN) makes it a technology well-suited for developing decision support models from a combination of domain knowledge and data. The domain knowledge of experts is used to determine the graphical structure of the BN, corresponding to the relationships between variables, and data is used for learning the strength of these relationships. However, the available data seldom match the variables in the structure elicited from experts, whose models may be quite detailed; consequently, the structure needs to be abstracted to match the data. Up to now, this abstraction has been informal, loosening the link between the final model and the experts' knowledge. In this paper, we propose a method for abstracting the BN structure by using four 'abstraction' operations: node removal, node merging, state-space collapsing, and edge removal. Some of these steps introduce approximations, which can be identified from changes in the set of conditional independence (CI) assertions of the network.
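
    As a concrete illustration of one of these operations, the Python sketch below implements state-space collapsing for a minimal two-node network X -> Y, under the assumption that merging two states of X should preserve the joint distribution over (X, Y): the merged state takes the summed prior mass, and the child's conditional becomes the prior-weighted mixture of the two original columns. The representation and function names are hypothetical; the paper defines the operations on general BN structures.

        import numpy as np

        def collapse_states(p_x, cpt_y_given_x, i, j):
            """Merge states i and j of X; return the new prior and P(Y | X).

            The merged state's prior mass is p_x[i] + p_x[j]; its conditional
            over Y is the mixture of the two original columns weighted by
            prior mass (split evenly if both states have zero mass).
            """
            keep = [k for k in range(len(p_x)) if k not in (i, j)]
            w = p_x[i] + p_x[j]
            merged_prior = np.append(p_x[keep], w)
            if w > 0:
                merged_col = (p_x[i] * cpt_y_given_x[:, i]
                              + p_x[j] * cpt_y_given_x[:, j]) / w
            else:
                merged_col = 0.5 * (cpt_y_given_x[:, i] + cpt_y_given_x[:, j])
            merged_cpt = np.column_stack([cpt_y_given_x[:, keep], merged_col])
            return merged_prior, merged_cpt

        p_x = np.array([0.5, 0.3, 0.2])        # P(X) over three states
        cpt = np.array([[0.9, 0.2, 0.1],       # P(Y=0 | X)
                        [0.1, 0.8, 0.9]])      # P(Y=1 | X)
        prior2, cpt2 = collapse_states(p_x, cpt, 1, 2)
        print(prior2)  # [0.5 0.5]
        print(cpt2)    # [[0.9 0.16] [0.1 0.84]]

    In this toy case the collapse is exact for the pair (X, Y); in a larger network, collapsing states can change the set of CI assertions, which is how the paper's method flags an abstraction step as approximate.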