    A backward selection procedure for approximating a discrete probability distribution by decomposable models

    Summary: Decomposable (probabilistic) models are log-linear models generated by acyclic hypergraphs, and they are known to enjoy a number of nice properties. In many applications the following selection problem naturally arises: given a probability distribution $p$ over a finite set $V$ of $n$ discrete variables and a positive integer $k$, find a decomposable model with tree-width $k$ that best fits $p$. If $\mathcal{H}$ is the generating hypergraph of a decomposable model and $p_{\mathcal{H}}$ is the estimate of $p$ under the model, we can measure the closeness of $p_{\mathcal{H}}$ to $p$ by the information divergence $D(p : p_{\mathcal{H}})$, so that the problem above reads: given $p$ and $k$, find an acyclic, connected hypergraph $\mathcal{H}$ of tree-width $k$ such that $D(p : p_{\mathcal{H}})$ is minimum. It is well known that this problem is $NP$-hard. However, for $k = 1$ it was solved by Chow and Liu in a very efficient way; thus, starting from an optimal Chow-Liu solution, a few forward-selection procedures have been proposed with the aim of finding a `good' solution for an arbitrary $k$. We propose a backward-selection procedure which starts from the (trivial) optimal solution for $k = n-1$, and we show that, in a case study taken from the literature, our procedure succeeds in finding an optimal solution for every $k$.
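
The starting point of the forward procedures mentioned in the abstract is the Chow-Liu algorithm, which solves the $k = 1$ case exactly by building a maximum-weight spanning tree over pairwise mutual informations. Below is a minimal sketch of that baseline (not the paper's backward-selection procedure), assuming the joint distribution $p$ is given as a dense numpy array with one axis per variable; all function and variable names are illustrative.

```python
import itertools
import numpy as np

def mutual_information(joint: np.ndarray, i: int, j: int) -> float:
    """I(X_i; X_j) under `joint` (one axis per variable); assumes i < j."""
    other = tuple(a for a in range(joint.ndim) if a not in (i, j))
    p_ij = joint.sum(axis=other)                 # marginal over (X_i, X_j)
    p_i = p_ij.sum(axis=1, keepdims=True)        # marginal of X_i, shape (d_i, 1)
    p_j = p_ij.sum(axis=0, keepdims=True)        # marginal of X_j, shape (1, d_j)
    mask = p_ij > 0                              # skip zero cells to avoid log(0)
    return float((p_ij[mask] * np.log(p_ij[mask] / (p_i @ p_j)[mask])).sum())

def chow_liu_tree(joint: np.ndarray) -> list[tuple[int, int]]:
    """Maximum-weight spanning tree on mutual information (Kruskal's rule)."""
    n = joint.ndim
    edges = sorted(((mutual_information(joint, i, j), i, j)
                    for i, j in itertools.combinations(range(n), 2)),
                   reverse=True)                 # heaviest edges first
    parent = list(range(n))                      # union-find over the n variables
    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                             # edge joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.random((2, 2, 2))
    p /= p.sum()                                 # toy joint over three binary variables
    print(chow_liu_tree(p))                      # two edges forming the optimal tree
```

The tree returned is the acyclic hypergraph of tree-width 1 minimizing $D(p : p_{\mathcal{H}})$; running Kruskal's rule on the descending edge list is one standard way to obtain the maximum-weight spanning tree, and any MST routine applied to negated weights would do equally well.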