50 research outputs found
On Finding Optimal Polytrees
Peer reviewed
Efficiently Computing Directed Minimum Spanning Trees
Computing a directed minimum spanning tree, called an arborescence, is a
fundamental algorithmic problem, although not as common as its undirected
counterpart. In 1967, Edmonds discussed an elegant solution. It was refined by
Tarjan to run in O(min(n^2, m log n)), which is optimal for very dense and
very sparse graphs. Gabow et al. gave a version of Edmonds' algorithm that runs
in O(n log n + m), thus asymptotically beating the Tarjan variant in the
regime between sparse and dense. Despite the attention the problem has received
theoretically, there exists, to the best of our knowledge, no empirical
evaluation of either of these algorithms. In fact, the version by Gabow et
al. has never been implemented and, aside from coding competitions, all readily
available Tarjan implementations run in O(n^2). In this paper, we provide the
first implementation of the version by Gabow et al. as well as five variants of
Tarjan's version with different underlying data structures. We evaluate these
algorithms and existing solvers on a large set of real-world and random graphs.
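For reference, the basic contraction scheme behind Edmonds' algorithm can be sketched as follows. This is the simple O(nm) textbook (Chu-Liu/Edmonds) variant, not the Tarjan or Gabow et al. versions the paper implements and evaluates; all names are illustrative:

```python
def chu_liu_edmonds(n, root, edges):
    """Minimum spanning arborescence (directed MST) rooted at `root`.
    edges: list of (u, v, w) triples. Returns the total weight, or None
    if some vertex cannot be reached. Simple O(nm) contraction variant."""
    INF = float("inf")
    total = 0
    while True:
        # 1. pick the cheapest incoming edge of every non-root vertex
        in_w = [INF] * n
        pre = [-1] * n
        for u, v, w in edges:
            if u != v and w < in_w[v]:
                in_w[v], pre[v] = w, u
        if any(v != root and in_w[v] == INF for v in range(n)):
            return None  # some vertex has no incoming edge at all
        in_w[root] = 0
        total += sum(in_w[v] for v in range(n) if v != root)
        # 2. look for a cycle among the chosen edges
        ids = [-1] * n   # contracted-component id of each vertex
        vis = [-1] * n
        comps = 0
        for start in range(n):
            v = start
            while v != root and vis[v] == -1 and ids[v] == -1:
                vis[v] = start
                v = pre[v]
            if v != root and ids[v] == -1 and vis[v] == start:
                # walked back onto this walk's own path: found a cycle
                u = v
                while True:
                    ids[u] = comps
                    u = pre[u]
                    if u == v:
                        break
                comps += 1
        if comps == 0:
            return total  # the chosen edges already form an arborescence
        for v in range(n):
            if ids[v] == -1:
                ids[v] = comps
                comps += 1
        # 3. contract each cycle into one vertex; an edge entering the
        #    cycle is charged the difference to the cycle edge it replaces
        edges = [(ids[u], ids[v], w - in_w[v])
                 for u, v, w in edges if ids[u] != ids[v]]
        n, root = comps, ids[root]
```

The asymptotically faster versions discussed in the paper replace the repeated rescans here with priority queues (Tarjan) or a more careful contraction order with mergeable heaps (Gabow et al.).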
The Markov chain tree theorem and the state reduction algorithm in commutative semirings
We extend the Markov chain tree theorem to general commutative semirings, and
we generalize the state reduction algorithm to commutative semifields. This
leads to a new universal algorithm, whose prototype is the state reduction
algorithm which computes the Markov chain tree vector of a stochastic matrix.
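In its classical real-valued setting, the state reduction algorithm is the Grassmann-Taksar-Heyman (GTH) procedure, which computes the stationary vector of an irreducible stochastic matrix without any subtractions. A minimal sketch of that classical case (the semiring generalization of the abstract is not reproduced here; the function name is illustrative):

```python
def gth_stationary(P):
    """Stationary distribution of an irreducible stochastic matrix P
    via GTH state reduction. No subtractions occur, which makes the
    procedure numerically stable."""
    n = len(P)
    a = [row[:] for row in P]  # work on a copy
    # censor (eliminate) states n-1, ..., 1 one at a time
    for k in range(n - 1, 0, -1):
        s = sum(a[k][j] for j in range(k))   # mass leaving k for remaining states
        for i in range(k):
            a[i][k] /= s                     # expected visits to k from i
        for i in range(k):
            for j in range(k):
                a[i][j] += a[i][k] * a[k][j]  # fold paths through k back in
    # back-substitute to recover unnormalized stationary weights
    x = [0.0] * n
    x[0] = 1.0
    for k in range(1, n):
        x[k] = sum(x[i] * a[i][k] for i in range(k))
    total = sum(x)
    return [v / total for v in x]
```

The generalization in the paper replaces the additions, multiplications, and divisions above with the operations of a commutative semifield.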
Structured prediction models via the matrix-tree theorem
This paper provides an algorithmic framework for learning statistical models involving directed spanning trees, or equivalently non-projective dependency structures. We show how partition functions and marginals for directed spanning trees can be computed by an adaptation of Kirchhoff's Matrix-Tree Theorem. To demonstrate an application of the method, we perform experiments which use the algorithm in training both log-linear and max-margin dependency parsers. The new training methods give improvements in accuracy over perceptron-trained models.
Peer reviewed. Postprint (author's final draft)
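The directed form of the Matrix-Tree Theorem (due to Tutte) expresses the partition function, i.e. the sum over all spanning arborescences of the product of their edge weights, as the determinant of a reduced Laplacian. A small sketch of that computation (the function name and weight-matrix convention are illustrative, not from the paper):

```python
import numpy as np

def arborescence_partition(W, root=0):
    """Sum over all spanning arborescences rooted at `root` (edges
    directed away from the root) of the product of their edge weights.
    W[u, v] is the weight of edge u -> v; the diagonal is ignored."""
    n = W.shape[0]
    W = W.copy()
    np.fill_diagonal(W, 0.0)
    # Laplacian: diagonal holds total weight entering each vertex
    L = np.diag(W.sum(axis=0)) - W
    # delete the root's row and column, then take the determinant
    keep = [i for i in range(n) if i != root]
    return np.linalg.det(L[np.ix_(keep, keep)])
```

With all weights set to exp of the model scores, this determinant is the normalizer of the log-linear distribution over dependency trees; edge marginals follow from derivatives of its log, computed via the inverse of the same reduced Laplacian.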