The cavity approach for Steiner trees packing problems
The Belief Propagation approximation, or cavity method, has been recently
applied to several combinatorial optimization problems in its zero-temperature
implementation, the max-sum algorithm. In particular, recent developments to
solve the edge-disjoint paths problem and the prize-collecting Steiner tree
problem on graphs have shown remarkable results for several classes of graphs
and for benchmark instances. Here we propose a generalization of these
techniques for two variants of the Steiner trees packing problem where multiple
"interacting" trees have to be sought within a given graph. Depending on the
interaction among trees we distinguish the vertex-disjoint Steiner trees
problem, where trees cannot share nodes, from the edge-disjoint Steiner trees
problem, where edges cannot be shared by trees but nodes can be members of
multiple trees. Several problems of great practical interest in network design,
for instance the physical design of Very Large Scale Integration (VLSI) chips,
can be mapped onto these two variants. The formalism described here relies on a
two-component edge-variable representation that allows us to formulate a
message-passing algorithm for the V-DStP and two algorithms for the E-DStP,
which differ in how their computational time scales with respect to some
relevant parameters. We
will show that one of the two formalisms used for the edge-disjoint variant
allows us to map the max-sum update equations into a weighted maximum matching
problem over proper bipartite graphs. We develop a heuristic procedure based
on the max-sum equations that shows excellent performance in synthetic networks
(in particular outperforming standard multi-step greedy procedures by large
margins) and on large VLSI benchmark instances for which the optimal solution
is known; on these, the algorithm found the optimum in two cases, and the gap
to optimality was never larger than 4%.
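The abstract states that one of the edge-disjoint formalisms reduces each max-sum update to a weighted maximum matching problem over a bipartite graph. As an illustration only (not the paper's solver), the following sketch finds a maximum-weight perfect matching on a toy square weight matrix by brute force; the function name and example weights are hypothetical:

```python
from itertools import permutations

def max_weight_matching(weights):
    """Brute-force maximum-weight perfect matching on a small
    bipartite graph given as a square weight matrix.
    Returns (best_total, assignment), where assignment[i] is the
    right-side vertex matched to left-side vertex i."""
    n = len(weights)
    best_total, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(weights[i][perm[i]] for i in range(n))
        if total > best_total:
            best_total, best_perm = total, perm
    return best_total, best_perm

# Toy example: 3 trees competing for 3 incident edge slots
# (hypothetical weights, not taken from the paper).
W = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
total, match = max_weight_matching(W)  # total = 11, match = (0, 2, 1)
```

For the bipartite graphs arising in practice one would use a polynomial-time algorithm (e.g. the Hungarian method) rather than this factorial-time enumeration, which is only meant to make the matching objective concrete.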
Barrier Frank-Wolfe for Marginal Inference
We introduce a globally-convergent algorithm for optimizing the
tree-reweighted (TRW) variational objective over the marginal polytope. The
algorithm is based on the conditional gradient method (Frank-Wolfe) and moves
pseudomarginals within the marginal polytope through repeated maximum a
posteriori (MAP) calls. This modular structure enables us to leverage black-box
MAP solvers (both exact and approximate) for variational inference, and to obtain
more accurate results than tree-reweighted algorithms that optimize over the
local consistency relaxation. Theoretically, we bound the sub-optimality for
the proposed algorithm despite the TRW objective having unbounded gradients at
the boundary of the marginal polytope. Empirically, we demonstrate the
increased quality of results found by tightening the relaxation over the
marginal polytope as well as the spanning tree polytope on synthetic and
real-world instances.
Comment: 25 pages, 12 figures. To appear in Neural Information Processing
Systems (NIPS) 2015. Corrected reference and cleaned up bibliography.
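The core loop the abstract describes is the conditional gradient (Frank-Wolfe) method, where a linear minimization oracle plays the role of the repeated MAP call. A minimal sketch follows; the toy quadratic objective over the probability simplex and all names here are illustrative assumptions, not the paper's TRW objective:

```python
def frank_wolfe(grad, lmo, x0, iters=2000):
    """Conditional-gradient (Frank-Wolfe) loop: at each step, call a
    linear minimization oracle (the analogue of a MAP call), then move
    toward the returned vertex with the standard step size 2/(t+2)."""
    x = list(x0)
    for t in range(iters):
        g = grad(x)
        s = lmo(g)                      # vertex minimizing <g, s>
        gamma = 2.0 / (t + 2.0)
        x = [(1 - gamma) * xi + gamma * si for xi, si in zip(x, s)]
    return x

# Toy objective: f(x) = sum((x - c)^2), minimized over the simplex.
c = [0.2, 0.5, 0.3]
grad = lambda x: [2 * (xi - ci) for xi, ci in zip(x, c)]

def lmo(g):
    # Over the probability simplex, the linear oracle returns the
    # vertex (unit basis vector) with the smallest gradient entry.
    j = min(range(len(g)), key=lambda i: g[i])
    return [1.0 if i == j else 0.0 for i in range(len(g))]

x = frank_wolfe(grad, lmo, [1/3, 1/3, 1/3])  # converges toward c
```

The iterates stay inside the feasible polytope by construction, which is what lets the method handle the marginal polytope directly rather than a relaxation of it.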
Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
There has been significant recent interest in parallel graph processing due
to the need to quickly analyze the large graphs available today. Many graph
codes have been designed for distributed memory or external memory. However,
today even the largest publicly-available real-world graph (the Hyperlink Web
graph with over 3.5 billion vertices and 128 billion edges) can fit in the
memory of a single commodity multicore server. Nevertheless, most experimental
work in the literature reports results on much smaller graphs, and the ones for
the Hyperlink graph use distributed or external memory. Therefore, it is
natural to ask whether we can efficiently solve a broad class of graph problems
on this graph in memory.
This paper shows that theoretically-efficient parallel graph algorithms can
scale to the largest publicly-available graphs using a single machine with a
terabyte of RAM, processing them in minutes. We give implementations of
theoretically-efficient parallel algorithms for 20 important graph problems. We
also present the optimizations and techniques that we used in our
implementations, which were crucial in enabling us to process these large
graphs quickly. We show that the running times of our implementations
outperform existing state-of-the-art implementations on the largest real-world
graphs. For many of the problems that we consider, this is the first time they
have been solved on graphs at this scale. We have made the implementations
developed in this work publicly-available as the Graph-Based Benchmark Suite
(GBBS).
Comment: This is the full version of the paper appearing in the ACM Symposium
on Parallelism in Algorithms and Architectures (SPAA), 2018.
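The implementations the abstract describes are shared-memory parallel codes; many of them build on the level-synchronous frontier pattern. As a language-agnostic illustration (the toy graph is an assumption, and the per-frontier loop would be a parallel-for with atomic updates in a real framework), here is a sequential BFS sketch of that pattern:

```python
def bfs_levels(adj, src):
    """Level-synchronous BFS: expand one frontier per round. In a
    shared-memory parallel framework each frontier is processed in
    parallel; here the per-frontier loop is sequential for clarity."""
    level = {src: 0}
    frontier = [src]
    d = 0
    while frontier:
        d += 1
        next_frontier = []
        for u in frontier:               # parallel-for in a real framework
            for v in adj[u]:
                if v not in level:       # compare-and-swap when parallel
                    level[v] = d
                    next_frontier.append(v)
        frontier = next_frontier
    return level

# Toy undirected graph: edges 0-1, 0-2, 1-3, 2-3, 3-4.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
levels = bfs_levels(adj, 0)  # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The round-by-round structure is what gives such algorithms both provable work/depth bounds and good cache behavior on a single large-memory machine.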
Learning Generalized Reactive Policies using Deep Neural Networks
We present a new approach to learning for planning, where knowledge acquired
while solving a given set of planning problems is used to plan faster in
related, but new problem instances. We show that a deep neural network can be
used to learn and represent a \emph{generalized reactive policy} (GRP) that
maps a problem instance and a state to an action, and that the learned GRPs
efficiently solve large classes of challenging problem instances. In contrast
to prior efforts in this direction, our approach significantly reduces the
dependence of learning on handcrafted domain knowledge or feature selection.
Instead, the GRP is trained from scratch using a set of successful execution
traces. We show that our approach can also be used to automatically learn a
heuristic function that can be used in directed search algorithms. We evaluate
our approach using an extensive suite of experiments on two challenging
planning problem domains and show that our approach facilitates learning
complex decision making policies and powerful heuristic functions with minimal
human input. Videos of our results are available at goo.gl/Hpy4e3
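The GRP in the paper is a deep network trained on successful execution traces. As a hedged stand-in for that idea (purely illustrative, with hypothetical names and a toy one-dimensional domain), the sketch below memorizes (state, action) pairs from traces and answers queries by 1-nearest neighbor:

```python
def fit_policy(traces):
    """Build a trivial reactive policy from successful execution
    traces: memorize (state, action) pairs and answer new states by
    1-nearest neighbor. A stand-in for the deep network in the paper,
    meant only to show the traces-to-policy mapping."""
    pairs = [(tuple(s), a) for trace in traces for (s, a) in trace]

    def policy(state):
        dist = lambda p: sum((x - y) ** 2 for x, y in zip(p[0], state))
        return min(pairs, key=dist)[1]

    return policy

# Toy domain: state = (x,) on a line, actions move toward a goal at x=0.
traces = [[((3,), "left"), ((2,), "left"), ((1,), "left")],
          [((-2,), "right"), ((-1,), "right")]]
policy = fit_policy(traces)
# policy((5,)) generalizes from the memorized traces to an unseen state.
```

Unlike this lookup table, the learned network in the paper generalizes across problem instances, not just across states; the sketch only captures the supervised mapping from trace data to actions.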