Experimental evaluation of preprocessing algorithms for constraint satisfaction problems
This paper presents an experimental evaluation of two orthogonal schemes for preprocessing constraint satisfaction problems (CSPs). The first of these schemes involves a class of local consistency techniques that includes directional arc consistency, directional path consistency, and adaptive consistency. The other scheme concerns the prearrangement of variables in a linear order to facilitate an efficient search. In the first series of experiments, we evaluated the effect of each of the local consistency techniques on backtracking and its common enhancement, backjumping. Surprisingly, although adaptive consistency has the best worst-case complexity bounds, we found that it exhibited the worst performance unless the constraint graph was very sparse. Directional arc consistency (followed by either backjumping or backtracking) and backjumping (without any preprocessing) outperformed all other techniques; moreover, the former dominated the latter in computationally intensive situations. The second series of experiments suggests that maximum cardinality and minimum width are the best pre-ordering (i.e., static ordering) strategies, while dynamic search rearrangement is superior to all the preorderings studied.
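The directional arc consistency scheme evaluated above can be made concrete with a minimal sketch for binary CSPs. The data structures here (domains as sets, constraints as allowed-pair sets, a fixed variable order) are illustrative choices, not taken from the paper.

```python
def directional_arc_consistency(domains, constraints, order):
    """Enforce directional arc consistency (DAC) on a binary CSP.

    domains:     dict var -> set of values
    constraints: dict (x, y) -> set of allowed (vx, vy) pairs
    order:       list of variables; arcs into later variables are used
                 to prune earlier ones
    Returns False if some domain is wiped out, True otherwise.
    """
    pos = {v: i for i, v in enumerate(order)}
    # Process variables from last to first, pruning earlier variables
    for y in reversed(order):
        for (a, b), allowed in constraints.items():
            if b == y and pos[a] < pos[y]:
                # Keep only values of a that have some support in y's domain
                domains[a] = {va for va in domains[a]
                              if any((va, vy) in allowed for vy in domains[y])}
                if not domains[a]:
                    return False
    return True

# Toy example: the constraint x < y over small domains
doms = {"x": {1, 2, 3}, "y": {1, 2}}
cons = {("x", "y"): {(vx, vy) for vx in range(1, 4)
                     for vy in range(1, 3) if vx < vy}}
directional_arc_consistency(doms, cons, ["x", "y"])
print(doms["x"])  # only x = 1 keeps a support, since max y is 2
```

A search that runs after this pass never needs to backtrack over values of `x` that already lack a support in `y`, which is the effect the experiments measure.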
Spectral Detection on Sparse Hypergraphs
We consider the problem of the assignment of nodes into communities from a
set of hyperedges, where every hyperedge is a noisy observation of the
community assignment of the adjacent nodes. We focus in particular on the
sparse regime where the number of edges is of the same order as the number of
vertices. We propose a spectral method based on a generalization of the
non-backtracking Hashimoto matrix into hypergraphs. We analyze its performance
on a planted generative model and compare it with other spectral methods and
with Bayesian belief propagation (which was conjectured to be asymptotically
optimal for this model). We conclude that the proposed spectral method detects
communities whenever belief propagation does, while having the important
advantages to be simpler, entirely nonparametric, and to be able to learn the
rule according to which the hyperedges were generated without prior
information.
Comment: 8 pages, 5 figures
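The non-backtracking (Hashimoto) matrix that this paper generalizes to hypergraphs is easy to write down for an ordinary graph. The sketch below builds it for a toy triangle; the hypergraph generalization and the spectral clustering step are in the paper and not shown here.

```python
from itertools import product

def hashimoto_matrix(edges):
    """Build the non-backtracking (Hashimoto) matrix B of an undirected graph.

    B is indexed by directed edges: B[(u,v)][(x,w)] = 1 iff v == x and w != u,
    i.e. the second edge continues the first without immediately backtracking.
    Returns (directed_edges, B) with B as a dict of dicts of 0/1 entries.
    """
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    B = {e: {f: 0 for f in directed} for e in directed}
    for (u, v), (x, w) in product(directed, repeat=2):
        if v == x and w != u:
            B[(u, v)][(x, w)] = 1
    return directed, B

# Triangle graph: every directed edge has exactly one
# non-backtracking successor, so every row of B sums to 1
des, B = hashimoto_matrix([(0, 1), (1, 2), (2, 0)])
row_sums = [sum(B[e].values()) for e in des]
print(row_sums)  # [1, 1, 1, 1, 1, 1]
```

In general the row sum for edge (u, v) is deg(v) - 1, which is why this operator behaves well on the sparse graphs the abstract targets.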
Quantum walk speedup of backtracking algorithms
We describe a general method to obtain quantum speedups of classical
algorithms which are based on the technique of backtracking, a standard
approach for solving constraint satisfaction problems (CSPs). Backtracking
algorithms explore a tree whose vertices are partial solutions to a CSP in an
attempt to find a complete solution. Assume there is a classical backtracking
algorithm which finds a solution to a CSP on n variables, or outputs that none
exists, and whose corresponding tree contains T vertices, each vertex
corresponding to a test of a partial solution. Then we show that there is a
bounded-error quantum algorithm which completes the same task using O(sqrt(T)
n^(3/2) log n) tests. In particular, this quantum algorithm can be used to
speed up the DPLL algorithm, which is the basis of many of the most efficient
SAT solvers used in practice. The quantum algorithm is based on the use of a
quantum walk algorithm of Belovs to search in the backtracking tree. We also
discuss how, for certain distributions on the inputs, the algorithm can lead to
an exponential reduction in expected runtime.
Comment: 23 pages; v2: minor changes to presentation
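The quantity T above, the number of tree vertices a classical backtracking search tests, can be made concrete with a toy CSP. The sketch below counts T for n-queens; it illustrates the classical tree being searched, not the quantum walk itself.

```python
def backtrack_count(n, partial=()):
    """Classical backtracking for the n-queens CSP, counting tree vertices.

    Each call tests one partial solution, i.e. visits one tree vertex;
    the quantum walk result replaces these T classical tests with
    roughly O(sqrt(T) * poly(n)) quantum ones.
    Returns (first solution found or None, number of vertices visited).
    """
    visited = 1  # this partial assignment is one tree vertex
    if len(partial) == n:
        return list(partial), visited
    row = len(partial)
    for col in range(n):
        # Consistent iff no earlier queen shares a column or diagonal
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(partial)):
            sol, sub = backtrack_count(n, partial + (col,))
            visited += sub
            if sol is not None:
                return sol, visited
    return None, visited

sol, T = backtrack_count(6)
print(sol, T)  # first 6-queens solution, and the tree size T it cost
```

Even for this small instance T is noticeably larger than the depth n of the tree, which is the gap the sqrt(T) speedup attacks.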
Rational Deployment of CSP Heuristics
Heuristics are crucial tools in decreasing search effort in varied fields of
AI. In order to be effective, a heuristic must be efficient to compute, as well
as provide useful information to the search algorithm. However, some well-known
heuristics which do well in reducing backtracking are so heavy that the gain of
deploying them in a search algorithm might be outweighed by their overhead.
We propose a rational metareasoning approach to decide when to deploy
heuristics, using CSP backtracking search as a case study. In particular, a
value of information approach is taken to adaptive deployment of solution-count
estimation heuristics for value ordering. Empirical results show that indeed
the proposed mechanism successfully balances the tradeoff between decreasing
backtracking and heuristic computational overhead, resulting in a significant
overall search time reduction.
Comment: 7 pages, 2 figures, to appear in IJCAI-2011, http://www.ijcai.org
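At its core, the value-of-information decision described above reduces to comparing a heuristic's expected saving against its computation cost. The rule and the numbers below are hypothetical, purely to illustrate the tradeoff, not the paper's actual estimator.

```python
def should_deploy(p_prune, backtrack_cost, heuristic_cost):
    """Toy value-of-information rule (names and model hypothetical):
    deploy the heuristic iff its expected saving in backtracking work
    exceeds the cost of computing it at this node."""
    expected_saving = p_prune * backtrack_cost
    return expected_saving > heuristic_cost

# A heavy heuristic is worth running only where pruning is likely enough
print(should_deploy(0.30, 100.0, 10.0))  # True:  expected saving 30 > cost 10
print(should_deploy(0.05, 100.0, 10.0))  # False: expected saving  5 < cost 10
```

A metareasoning search would re-evaluate this comparison per node, so the same heuristic is deployed in some subtrees and skipped in others.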
A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search Difficulty
The easy-hard-easy pattern in the difficulty of combinatorial search problems
as constraints are added has been explained as due to a competition between the
decrease in number of solutions and increased pruning. We test the generality
of this explanation by examining one of its predictions: if the number of
solutions is held fixed by the choice of problems, then increased pruning
should lead to a monotonic decrease in search cost. Instead, we find the
easy-hard-easy pattern in median search cost even when the number of solutions
is held constant, for some search methods. This generalizes previous
observations of this pattern and shows that the existing theory does not
explain the full range of the peak in search cost. In these cases the pattern
appears to be due to changes in the size of the minimal unsolvable subproblems,
rather than changing numbers of solutions.
Comment: See http://www.jair.org/ for any accompanying files
The backtracking survey propagation algorithm for solving random K-SAT problems
Discrete combinatorial optimization has a central role in many scientific
disciplines, however, for hard problems we lack linear time algorithms that
would allow us to solve very large instances. Moreover, it is still unclear
what are the key features that make a discrete combinatorial optimization
problem hard to solve. Here we study random K-satisfiability problems, which
are known to be very hard close to the SAT-UNSAT threshold,
where problems stop having solutions. We show that the backtracking survey
propagation algorithm, in a time practically linear in the problem size, is
able to find solutions very close to the threshold, in a region unreachable by
any other algorithm. All solutions found have no frozen variables, thus
supporting the conjecture that only unfrozen solutions can be found in linear
time, and that a problem becomes impossible to solve in linear time when all
solutions contain frozen variables.
Comment: 11 pages, 10 figures. v2: data largely improved and manuscript rewritten
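Random K-SAT instances of the kind studied here are simple to generate. The sketch below samples a formula at a given clause-to-variable ratio alpha (near the SAT-UNSAT threshold, roughly alpha ≈ 9.93 for K = 4 according to statistical-physics estimates, the instances become hard); the survey propagation machinery itself is not shown.

```python
import random

def random_ksat(n, alpha, k=4, seed=0):
    """Sample a random K-SAT formula with m = round(alpha * n) clauses,
    each over k distinct variables, each literal negated with prob. 1/2.

    A literal is a signed integer: +v means variable v, -v its negation.
    """
    rng = random.Random(seed)
    m = round(alpha * n)
    formula = []
    for _ in range(m):
        vars_ = rng.sample(range(1, n + 1), k)
        formula.append([v if rng.random() < 0.5 else -v for v in vars_])
    return formula

def satisfied(formula, assignment):
    """assignment: dict var -> bool; each clause needs one true literal."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

f = random_ksat(n=50, alpha=9.9, k=4)
print(len(f))  # 495 clauses: round(alpha * n)
```

Sweeping alpha toward the threshold with a generator like this is how one probes the region where, per the abstract, backtracking survey propagation still finds (unfrozen) solutions in near-linear time.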