Trying again to fail-first
For constraint satisfaction problems (CSPs), Haralick and Elliott [1] introduced the Fail-First Principle and defined it in terms of minimizing branch depth. By devising a range of variable ordering heuristics, each in turn trying harder to fail first, Smith and Grant [2] showed that adherence to this strategy does not guarantee a reduction in search effort. The present work builds on Smith and Grant. It benefits from the development of a new framework for characterizing heuristic performance that defines two policies: one concerned with enhancing the likelihood of correctly extending a partial solution, the other with minimizing the effort required to prove insolubility. The Fail-First Principle can be restated as calling for adherence to the second, fail-first policy, while discounting the other, promise policy. Our work corrects some deficiencies in the work of Smith and Grant, and goes on to confirm their finding that the Fail-First Principle, as originally defined, is insufficient. We then show that adherence to the fail-first policy must be measured in terms of the size of insoluble subtrees, not branch depth. We also show that for soluble problems, both policies must be considered in evaluating heuristic performance. Hence, even in its proper form the Fail-First Principle is insufficient. Finally, we show that the "FF" series of heuristics devised by Smith and Grant is a powerful tool for evaluating heuristic performance, including the subtle relations between heuristic features and adherence to a policy.
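The fail-first policy discussed above is most familiar as the minimum-remaining-values ("dom") variable ordering. As a rough illustration (not the paper's code; the function names and the graph-coloring instance below are made up), a minimal backtracking search that branches on the smallest-domain variable first might look like:

```python
# Minimal backtracking search with the classic fail-first ordering:
# branch on the variable with the fewest remaining values (MRV / "dom").
# Domains are static here; a full solver would shrink them via forward
# checking so that the smallest domain really is the live domain.

def consistent(constraints, x, xv, y, yv):
    """True if assigning x=xv and y=yv violates no binary constraint."""
    if (x, y) in constraints:
        return (xv, yv) in constraints[(x, y)]
    if (y, x) in constraints:
        return (yv, xv) in constraints[(y, x)]
    return True

def backtrack(domains, constraints, assignment=None):
    """domains: {var: set(values)}; constraints: {(x, y): set of allowed pairs}."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(domains):
        return dict(assignment)
    # Fail-first: pick the unassigned variable with the smallest domain.
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for val in sorted(domains[var]):
        if all(consistent(constraints, var, val, w, assignment[w])
               for w in assignment):
            assignment[var] = val
            result = backtrack(domains, constraints, assignment)
            if result is not None:
                return result
            del assignment[var]
    return None  # insoluble subtree: the sooner search gets here, the better
```

On a small graph-coloring instance this finds a proper coloring; the point of the fail-first policy is that insoluble subtrees like the `return None` branch are reached, and refuted, as high in the tree as possible.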
Experimental Evaluation of Branching Schemes for the CSP
The search strategy of a CP solver is determined by the variable and value
ordering heuristics it employs and by the branching scheme it follows. Although
the effects of variable and value ordering heuristics on search effort have
been widely studied, the effects of different branching schemes have received
less attention. In this paper we study this effect through an experimental
evaluation that includes standard branching schemes such as 2-way, d-way, and
dichotomic domain splitting, as well as variations of set branching where
branching is performed on sets of values. We also propose and evaluate a
generic approach to set branching where the partition of a domain into sets is
created using the scores assigned to values by a value ordering heuristic, and
a clustering algorithm from machine learning. Experimental results demonstrate
that although the exponential differences predicted in theory between 2-way and
d-way branching are not very common, the choice of branching scheme can still
make quite a difference on certain classes of problems. Set branching methods
are very competitive with 2-way branching and
outperform it on some problem classes. A statistical analysis of the results
reveals that our generic clustering-based set branching method is the best
among the methods compared.
Comment: To appear in the 3rd workshop on techniques for implementing
constraint programming systems (TRICS workshop at the 16th CP Conference),
St. Andrews, Scotland 201
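One way to read the generic set-branching proposal above: score each domain value with a value ordering heuristic, cluster the scores, and branch once per cluster. The sketch below is an assumption-laden illustration using a tiny hand-rolled one-dimensional two-means; the paper's actual clustering algorithm and scoring heuristic may well differ:

```python
# Set branching guided by a value-ordering heuristic: score every value,
# cluster the scores with a tiny 1-D two-means, and branch once per
# cluster instead of once per value (d-way) or per value/rest (2-way).
# score() and the fixed two-cluster choice are illustrative assumptions.

def set_branches(domain, score, iters=20):
    """Partition `domain` into at most two branching sets by score."""
    scored = [(v, score(v)) for v in domain]
    xs = sorted(s for _, s in scored)
    centers = [xs[0], xs[-1]]          # initialise centers at the extremes
    buckets = [[], []]
    for _ in range(iters):
        buckets = [[], []]
        for v, s in scored:
            j = min((0, 1), key=lambda i: abs(s - centers[i]))
            buckets[j].append((v, s))
        # recompute each center as the mean of its bucket (keep it if empty)
        centers = [sum(s for _, s in b) / len(b) if b else c
                   for b, c in zip(buckets, centers)]
    return [[v for v, _ in b] for b in buckets if b]
```

A solver would then post one branch per returned set (e.g. `X in S1` / `X in S2`), refining the partition again deeper in the tree.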
Rational Deployment of CSP Heuristics
Heuristics are crucial tools in decreasing search effort in varied fields of
AI. In order to be effective, a heuristic must be efficient to compute, as well
as provide useful information to the search algorithm. However, some well-known
heuristics which do well in reducing backtracking are so heavy that the gain of
deploying them in a search algorithm might be outweighed by their overhead.
We propose a rational metareasoning approach to decide when to deploy
heuristics, using CSP backtracking search as a case study. In particular, a
value of information approach is taken to adaptive deployment of solution-count
estimation heuristics for value ordering. Empirical results show that indeed
the proposed mechanism successfully balances the tradeoff between decreasing
backtracking and heuristic computational overhead, resulting in a significant
overall search time reduction.
Comment: 7 pages, 2 figures, to appear in IJCAI-2011, http://www.ijcai.org
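The value-of-information idea above can be caricatured in a few lines: deploy the heavy heuristic only when its expected time saving exceeds its cost. All names and the quantities fed to the decision rule are invented for illustration; the paper's metareasoning model is more principled than this:

```python
# Rational deployment caricature: spend time on a heavy value-ordering
# heuristic (e.g. a solution-count estimator) only when its expected
# saving beats its own computational cost. Every quantity passed to
# should_deploy() is an invented estimator output, not the paper's model.

def should_deploy(heuristic_cost, p_backtrack_reduction, expected_subtree_time):
    """Net value of information: expected time saved minus the heuristic's cost."""
    return p_backtrack_reduction * expected_subtree_time > heuristic_cost

def order_values(domain, cheap_score, heavy_score,
                 heuristic_cost, p_reduction, subtree_time):
    """Choose an ordering key adaptively, then sort best-score-first."""
    if should_deploy(heuristic_cost, p_reduction, subtree_time):
        key = heavy_score   # expensive but informative ordering
    else:
        key = cheap_score   # cheap default ordering
    return sorted(domain, key=key, reverse=True)
```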
Improving Optimization Bounds using Machine Learning: Decision Diagrams meet Deep Reinforcement Learning
Finding tight bounds on the optimal solution is a critical element of
practical solution methods for discrete optimization problems. In the last
decade, decision diagrams (DDs) have brought a new perspective on obtaining
upper and lower bounds that can be significantly better than classical bounding
mechanisms, such as linear relaxations. It is well known that the quality of
the bounds achieved through this flexible bounding method is highly reliant on
the ordering of variables chosen for building the diagram, and finding an
ordering that optimizes standard metrics is an NP-hard problem. In this paper,
we propose an innovative and generic approach based on deep reinforcement
learning for obtaining an ordering for tightening the bounds obtained with
relaxed and restricted DDs. We apply the approach to both the Maximum
Independent Set Problem and the Maximum Cut Problem. Experimental results on
synthetic instances show that the deep reinforcement learning approach, by
achieving tighter objective function bounds, generally outperforms ordering
methods commonly used in the literature when the distribution of instances is
known. To the best of the authors' knowledge, this is the first paper to apply
machine learning to directly improve relaxation bounds obtained by
general-purpose bounding mechanisms for combinatorial optimization problems.
Comment: Accepted and presented at AAAI'1
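To make the role of relaxed DDs concrete, here is a hedged sketch (not the paper's implementation) of how a width-limited relaxed decision diagram yields an upper bound for the Maximum Independent Set Problem under a given variable ordering; choosing `order` well is exactly the job of the learned policy described above:

```python
# Relaxed decision diagram for Maximum Independent Set: build the DD
# layer by layer under a given variable ordering; whenever a layer
# exceeds the width limit, merge states by set union. Merging is a
# relaxation, so the final value is a valid upper bound on the optimum.
# With an unbounded width the diagram is exact. Illustrative sketch only.

def mis_relaxed_bound(n, edges, order, max_width=4):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # A state is the frozenset of still-eligible vertices; the value is
    # the best number of chosen vertices reaching that state.
    layer = {frozenset(range(n)): 0}
    for v in order:
        nxt = {}
        for state, val in layer.items():
            s_out = state - {v}                      # exclude v
            nxt[s_out] = max(nxt.get(s_out, 0), val)
            if v in state:                           # include v if eligible
                s_in = state - {v} - adj[v]
                nxt[s_in] = max(nxt.get(s_in, 0), val + 1)
        # Relax: repeatedly merge the two lowest-value states by union
        # until the layer fits within the width limit.
        while len(nxt) > max_width:
            (s1, v1), (s2, v2) = sorted(nxt.items(), key=lambda kv: kv[1])[:2]
            del nxt[s1], nxt[s2]
            merged = s1 | s2
            nxt[merged] = max(nxt.get(merged, 0), v1, v2)
        layer = nxt
    return max(layer.values())  # upper bound on the size of a maximum IS
```

Restricted DDs (the dual lower-bounding construction) would instead *drop* low-value states when the width limit is hit, keeping every remaining path feasible.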
Decompositions of Grammar Constraints
A wide range of constraints can be compactly specified using automata or
formal languages. In a sequence of recent papers, we have shown that an
effective means to reason with such specifications is to decompose them into
primitive constraints. We can then, for instance, use state-of-the-art SAT
solvers and profit from their advanced features like fast unit propagation,
clause learning, and conflict-based search heuristics. This approach holds
promise for solving combinatorial problems in scheduling, rostering, and
configuration, as well as problems in more diverse areas like bioinformatics,
software testing, and natural language processing. In addition, decomposition
may be an effective method to propagate other global constraints.
Comment: Proceedings of the Twenty-Third AAAI Conference on Artificial
Intelligence
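The decomposition idea can be illustrated on the simplest case, a regular-language (automaton) constraint: introduce a state variable between consecutive positions and post one ternary transition constraint per position, plus unary constraints on the first and last state variables. The sketch below uses illustrative names; a SAT encoding would further map each variable/value pair to a Boolean literal in the usual way. A reference DFA run is included so the decomposition can be checked for faithfulness:

```python
# Decompose a regular (automaton) constraint over X_0..X_{n-1} into
# primitive constraints: state variables S_0..S_n with one ternary table
# constraint S_{i+1} = delta(S_i, X_i) per position, a unary constraint
# fixing S_0 to the start state, and one restricting S_n to accepting
# states. Variable names and the tuple representation are assumptions.

def decompose_regular(n, delta, start, accepting):
    """delta: dict mapping (state, symbol) -> state. Returns (ternary, unary)."""
    table = {(s, a, t) for (s, a), t in delta.items()}
    ternary = [('S%d' % i, 'X%d' % i, 'S%d' % (i + 1), table)
               for i in range(n)]
    unary = [('S0', {start}), ('S%d' % n, set(accepting))]
    return ternary, unary

def accepts(word, delta, start, accepting):
    """Reference DFA run, for checking that the decomposition is faithful."""
    s = start
    for a in word:
        if (s, a) not in delta:
            return False
        s = delta[(s, a)]
    return s in accepting
```

Because each ternary constraint is a plain table, a SAT solver's unit propagation on the standard encoding recovers the behaviour of specialised automaton propagators, which is what makes the decomposition route attractive.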
Our World Isn't Organized into Levels
Levels of organization and their use in science have received increased philosophical attention of late, including challenges to the well-foundedness or widespread usefulness of levels concepts. One kind of response to these challenges has been to advocate a more precise and specific levels concept that is coherent and useful. Another kind of response has been to argue that the levels concept should be taken as a heuristic, embracing its ambiguity and the possibility of exceptions as acceptable consequences of its usefulness. In this chapter, I suggest that each of these strategies faces its own attendant downsides, and that pursuit of both strategies (by different thinkers) compounds the difficulties. That both kinds of approaches are advocated is, I think, illustrative of the problems plaguing the concept of levels of organization. I end by suggesting that the invocation of levels may mislead scientific and philosophical investigations more than it informs them, so our use of the levels concept should be updated accordingly.