15,565 research outputs found

    A New Look at the Easy-Hard-Easy Pattern of Combinatorial Search Difficulty

    The easy-hard-easy pattern in the difficulty of combinatorial search problems as constraints are added has been explained as due to a competition between the decrease in number of solutions and increased pruning. We test the generality of this explanation by examining one of its predictions: if the number of solutions is held fixed by the choice of problems, then increased pruning should lead to a monotonic decrease in search cost. Instead, for some search methods, we find the easy-hard-easy pattern in median search cost even when the number of solutions is held constant. This generalizes previous observations of this pattern and shows that the existing theory does not explain the full range of the peak in search cost. In these cases the pattern appears to be due to changes in the size of the minimal unsolvable subproblems, rather than changing numbers of solutions. Comment: See http://www.jair.org/ for any accompanying file
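    The experiment described above can be sketched in a few lines: generate random binary CSPs at increasing constraint counts, solve each with a plain backtracking search, and report the median node count. Everything below (problem sizes, tightness, and the solver itself) is an illustrative assumption, not the paper's actual setup, which additionally controls the number of solutions.

```python
# Illustrative sketch only: median backtracking cost on random binary CSPs
# as constraints are added. Sizes, tightness, and the solver are assumptions,
# not the paper's setup (which also holds the number of solutions fixed).
import random
from statistics import median

def random_csp(n_vars, domain, n_constraints, tightness, rng):
    """Each binary constraint forbids a fixed fraction of value pairs."""
    pairs = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)]
    all_tuples = [(a, b) for a in domain for b in domain]
    constraints = {}
    for (i, j) in rng.sample(pairs, min(n_constraints, len(pairs))):
        constraints[(i, j)] = set(rng.sample(all_tuples,
                                             int(tightness * len(all_tuples))))
    return constraints

def backtrack_cost(n_vars, domain, constraints):
    """Nodes visited by chronological backtracking, stopping at the first solution."""
    nodes = 0

    def consistent(assignment, var, val):
        for (i, j), forbidden in constraints.items():
            if j == var and i in assignment and (assignment[i], val) in forbidden:
                return False
            if i == var and j in assignment and (val, assignment[j]) in forbidden:
                return False
        return True

    def search(assignment):
        nonlocal nodes
        if len(assignment) == n_vars:
            return True
        var = len(assignment)              # fixed variable order
        for val in domain:
            nodes += 1
            if consistent(assignment, var, val):
                assignment[var] = val
                if search(assignment):
                    return True
                del assignment[var]
        return False

    search({})
    return nodes

rng = random.Random(0)
domain = list(range(3))
for n_constraints in range(5, 45, 5):
    costs = [backtrack_cost(10, domain,
                            random_csp(10, domain, n_constraints, 0.4, rng))
             for _ in range(50)]
    print(n_constraints, median(costs))
```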

    Nested quantum search and NP-complete problems

    A quantum algorithm is known that solves an unstructured search problem in a number of iterations of order \sqrt{d}, where d is the dimension of the search space, whereas any classical algorithm necessarily scales as O(d). It is shown here that an improved quantum search algorithm can be devised that exploits the structure of a tree search problem by nesting this standard search algorithm. The number of iterations required to find the solution of an average instance of a constraint satisfaction problem scales as \sqrt{d^\alpha}, with a constant \alpha < 1 depending on the nesting depth and the problem considered. When applying a single nesting level to a problem with constraints of size 2, such as the graph coloring problem, this constant \alpha is estimated to be around 0.62 for average instances of maximum difficulty. This corresponds to a square-root speedup over a classical nested search algorithm, of which our presented algorithm is the quantum counterpart. Comment: 18 pages RevTeX, 3 Postscript figure
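    A short side-by-side of the scalings quoted in the abstract (the value \alpha ≈ 0.62 is the paper's estimate for a single nesting level on average graph-coloring instances of maximum difficulty):

```latex
\begin{align*}
  \text{classical unstructured search:} &\quad O(d) \\
  \text{standard quantum search:}       &\quad O\bigl(\sqrt{d}\bigr) \\
  \text{classical nested search:}       &\quad O\bigl(d^{\alpha}\bigr), \qquad \alpha < 1 \\
  \text{nested quantum search:}         &\quad O\bigl(\sqrt{d^{\alpha}}\bigr) = O\bigl(d^{\alpha/2}\bigr)
\end{align*}
```

    With \alpha ≈ 0.62, the nested quantum algorithm therefore takes on the order of d^{0.31} iterations, versus d^{0.62} for its classical nested counterpart.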

    Computational Complexity for Physicists

    These lecture notes are an informal introduction to the theory of computational complexity and its links to quantum computing and statistical mechanics. Comment: references updated, reprint available from http://itp.nat.uni-magdeburg.de/~mertens/papers/complexity.shtm

    The Use of Proof Planning for Cooperative Theorem Proving

    We describe barnacle: a co-operative interface to the clam inductive theorem proving system. For the foreseeable future, there will be theorems which cannot be proved completely automatically, so the ability to allow human intervention is desirable; for this intervention to be productive, the problem of orienting the user in the proof attempt must be overcome. There are many semi-automatic theorem provers: we call our style of theorem proving co-operative, in that the skills of both human and automaton are used each to their best advantage, and used together may find a proof where other methods fail. The co-operative nature of the barnacle interface is made possible by the proof planning technique underpinning clam. Our claim is that proof planning makes new kinds of user interaction possible. Proof planning is a technique for guiding the search for a proof in automatic theorem proving. Common patterns of reasoning in proofs are identified and represented computationally as proof plans, which can then be used to guide the search for proofs of new conjectures. We have harnessed the explanatory power of proof planning to enable the user to understand where the automatic prover got to and why it is stuck. A user can analyse the failed proof in terms of clam's specification language, and hence override the prover to force or prevent the application of a tactic, or discover a proof patch. This patch might be to apply further rules or tactics to bridge the gap between the effects of previous tactics and the preconditions needed by a currently inapplicable tactic.
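    To make the proof-planning idea above concrete, here is a minimal sketch in which a proof plan is a named method pairing a precondition with a tactic, and a failed plan reports where the prover got stuck. All names, the toy goal representation, and the example method are hypothetical; nothing here reflects clam's or barnacle's actual specification language.

```python
# Purely illustrative sketch of proof planning: methods pair a precondition
# with a tactic; the planner explains where it got stuck. Names and goal
# representation are hypothetical, not clam's.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Method:
    name: str
    precondition: Callable[[str], bool]   # does this method apply to the goal?
    tactic: Callable[[str], List[str]]    # rewrite the goal into subgoals

def plan(goal: str, methods: List[Method], depth: int = 5) -> bool:
    """Depth-bounded search guided by method preconditions."""
    if goal == "true":
        return True
    if depth == 0:
        return False
    for m in methods:
        if m.precondition(goal):
            subgoals = m.tactic(goal)
            if all(plan(g, methods, depth - 1) for g in subgoals):
                return True
    # The explanatory hook: report the goal and why no method applied.
    print(f"stuck at goal {goal!r}: no method's precondition holds")
    return False

# Toy usage: a hypothetical method that strips one "s(...)" layer per step.
methods = [Method("strip_s",
                  precondition=lambda g: g.startswith("s(") and g.endswith(")"),
                  tactic=lambda g: [g[2:-1]])]
print(plan("s(s(true))", methods))
```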

    Fingerprint databases for theorems

    We discuss the advantages of searchable, collaborative, language-independent databases of mathematical results, indexed by "fingerprints" of small and canonical data. Our motivating example is Neil Sloane's massively influential On-Line Encyclopedia of Integer Sequences. We hope to encourage the greater mathematical community to search for the appropriate fingerprints within each discipline, and to compile fingerprint databases of results wherever possible. The benefits of these databases are broad: advancing the state of knowledge, enhancing experimental mathematics, enabling researchers to discover unexpected connections between areas, and even improving the refereeing process for journal publication. Comment: to appear in Notices of the AM
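    The fingerprint idea can be illustrated with a toy version of the OEIS convention, indexing a result by the first few terms of its sequence; the storage scheme, entries, and function names below are illustrative assumptions only.

```python
# Toy fingerprint database in the OEIS spirit: a result is indexed by a
# small canonical fingerprint, here the first eight terms of a sequence.
# The storage scheme and entries are illustrative assumptions only.
def fingerprint(terms, k=8):
    return tuple(terms[:k])

database = {
    fingerprint([1, 1, 2, 3, 5, 8, 13, 21]): "Fibonacci numbers",
    fingerprint([1, 1, 2, 5, 14, 42, 132, 429]): "Catalan numbers",
}

def lookup(terms):
    return database.get(fingerprint(terms), "no match; a candidate for contribution")

print(lookup([1, 1, 2, 3, 5, 8, 13, 21]))   # -> Fibonacci numbers
```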

    Denoising Autoencoders for fast Combinatorial Black Box Optimization

    Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network that has proven competitive with BOA. For the considered problem instances, DAE-EDA is considerably faster than BOA and RBM-EDA, sometimes by orders of magnitude. The number of fitness evaluations is higher than for BOA, but competitive with RBM-EDA. These results show that DAEs can be useful tools for problems with low but non-negligible fitness evaluation costs. Comment: corrected typos and small inconsistencie
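    The EDA skeleton behind DAE-EDA can be sketched on a toy one-max problem as below. The neural denoising autoencoder itself is replaced by a trivial stand-in (corrupt a selected parent, then resample the corrupted bits from the selected population's marginals), so only the select/train/sample loop reflects the abstract; all problem choices, sizes, and rates are assumptions.

```python
# Hedged sketch of the EDA loop used by DAE-EDA, on a toy one-max problem.
# The denoising autoencoder is replaced by a trivial stand-in model
# (corrupt a parent, resample corrupted bits from population marginals);
# problem, sizes, and rates are illustrative assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_bits, pop_size, n_parents, corruption = 40, 100, 30, 0.2

def fitness(pop):                        # one-max: number of ones per row
    return pop.sum(axis=1)

def fit_model(parents):                  # stand-in for training the DAE
    return parents.mean(axis=0)          # per-bit marginal probabilities

def sample(parents, marginals, n):       # stand-in for DAE-based sampling
    children = parents[rng.integers(len(parents), size=n)].copy()
    mask = rng.random(children.shape) < corruption          # corrupt some bits
    probs = np.broadcast_to(marginals, children.shape)[mask]
    children[mask] = (rng.random(probs.shape) < probs).astype(int)
    return children

pop = rng.integers(0, 2, size=(pop_size, n_bits))
best = 0
for gen in range(50):
    f = fitness(pop)
    best = max(best, int(f.max()))
    if best == n_bits:
        break
    parents = pop[np.argsort(f)[-n_parents:]]                # truncation selection
    pop = sample(parents, fit_model(parents), pop_size)
print("best one-max fitness:", best, "after", gen + 1, "generations")
```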

    Performance of Humans vs. Exploration Algorithms on the Tower of London Test

    The Tower of London Test (TOL) used to assess executive functions was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the tasks and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves) the execution time of young participants did not increase significantly, whereas for exploration algorithms the execution time keeps increasing exponentially. A pre- and post-test control task showed a 25% improvement in visuo-motor skills, but this was insufficient to explain this result. The findings suggest that naive participants used systematic exploration to solve the problem but, with practice, developed markedly more efficient strategies using the information acquired during the test.
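    The exploration-algorithm baseline can be illustrated with a breadth-first search over the Tower of London state space, using the number of states expanded as a proxy for execution time. The peg capacities follow the standard TOL layout, while the start and goal configurations below are arbitrary examples, not the study's tasks.

```python
# Illustrative sketch (not the study's code): breadth-first exploration of
# the Tower of London state space, counting expanded states as a proxy for
# the execution time of a bounded-memory exploration algorithm.
from collections import deque

CAPACITY = (3, 2, 1)                    # standard TOL peg capacities

def moves(state):
    """Yield all states reachable by moving one top ball to another peg."""
    for src in range(3):
        if not state[src]:
            continue
        ball, rest = state[src][-1], state[src][:-1]
        for dst in range(3):
            if dst != src and len(state[dst]) < CAPACITY[dst]:
                nxt = list(state)
                nxt[src] = rest
                nxt[dst] = state[dst] + (ball,)
                yield tuple(nxt)

def bfs_cost(start, goal):
    """Return (solution length, number of states expanded)."""
    frontier, seen, expanded = deque([(start, 0)]), {start}, 0
    while frontier:
        state, depth = frontier.popleft()
        expanded += 1
        if state == goal:
            return depth, expanded
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))

start = (("R", "G", "B"), (), ())       # arbitrary example colour arrangement
goal = ((), ("G", "B"), ("R",))
print(bfs_cost(start, goal))            # -> (moves needed, states expanded)
```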