53 research outputs found

    People Efficiently Explore the Solution Space of the Computationally Intractable Traveling Salesman Problem to Find Near-Optimal Tours

    Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity offers an explanation for this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current best solutions in such a way that edges belonging to the optimal solution (“good” edges) were significantly more likely to stay than other edges (“bad” edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants “ran out of ideas.” In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback, and evidence that human problem-solving may exploit the structure of hard problems, paralleling the behavior of state-of-the-art heuristics.
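
    The edge-retention analysis described in this abstract can be made concrete with a small sketch. The tours below are hypothetical toy data, not taken from the study; the sketch only shows how one might compare the survival rate of "good" edges (those also in the optimal tour) against "bad" edges between two consecutive trials.

```python
# Minimal sketch of an edge-retention comparison between consecutive trials.
# The tours below are hypothetical toy data, not from the study.

def tour_edges(tour):
    """Return the set of undirected edges of a cyclic tour."""
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def retention_rates(prev_tour, next_tour, optimal_tour):
    """Fraction of 'good' (in the optimum) and 'bad' edges of the
    previous tour that survive into the next tour."""
    prev, nxt, opt = map(tour_edges, (prev_tour, next_tour, optimal_tour))
    good, bad = prev & opt, prev - opt
    rate = lambda s: len(s & nxt) / len(s) if s else float("nan")
    return rate(good), rate(bad)

# Toy 6-city example: the participant keeps most optimal edges.
optimal = [0, 1, 2, 3, 4, 5]
trial_1 = [0, 1, 3, 2, 4, 5]
trial_2 = [0, 1, 2, 3, 5, 4]
print(retention_rates(trial_1, trial_2, optimal))  # -> (0.75, 0.0)
```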

    Exploiting Bounds in Operations Research and Artificial Intelligence

    Combinatorial optimization problems are ubiquitous in scientific research, engineering, and even our daily lives. A major research focus in developing combinatorial search algorithms has been on the attainment of efficient methods for deriving tight lower and upper bounds. These bounds restrict the search space of combinatorial optimization problems and facilitate the computation of what might otherwise be intractable problems. In this paper, we survey the history of the use of bounds in both AI and OR. While research has been extensive in both domains, until very recently it has been too narrowly focused and has overlooked great opportunities to exploit bounds. In the past, the focus has been on relaxations of constraints. We present methods for deriving bounds by tightening constraints, adding or deleting decision variables, and modifying the objective function. Then a formalization of the use of bounds as a two-step procedure is introduced. Finally, we discuss recent developments demonstrating how the use of this framework is conducive to eliciting methods that go beyond search-tree pruning.
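
    As a generic illustration of how lower and upper bounds restrict a search space (this is textbook branch and bound, not the framework proposed in the paper), the sketch below prunes a tiny 0/1 knapsack search tree whenever a fractional-relaxation upper bound cannot beat the incumbent lower bound.

```python
# Generic illustration of bound-based pruning: branch and bound on a
# tiny 0/1 knapsack instance. Not the paper's framework.

def knapsack_bb(values, weights, capacity):
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    best = 0  # incumbent lower bound: best feasible value found so far

    def upper_bound(k, cap, val):
        # Fractional relaxation of the remaining items gives an upper bound.
        for i in items[k:]:
            if weights[i] <= cap:
                cap -= weights[i]; val += values[i]
            else:
                return val + values[i] * cap / weights[i]
        return val

    def search(k, cap, val):
        nonlocal best
        best = max(best, val)
        if k == len(items) or upper_bound(k, cap, val) <= best:
            return  # bound: this subtree cannot beat the incumbent
        i = items[k]
        if weights[i] <= cap:                       # branch: take item i
            search(k + 1, cap - weights[i], val + values[i])
        search(k + 1, cap, val)                     # branch: skip item i

    search(0, capacity, 0)
    return best

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```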

    A Preference-Based Approach to Backbone Computation with Application to Argumentation

    The backbone of a constraint satisfaction problem consists of those variables that take the same value in all solutions. Algorithms for determining the backbone of propositional formulas, i.e., Boolean satisfiability (SAT) instances, find various real-world applications. From the knowledge representation and reasoning (KRR) perspective, one interesting connection is that of backbones and the so-called ideal semantics in abstract argumentation. In this paper, we propose a new backbone algorithm which makes use of a "SAT with preferences" solver, i.e., a SAT solver which is guaranteed to output a most preferred satisfying assignment w.r.t. a given preference over literals of the SAT instance at hand. We also show empirically that the proposed approach is particularly effective in computing the ideal semantics of argumentation frameworks, noticeably outperforming another state-of-the-art backbone solver as well as the winning approach of the recent ICCMA 2017 argumentation solver competition in the ideal semantics track.
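
    For context, a minimal sketch of the standard iterative backbone algorithm (assumption-based testing of candidate literals) is given below. It is not the preference-based method proposed in the paper, and it assumes the PySAT library for the underlying SAT solver.

```python
# Minimal sketch of the standard iterative backbone algorithm
# (assumption tests on the literals of one model), NOT the
# preference-based method proposed in the paper. Assumes PySAT.
from pysat.solvers import Glucose3

def backbone(clauses, n_vars):
    """Return the set of backbone literals of a satisfiable CNF."""
    with Glucose3(bootstrap_with=clauses) as solver:
        if not solver.solve():
            return None                       # unsatisfiable: no backbone
        candidates = set(solver.get_model())  # literals true in one model
        bb = set()
        for lit in candidates:
            # lit is a backbone literal iff the formula is UNSAT under -lit.
            if solver.solve(assumptions=[-lit]):
                continue                      # a model with -lit exists
            bb.add(lit)
        return bb

# Toy formula: (x1) and (x1 or x2) and (-x2 or x3); only x1 is forced.
print(backbone([[1], [1, 2], [-2, 3]], 3))    # -> {1}
```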

    Learning Computer Programs with the Bayesian Optimization Algorithm

    The hierarchical Bayesian Optimization Algorithm (hBOA) [24, 25] learns bit-strings by constructing explicit centralized models of a population and using them to generate new instances. This thesis is concerned with extending hBOA to learning open-ended program trees. The new system, BOA programming (BOAP), improves on previous probabilistic model building GP systems (PMBGPs) in terms of the expressiveness and open-ended flexibility of the models learned, and hence control over the distribution of individuals generated. BOAP is studied empirically on a toy problem (learning linear functions) in various configurations, and further experimental results are presented for two real-world problems: prediction of sunspot time series, and human gene function inference.
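
    The model-build-and-sample loop underlying such algorithms can be sketched with a much simpler univariate model (UMDA-style independent bit probabilities) on the toy OneMax problem. This is a deliberately stripped-down illustration; hBOA itself learns Bayesian-network models rather than independent bit frequencies.

```python
# Toy sketch of the build-model / sample / select loop behind EDAs.
# Uses an independent (univariate) bit model, i.e. UMDA, on OneMax,
# rather than the Bayesian-network models hBOA actually learns.
import random

def umda_onemax(n_bits=20, pop_size=100, n_select=50, generations=30):
    probs = [0.5] * n_bits                        # initial model
    for _ in range(generations):
        pop = [[int(random.random() < p) for p in probs]
               for _ in range(pop_size)]          # sample from the model
        pop.sort(key=sum, reverse=True)           # OneMax fitness = sum of bits
        elite = pop[:n_select]
        # re-estimate the model from the selected individuals
        probs = [sum(ind[i] for ind in elite) / n_select
                 for i in range(n_bits)]
        if max(map(sum, pop)) == n_bits:
            break
    return max(pop, key=sum)

random.seed(0)
print(umda_onemax())
```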

    Generalized partition crossover for the traveling salesman problem

    The Traveling Salesman Problem (TSP) is a well-studied combinatorial optimization problem with a wide spectrum of applications and theoretical value. We have designed a new recombination operator known as Generalized Partition Crossover (GPX) for the TSP. GPX is unique among recombination operators for the TSP in that recombining two local optima produces new local optima with a high probability. Thus the operator can 'tunnel' between local optima without the need for intermediary solutions. The operator is respectful, meaning that any edges common to the two parent solutions are present in the offspring, and transmits alleles, meaning that offspring are composed only of edges found in the parent solutions. We design a hybrid genetic algorithm, which uses local search in addition to recombination and selection, specifically for GPX. We show that this algorithm outperforms Chained Lin-Kernighan, a state-of-the-art approximation algorithm for the TSP. We next analyze these algorithms to determine why they are not capable of consistently finding a globally optimal solution. Our results reveal a search space structure which we call 'funnels' because they are analogous to the funnels found in continuous optimization. Funnels are clusters of tours in the search space that are separated from one another by a non-trivial distance. We find that funnels can trap Chained Lin-Kernighan, preventing the search from finding an optimal solution. Our data indicate that, under certain conditions, GPX can tunnel between funnels, explaining the higher frequency of optimal solutions produced by our hybrid genetic algorithm using GPX.
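
    The two properties named in the abstract, respect and transmission, reduce to simple set relations on tour edges. The sketch below checks them for hypothetical parent and offspring tours; it is only a property check, not an implementation of GPX.

```python
# Toy sketch of the two recombination properties defined above:
# 'respectful' (offspring keeps all edges common to both parents) and
# 'transmits alleles' (offspring uses only edges found in a parent).
# This checks the properties; it does not implement GPX itself.

def edges(tour):
    n = len(tour)
    return {frozenset((tour[i], tour[(i + 1) % n])) for i in range(n)}

def is_respectful(child, p1, p2):
    return edges(p1) & edges(p2) <= edges(child)

def transmits_alleles(child, p1, p2):
    return edges(child) <= edges(p1) | edges(p2)

p1 = [0, 1, 2, 3, 4, 5]
p2 = [0, 2, 1, 3, 5, 4]
child = [0, 2, 1, 3, 4, 5]   # differs from both parents, mixes their edges
print(is_respectful(child, p1, p2), transmits_alleles(child, p1, p2))  # True True
```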

    Competent Program Evolution, Doctoral Dissertation, December 2006

    Heuristic optimization methods are adaptive when they sample problem solutions based on knowledge of the search space gathered from past sampling. Recently, competent evolutionary optimization methods have been developed that adapt via probabilistic modeling of the search space. However, their effectiveness requires the existence of a compact problem decomposition in terms of prespecified solution parameters. How can we use these techniques to effectively and reliably solve program learning problems, given that program spaces will rarely have compact decompositions? One method is to manually build a problem-specific representation that is more tractable than the general space. But can this process be automated? My thesis is that the properties of programs and program spaces can be leveraged as inductive bias to reduce the burden of manual representation-building, leading to competent program evolution. The central contributions of this dissertation are a synthesis of the requirements for competent program evolution, and the design of a procedure, meta-optimizing semantic evolutionary search (MOSES), that meets these requirements. In support of my thesis, experimental results are provided to analyze and verify the effectiveness of MOSES, demonstrating scalability and real-world applicability.

    Learning to Optimize: from Theory to Practice

    Optimization is at the heart of everyday applications, from finding the fastest route for navigation to designing efficient drugs for diseases. The study of optimization algorithms has focused on developing general approaches that do not adapt to specific problem instances. While they enjoy wide applicability, they forgo the potentially useful information embedded in the structure of an instance. Furthermore, as new optimization problems appear, the algorithm development process relies heavily on domain expertise to identify special properties and design methods to exploit them. Such a design philosophy is labor-intensive and difficult to deploy efficiently to a broad range of domain-specific optimization problems, which are becoming ubiquitous in the pursuit of ever more personalized applications. In this dissertation, we consider different hybrid versions of classical optimization algorithms with data-driven techniques. We aim to equip classical algorithms with the ability to adapt their behavior on the fly based on specific problem instances. A common theme in our approaches is to train the data-driven components on a pre-collected batch of representative problem instances to optimize some performance metric, e.g., wall-clock time. Varying the integration details, we present several approaches to learning data-driven optimization modules for combinatorial optimization problems and study the corresponding fundamental research questions on policy learning. We provide extensive experimental results that showcase the practicality of our methods, which achieve state-of-the-art performance on some classes of problems.
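
    One simple form of this idea is per-instance algorithm selection: use a pre-collected batch of instances, with measured runtimes for each candidate heuristic, to decide which heuristic to run on a new instance. The sketch below is a toy illustration; the features, heuristic names, and timings are hypothetical and not from the dissertation.

```python
# Toy per-instance algorithm selection: pick the heuristic that was fastest
# on the most similar instance in a pre-collected batch. Features,
# heuristic names, and runtimes below are hypothetical.

def nearest_neighbour_selector(train):
    """train: list of (feature_vector, {heuristic_name: runtime})."""
    def select(features):
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        _, runtimes = min(train, key=lambda rec: dist(rec[0], features))
        return min(runtimes, key=runtimes.get)  # fastest on the neighbour
    return select

# Pre-collected batch: (instance features, measured runtimes in seconds).
batch = [
    ((100, 0.1), {"greedy": 0.4, "local_search": 2.1}),
    ((100, 0.9), {"greedy": 3.0, "local_search": 0.8}),
    ((500, 0.2), {"greedy": 1.1, "local_search": 5.0}),
]
select = nearest_neighbour_selector(batch)
print(select((480, 0.15)))   # -> "greedy": nearest recorded instance favours it
```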