    A Survey of Monte Carlo Tree Search Methods

    Monte Carlo tree search (MCTS) is a recently proposed search method that combines the precision of tree search with the generality of random sampling. It has received considerable interest due to its spectacular success in the difficult problem of computer Go, but has also proved beneficial in a range of other domains. This paper is a survey of the literature to date, intended to provide a snapshot of the state of the art after the first five years of MCTS research. We outline the core algorithm's derivation, impart some structure on the many variations and enhancements that have been proposed, and summarize the results from the key game and non-game domains to which MCTS methods have been applied. A number of open research questions indicate that the field is ripe for future work.
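    As an illustration of the core idea the survey covers, the sketch below implements the UCT child-selection rule (mean reward plus an exploration bonus) that most MCTS variants build on. The (total_reward, visit_count) representation of child statistics and the exploration constant are assumptions made for this sketch, not details taken from the survey.

        import math

        def uct_select(children, exploration=math.sqrt(2)):
            """Return the index of the child maximizing the UCT score.

            `children` is a list of (total_reward, visit_count) pairs for one node.
            Unvisited children are tried first; otherwise we balance exploitation
            (mean reward) against exploration (a bonus that shrinks with visits).
            """
            parent_visits = sum(visits for _, visits in children)
            best_index, best_score = -1, float("-inf")
            for i, (reward, visits) in enumerate(children):
                if visits == 0:
                    return i
                score = reward / visits + exploration * math.sqrt(math.log(parent_visits) / visits)
                if score > best_score:
                    best_index, best_score = i, score
            return best_index

        # Example: the second child has a lower mean reward but far fewer visits,
        # so the exploration bonus still favors selecting it.
        print(uct_select([(30.0, 50), (2.0, 5)]))  # -> 1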

    A Multi-Engine Approach to Answer Set Programming

    Answer Set Programming (ASP) is a truly declarative programming paradigm proposed in the area of non-monotonic reasoning and logic programming that has recently been employed in many applications. The development of efficient ASP systems is thus crucial. With the goal of improving the solving methods for ASP, there are two usual ways to proceed: (i) extending state-of-the-art techniques and ASP solvers, or (ii) designing a new ASP solver from scratch. An alternative to these trends is to build on top of state-of-the-art solvers and to apply machine learning techniques for automatically choosing the "best" available solver on a per-instance basis. In this paper we pursue this latter direction. We first define a set of cheap-to-compute syntactic features that characterize several aspects of ASP programs. Then, we apply classification methods that, given the features of the instances in a training set and the solvers' performance on these instances, inductively learn algorithm selection strategies to be applied to a test set. We report the results of a number of experiments considering solvers and different training and test sets of instances taken from the ones submitted to the "System Track" of the 3rd ASP Competition. Our analysis shows that, by applying machine learning techniques to ASP solving, it is possible to obtain very robust performance: our approach can solve more instances than any solver that entered the 3rd ASP Competition. (To appear in Theory and Practice of Logic Programming (TPLP).) Comment: 26 pages, 8 figures.
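    To make the per-instance selection idea concrete, here is a minimal sketch in which the learned model is replaced by a simple nearest-neighbour rule over a handful of made-up syntactic features; the feature values, and the use of 1-NN instead of the classification methods actually evaluated in the paper, are assumptions for illustration only.

        import math

        # Hypothetical training data: one feature vector per ground program
        # (e.g. number of rules, number of atoms, fraction of constraints),
        # paired with the solver that performed best on that instance.
        training = [
            ((12000.0, 3500.0, 0.42), "clasp"),
            ((800.0, 950.0, 0.05), "dlv"),
            ((45000.0, 9000.0, 0.71), "idp"),
        ]

        def select_solver(features, training):
            """Choose the solver that was best on the nearest training instance (1-NN)."""
            def dist(a, b):
                return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
            _, solver = min(training, key=lambda pair: dist(pair[0], features))
            return solver

        # A new instance whose features resemble the first training instance.
        print(select_solver((10000.0, 3000.0, 0.40), training))  # -> clasp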

    IASCAR: Incremental Answer Set Counting by Anytime Refinement

    Answer set programming (ASP) is a popular declarative programming paradigm with various applications. Programs can easily have many answer sets that cannot be enumerated in practice, but counting still allows us to quantify the solution space. If one counts under assumptions on literals, one obtains a tool to comprehend parts of the solution space, so-called answer set navigation. However, navigating through parts of the solution space requires counting many times, which is expensive in theory. Knowledge compilation compiles instances into representations on which counting works in polynomial time. However, these techniques exist only for CNF formulas, and compiling ASP programs into CNF formulas can introduce an exponential overhead. This paper introduces a technique to iteratively count answer sets under assumptions on knowledge compilations of CNFs that encode supported models. Our anytime technique uses the inclusion-exclusion principle to improve bounds by over- and undercounting systematically. In a preliminary empirical analysis, we demonstrate promising results. After compiling the input (offline phase), our approach quickly (re)counts. Comment: Under consideration in Theory and Practice of Logic Programming (TPLP).
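    The inclusion-exclusion idea behind the anytime bounds can be illustrated on plain sets: truncating the inclusion-exclusion sum after an odd number of levels over-counts the size of a union, after an even number it under-counts, and deeper truncations tighten the bound. The sketch below shows only this principle on toy Python sets; it is not IASCAR's construction over knowledge compilations of supported-model encodings.

        from itertools import combinations

        def truncated_inclusion_exclusion(sets, depth):
            """Bonferroni-style bound on the size of the union of `sets`, using
            intersections of at most `depth` sets: odd depth gives an upper
            bound, even depth a lower bound."""
            bound = 0
            for size in range(1, depth + 1):
                sign = 1 if size % 2 == 1 else -1
                for group in combinations(sets, size):
                    bound += sign * len(set.intersection(*group))
            return bound

        sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]       # union has 5 elements
        print(truncated_inclusion_exclusion(sets, 1))  # 9, upper bound
        print(truncated_inclusion_exclusion(sets, 2))  # 4, lower bound
        print(truncated_inclusion_exclusion(sets, 3))  # 5, exact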

    Modeling the Dust Properties of z ~ 6 Quasars with ART^2 -- All-wavelength Radiative Transfer with Adaptive Refinement Tree

    The detection of large quantities of dust in z ~ 6 quasars by infrared and radio surveys presents puzzles for the formation and evolution of dust in these early systems. Previously (Li et al. 2007), we showed that luminous quasars at z > 6 can form through hierarchical mergers of gas-rich galaxies. Here, we calculate the dust properties of simulated quasars and their progenitors using a three-dimensional Monte Carlo radiative transfer code, ART^2 -- All-wavelength Radiative Transfer with Adaptive Refinement Tree. ART^2 incorporates a radiative equilibrium algorithm for dust emission, an adaptive grid for inhomogeneous density, a multiphase model for the ISM, and a supernova-origin dust model. We reproduce the SED and dust properties of SDSS J1148+5251, and find that the infrared emission is closely associated with the formation and evolution of the quasar host. The system evolves from a cold to a warm ULIRG owing to heating and feedback from stars and the AGN. Furthermore, the AGN has significant implications for the interpretation of observations of the hosts. Our results suggest that vigorous star formation in merging progenitors is necessary to reproduce the observed dust properties of z ~ 6 quasars, supporting a merger-driven origin for luminous quasars at high redshifts and the starburst-to-quasar evolutionary hypothesis. (Abridged) Comment: 26 pages, 22 figures, accepted by ApJ. A version with full-resolution images is available at http://www.cfa.harvard.edu/~yxli/ARTDUST/astroph0706.3706.pd

    Spartan Daily, November 9, 1951

    Volume 40, Issue 33
    https://scholarworks.sjsu.edu/spartandaily/11620/thumbnail.jp

    Finding similar or diverse solutions in answer set programming: theory and applications

    For many computational problems, the main concern is to find a best solution (e.g., a most preferred product configuration, a shortest plan, a most parsimonious phylogeny) with respect to some well-described criteria. On the other hand, in many real-world applications, computing a subset of good solutions that are similar/diverse may be desirable for better decision-making. For one thing, the given computational problem may have too many good solutions, and the user may want to examine only a few of them to pick one; in such cases, finding a few similar/diverse good solutions may be useful. Also, in many real-world applications, users take into account further criteria that are not included in the formulation of the optimization problem; in such cases, finding a few good solutions that are close to or distant from a particular set of solutions may be useful. With this motivation, we have studied various computational problems related to finding similar/diverse (resp. close/distant) solutions with respect to a given distance function, in the context of Answer Set Programming (ASP). We have introduced novel offline/online computational methods in ASP to solve such problems, and we have modified an ASP solver according to one of our online methods, providing a useful tool (CLASP-NK) for various ASP applications. We have shown the applicability and effectiveness of our methods and tools in three domains: phylogeny reconstruction, AI planning, and biomedical query answering. Motivated by the promising results, we have developed computational tools to be used by experts in these areas.
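    As a rough illustration of the "diverse solutions with respect to a distance function" problem (offline variant), the sketch below greedily picks k solutions that maximize the minimum pairwise distance; the answer-set representation and the symmetric-difference distance are assumptions made for the example, and this is not the CLASP-NK implementation described above.

        def pick_diverse(solutions, k, distance):
            """Greedy max-min heuristic: repeatedly add the solution farthest
            from everything chosen so far."""
            chosen = [solutions[0]]
            while len(chosen) < k and len(chosen) < len(solutions):
                best = max(
                    (s for s in solutions if s not in chosen),
                    key=lambda s: min(distance(s, c) for c in chosen),
                )
                chosen.append(best)
            return chosen

        # Answer sets as frozensets of atoms; distance = size of symmetric difference.
        answer_sets = [frozenset(s) for s in (["a", "b"], ["a", "c"], ["d", "e"], ["b", "c"])]
        hamming = lambda x, y: len(x ^ y)
        print(pick_diverse(answer_sets, 2, hamming))  # picks {'a','b'} and {'d','e'}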

    Towards Next Generation Sequential and Parallel SAT Solvers

    This thesis focuses on improving SAT solving technology. The improvements address two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced; with Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict-driven clause learning (CDCL) algorithm is extended with three techniques, local look-ahead, local probing and all-UIP learning, that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When these three techniques are applied in the formula simplification tool Coprocessor before Riss is used to solve a formula, the performance improves further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is the extended clause sharing level based clause tagging, which forms the basis for conflict-driven node killing; the latter makes it possible to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead into a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture; hence, iterative partitioning is interesting for future parallel SAT solvers. The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance, and Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt Gödel Medals.
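    For readers less familiar with the baseline that CDCL extends, here is a minimal DPLL-style solver with unit propagation in plain Python. It deliberately omits clause learning, non-chronological backtracking and all of the techniques studied in the thesis, and the clause encoding (signed integers, DIMACS-like) is an assumption made for this sketch; it is not the Riss solver.

        def unit_propagate(clauses, assignment):
            """Assign literals forced by unit clauses until a fixpoint.
            Returns the extended assignment, or None on a falsified clause."""
            assignment = dict(assignment)
            changed = True
            while changed:
                changed = False
                for clause in clauses:
                    unassigned, satisfied = [], False
                    for lit in clause:
                        var, val = abs(lit), lit > 0
                        if var in assignment:
                            if assignment[var] == val:
                                satisfied = True
                                break
                        else:
                            unassigned.append(lit)
                    if satisfied:
                        continue
                    if not unassigned:
                        return None                      # conflict: clause falsified
                    if len(unassigned) == 1:             # unit clause forces a value
                        assignment[abs(unassigned[0])] = unassigned[0] > 0
                        changed = True
            return assignment

        def dpll(clauses, assignment=None):
            """Return a satisfying assignment (var -> bool) or None if unsatisfiable."""
            assignment = unit_propagate(clauses, assignment or {})
            if assignment is None:
                return None
            free = {abs(l) for c in clauses for l in c} - set(assignment)
            if not free:
                return assignment
            var = min(free)                              # naive decision heuristic
            for value in (True, False):
                result = dpll(clauses, {**assignment, var: value})
                if result is not None:
                    return result
            return None

        # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
        print(dpll([[1, 2], [-1, 3], [-2, -3]]))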