2,021 research outputs found

    Random Costs in Combinatorial Optimization

    Full text link
The random cost problem is the problem of finding the minimum in an exponentially long list of random numbers. By definition, this problem cannot be solved faster than by exhaustive search. It is shown that a classical NP-hard optimization problem, number partitioning, is essentially equivalent to the random cost problem. This explains the bad performance of heuristic approaches to the number partitioning problem and allows us to calculate the probability distributions of the optimum and sub-optimum costs. Comment: 4 pages, RevTeX, 2 figures (eps), submitted to PR
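    The claimed equivalence to exhaustive search can be made concrete with a brute-force sketch (a toy Python illustration, not the paper's method): finding the optimal partition of n numbers means scanning all 2^(n-1) sign assignments, exactly like scanning an exponentially long list of random costs for its minimum.

```python
import itertools
import random

def best_partition_cost(numbers):
    """Exhaustively search all 2^(n-1) signed partitions and return the
    minimum absolute difference between the two subset sums."""
    n = len(numbers)
    best = sum(numbers)
    # Fix the sign of the first element to halve the search space.
    for signs in itertools.product((1, -1), repeat=n - 1):
        diff = numbers[0] + sum(s * x for s, x in zip(signs, numbers[1:]))
        best = min(best, abs(diff))
    return best

# An exponentially long "list of costs" scanned for its minimum:
random.seed(0)
nums = [random.randrange(1, 2**20) for _ in range(16)]
print(best_partition_cost(nums))
```

    Each of the 2^15 sign vectors plays the role of one entry in the random-cost list; no heuristic ordering helps if those entries behave like independent random numbers.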

    Phase Transition in the Number Partitioning Problem

    Full text link
Number partitioning is an NP-complete problem of combinatorial optimization. A statistical mechanics analysis reveals the existence of a phase transition that separates the easy from the hard to solve instances and that reflects the pseudo-polynomiality of number partitioning. The phase diagram and the value of the typical ground state energy are calculated. Comment: minor changes (references, typos and discussion of results)
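    The easy/hard transition can be probed empirically with a small sketch (illustrative only; taking the control parameter to be the ratio of bits per number to the number of items, in the spirit of the statistical-mechanics analysis): the fraction of random instances admitting a perfect partition drops sharply as that ratio crosses one.

```python
import itertools
import random

def has_perfect_partition(numbers):
    """True if some subset splits the total into two halves that differ
    by at most 1 (a 'perfect' partition)."""
    total = sum(numbers)
    for r in range(len(numbers) + 1):
        for subset in itertools.combinations(numbers, r):
            if abs(total - 2 * sum(subset)) <= 1:
                return True
    return False

def perfect_fraction(n, bits, trials=200, seed=1):
    """Fraction of random b-bit instances with a perfect partition."""
    rng = random.Random(seed)
    hits = sum(
        has_perfect_partition([rng.randrange(1, 2**bits) for _ in range(n)])
        for _ in range(trials)
    )
    return hits / trials

# Easy phase (few bits per number): perfect partitions abound.
# Hard phase (many bits per number): typically none exist.
for bits in (4, 12, 20):
    print(f"bits/n = {bits/12:.2f}: P(perfect) ~ {perfect_fraction(12, bits):.2f}")
```

    With 12 numbers, 4-bit instances almost always admit a perfect partition, while 20-bit instances almost never do; the crossover mirrors the phase transition described above.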

    Phase transition for cutting-plane approach to vertex-cover problem

    Full text link
We study the vertex-cover problem, which is an NP-hard optimization problem and a prototypical model exhibiting phase transitions on random graphs, e.g., Erdős-Rényi (ER) random graphs. These phase transitions coincide with changes of the solution space structure, e.g., for the ER ensemble at connectivity c=e=2.7183 from replica symmetric to replica-symmetry broken. For the vertex-cover problem, the typical complexity of exact branch-and-bound algorithms, which proceed by exploring the landscape of feasible configurations, also changes close to this phase transition from "easy" to "hard". In this work, we consider an algorithm which has a completely different strategy: the problem is mapped onto a linear programming problem augmented by a cutting-plane approach, hence the algorithm operates in a space OUTSIDE the space of feasible configurations until the final step, where a solution is found. Here we show that this type of algorithm also exhibits an "easy-hard" transition around c=e, which strongly indicates that the typical hardness of a problem is fundamental to the problem and not due to a specific representation of the problem. Comment: 4 pages, 3 figures
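    For intuition, the "feasible-configuration" baseline that the cutting-plane method avoids can be sketched as exhaustive search for a minimum vertex cover on a small ER graph (a toy Python sketch, not the paper's branch-and-bound or LP implementation):

```python
import itertools
import random

def min_vertex_cover_size(n, edges):
    """Exact minimum vertex cover by exhaustive search over vertex
    subsets, i.e. search inside the space of feasible configurations."""
    for k in range(n + 1):
        for cover in itertools.combinations(range(n), k):
            s = set(cover)
            if all(u in s or v in s for u, v in edges):
                return k
    return n

def er_graph(n, c, seed=0):
    """Erdős-Rényi G(n, p) with mean degree c, i.e. p = c / (n - 1)."""
    rng = random.Random(seed)
    p = c / (n - 1)
    return [(u, v) for u in range(n) for v in range(u + 1, n)
            if rng.random() < p]

# A small instance near the critical connectivity c = e:
edges = er_graph(14, 2.7183)
print(min_vertex_cover_size(14, edges))
```

    The cutting-plane algorithm of the abstract instead relaxes the integrality constraints and tightens the LP polytope from outside, touching feasible configurations only at the final step.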

    Using small MUSes to explain how to solve pen and paper puzzles

    Get PDF
Pen and paper puzzles like Sudoku, Futoshiki and Skyscrapers are hugely popular. Solving such puzzles can be a trivial task for modern AI systems. However, most AI systems solve problems using a form of backtracking, while people try to avoid backtracking as much as possible. This means that existing AI systems do not output explanations about their reasoning that are meaningful to people. We present Demystify, a tool which allows puzzles to be expressed in a high-level constraint programming language and uses MUSes to allow us to produce descriptions of steps in the puzzle solving. We give several improvements to the existing techniques for solving puzzles with MUSes, which allow us to solve a range of significantly more complex puzzles and give higher quality explanations. We demonstrate the effectiveness and generality of Demystify by comparing its results to documented strategies for solving a range of pen and paper puzzles by hand, showing that our technique can find many of the same explanations. Publisher PDF
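    The core MUS step can be sketched generically (a minimal deletion-based MUS extractor over toy lambda constraints; Demystify's actual pipeline uses a constraint solver, and these constraints are illustrative assumptions): a Minimal Unsatisfiable Subset names exactly the clues needed to justify one deduction.

```python
from itertools import product

def satisfiable(constraints, domain, n_vars):
    """Brute-force check: does any assignment satisfy all constraints?"""
    return any(
        all(c(a) for c in constraints)
        for a in product(domain, repeat=n_vars)
    )

def deletion_mus(constraints, domain, n_vars):
    """Deletion-based MUS extraction: drop every constraint that is not
    needed to keep the set unsatisfiable."""
    core = list(constraints)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not satisfiable(rest, domain, n_vars):
            core = rest
    return core

# Toy "puzzle" over two cells, each with domain 1..3:
cs = [
    lambda a: a[0] != a[1],  # the two cells must differ
    lambda a: a[0] == 1,     # cell 0 is clued as 1
    lambda a: a[1] == 1,     # cell 1 is clued as 1
    lambda a: a[0] <= 3,     # always true, irrelevant to the conflict
]
mus = deletion_mus(cs, (1, 2, 3), 2)
print(len(mus))  # the conflict needs only 3 of the 4 constraints
```

    A human-readable explanation then cites just the three constraints in the MUS, rather than the whole puzzle.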

    Observational constraints on the origin of the elements. V. Non-LTE abundance ratios of [Ni/Fe] in Galactic stars and enrichment by sub-Chandrasekhar mass SNe

    Full text link
We constrain the role of different SN Ia channels in the chemical enrichment of the Galaxy by studying the abundances of nickel in Galactic stars. We investigate four different SN Ia sub-classes, including the classical single-degenerate near-Chandrasekhar mass SN Ia, the fainter SN Iax systems associated with He accretion from the companion, as well as two sub-Ch mass SN Ia channels. The latter include the double-detonation of a white dwarf accreting helium-rich matter and violent white dwarf mergers. NLTE models of Fe and Ni are used in the abundance analysis. In the GCE models, we include new delay time distributions arising from the different SN Ia channels, as well as recent yields for core-collapse supernovae and AGB stars. The data-model comparison is performed using a Markov chain Monte Carlo framework that allows us to explore the entire parameter space allowed by the diversity of explosion mechanisms and the Galactic SN Ia rate, taking into account the uncertainties of the observed data. We show that NLTE effects have a non-negligible impact on the observed [Ni/Fe] ratios in the Galactic stars. The NLTE corrections to Ni abundances are not large, but strictly positive, lifting the [Ni/Fe] ratios by ~+0.15 dex at [Fe/H] = -2. We find that the distributions of [Ni/Fe] in LTE and in NLTE are very tight, with a scatter of < 0.1 dex at all metallicities, supporting earlier work. In LTE, most stars have scaled-solar Ni abundances, [Ni/Fe] = 0, with a slight tendency for sub-solar [Ni/Fe] ratios at lower [Fe/H]. In NLTE, however, we find a mild anti-correlation between [Ni/Fe] and metallicity, and slightly elevated [Ni/Fe] ratios at [Fe/H] < -1.0. The NLTE data can be explained by the GCE models calculated with a substantial, ~ 75%, fraction of sub-Ch SN Ia. Comment: accepted for publication in Astronomy & Astrophysics, abridged version of the abstract
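    The data-model comparison step can be sketched with a generic random-walk Metropolis sampler (a toy model with made-up numbers, not the paper's GCE code; the "observations" and noise level below are purely illustrative assumptions):

```python
import math
import random

def metropolis(log_post, x0, n_steps=5000, step=0.1, seed=2):
    """Random-walk Metropolis: draw correlated samples from a posterior
    given only its log-density up to a constant."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        lq = log_post(y)
        # Accept with probability min(1, exp(lq - lp)).
        if math.log(rng.random()) < lq - lp:
            x, lp = y, lq
        samples.append(x)
    return samples

# Toy model: posterior of a sub-Chandrasekhar fraction f in [0, 1]
# given Gaussian-noise "observations" (illustrative numbers only).
obs, sigma = [0.72, 0.78, 0.75], 0.05

def log_post(f):
    if not 0.0 <= f <= 1.0:
        return -math.inf  # flat prior restricted to [0, 1]
    return -sum((o - f) ** 2 for o in obs) / (2 * sigma**2)

samples = metropolis(log_post, 0.5)
burned = samples[2500:]  # discard burn-in
print(sum(burned) / len(burned))
```

    The paper's framework does the same kind of exploration over the joint space of channel fractions and rate parameters, with the observed [Ni/Fe] distribution in place of this toy likelihood.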

    CONJURE: automatic generation of constraint models from problem specifications

    Get PDF
Funding: Engineering and Physical Sciences Research Council (EP/V027182/1, EP/P015638/1), Royal Society (URF/R/180015). When solving a combinatorial problem, the formulation or model of the problem is critical to the efficiency of the solver. Automating the modelling process has long been of interest because of the expertise and time required to produce an effective model of a given problem. We describe a method to automatically produce constraint models from a problem specification written in the abstract constraint specification language Essence. Our approach is to incrementally refine the specification into a concrete model by applying a chosen refinement rule at each step. Any nontrivial specification may be refined in multiple ways, creating a space of models to choose from. The handling of symmetries is a particularly important aspect of automated modelling. Many combinatorial optimisation problems contain symmetry, which can lead to redundant search. If a partial assignment is shown to be invalid, we are wasting time if we ever consider a symmetric equivalent of it. A particularly important class of symmetries are those introduced by the constraint modelling process: modelling symmetries. We show how modelling symmetries may be broken automatically as they enter a model during refinement, obviating the need for an expensive symmetry detection step following model formulation. Our approach is implemented in a system called Conjure. We compare the models produced by Conjure to constraint models from the literature that are known to be effective. Our empirical results confirm that Conjure can successfully reproduce the kernels of the constraint models of 42 benchmark problems found in the literature. Publisher PDF. Peer reviewed.

    Towards generic explanations for pen and paper puzzles with MUSes

    Get PDF
This research was supported by the Royal Society (URF\R\180015). Pen and paper puzzles like Sudoku, Futoshiki and Star Battle are hugely popular. Solving such puzzles can be a trivial task for modern AI systems. However, most AI systems solve problems using a form of backtracking, while people try to avoid backtracking as much as possible. This means that existing AI systems do not output explanations about their reasoning that are meaningful to people. We present Demystify, a tool which allows puzzles to be expressed in a high-level constraint programming language and uses MUSes to allow us to produce descriptions of steps in the puzzle solving. We give several improvements to the existing techniques for solving puzzles with MUSes, which allow us to solve a range of significantly more complex puzzles and give higher quality explanations. We demonstrate the effectiveness and generality of Demystify by comparing its results to documented strategies for solving a range of pen and paper puzzles by hand, showing that our technique can find many of the same explanations. Publisher PDF

    New scaling for the alpha effect in slowly rotating turbulence

    Full text link
Using simulations of slowly rotating stratified turbulence, we show that the alpha effect responsible for the generation of astrophysical magnetic fields is proportional to the logarithmic gradient of kinetic energy density rather than that of momentum, as was previously thought. This result is in agreement with a new analytic theory developed in this paper for large Reynolds numbers. Thus, the contribution of density stratification is less important than that of turbulent velocity. The alpha effect and other turbulent transport coefficients are determined by means of the test-field method. In addition to forced turbulence, we also investigate supernova-driven turbulence and stellar convection. In some cases (intermediate rotation rate for forced turbulence, convection with intermediate temperature stratification, and supernova-driven turbulence) we find that the contribution of density stratification might be even less important than suggested by the analytic theory. Comment: 10 pages, 9 figures, revised version, Astrophys. J., in press

    Using Small MUSes to Explain How to Solve Pen and Paper Puzzles

    Full text link
In this paper, we present Demystify, a general tool for creating human-interpretable step-by-step explanations of how to solve a wide range of pen and paper puzzles from a high-level logical description. Demystify is based on Minimal Unsatisfiable Subsets (MUSes), which allow Demystify to solve puzzles as a series of logical deductions by identifying which parts of the puzzle are required to progress. This paper makes three contributions over previous work. First, we provide a generic input language, based on the Essence constraint language, which allows us to easily use MUSes to solve a much wider range of pen and paper puzzles. Second, we demonstrate that the explanations that Demystify produces match those provided by humans by comparing our results with those provided independently by puzzle experts on a range of puzzles. We compare Demystify to published guides for solving a range of different pen and paper puzzles and show that by using MUSes, Demystify produces solving strategies which closely match human-produced guides to solving those same puzzles (on average 89% of the time). Finally, we introduce a new randomised algorithm to find MUSes for more difficult puzzles. This algorithm is focused on optimised search for individual small MUSes.
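    The randomised small-MUS idea can be sketched generically (illustrative Python over toy constraints, not Demystify's optimised algorithm): a single deletion pass always returns some minimal core, but which one depends on the order in which constraints are tested, so restarting with shuffled orders and keeping the smallest core found tends to yield small MUSes.

```python
import random
from itertools import product

def satisfiable(cs, domain, n_vars):
    """Brute-force check: does any assignment satisfy all constraints?"""
    return any(all(c(a) for c in cs) for a in product(domain, repeat=n_vars))

def shrink(cs, domain, n_vars):
    """One deletion pass: drop constraints not needed for unsatisfiability."""
    core = list(cs)
    for c in list(core):
        rest = [d for d in core if d is not c]
        if not satisfiable(rest, domain, n_vars):
            core = rest
    return core

def small_mus(cs, domain, n_vars, restarts=20, seed=3):
    """Randomised small-MUS search: shuffle the deletion order and keep
    the smallest minimal core found across restarts."""
    rng = random.Random(seed)
    best = list(cs)
    for _ in range(restarts):
        order = list(cs)
        rng.shuffle(order)
        core = shrink(order, domain, n_vars)
        if len(core) < len(best):
            best = core
    return best

# Two independent conflicts of different sizes over cells with domain 1..3:
cs = [
    lambda a: a[0] == 1, lambda a: a[0] == 2,             # size-2 conflict
    lambda a: a[1] != a[2], lambda a: a[1] == 3,
    lambda a: a[2] == 3,                                  # size-3 conflict
]
print(len(small_mus(cs, (1, 2, 3), 3)))
```

    With the deterministic ordering, the deletion pass happens to break the size-2 conflict first and returns the size-3 MUS; random restarts reliably recover the smaller, more explainable core.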