
    The Natural Bias of Artificial Instances

    Many exact and metaheuristic algorithms presented in the literature are tested by comparing their performance on different sets of instances. However, it is known that when these sets of instances are generated randomly, they do not necessarily exhibit the features the authors believe they do, which implies that wrong conclusions may have been drawn. In this paper, we reinforce the importance of analyzing randomly generated instances obtained by sampling the problem coefficients uniformly at random. We generate instances of the Unconstrained Binary Quadratic Problem and the Number Partitioning Problem, and in both cases we verify that the generated set of instances does not represent a uniform set of instances of the problem. We have conducted several experiments to quantify the number of different rankings of solutions that the problems can generate, and we have classified those rankings according to how often each ranking is sampled, how many local optimal solutions each ranking has, and how similar the rankings are.
    Funding: PID2019-104966GB-I00, PID2019-104933GB-I00 and PID2019-106453GA-I00, funded by MCIN/AEI/10.13039/501100011033; Basque Government through the BERC 2022–2025 programme, IT1504-22 and IT1494-22; UPV/EHU through GIU20/054; PRE_2021_2_022.
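    The instance-generation scheme the abstract describes, sampling coefficients uniformly at random and then examining the ranking of solutions the instance induces, can be illustrated with a minimal sketch for the Number Partitioning Problem. The function names and the coefficient range are illustrative assumptions, not taken from the paper:

    ```python
    import itertools
    import random

    def random_npp_instance(n, max_coef=100):
        # Sample NPP coefficients uniformly at random (illustrative range;
        # the paper's exact sampling domain is not specified here).
        return [random.randint(1, max_coef) for _ in range(n)]

    def solution_ranking(instance):
        # NPP objective: absolute difference of the two subset sums.
        # Enumerate all 2^n bipartitions (bit b picks the subset for each
        # coefficient) and rank them from best to worst objective value.
        scores = []
        for bits in itertools.product([0, 1], repeat=len(instance)):
            diff = sum(c if b else -c for c, b in zip(instance, bits))
            scores.append((abs(diff), bits))
        scores.sort(key=lambda t: t[0])
        return [bits for _, bits in scores]

    random.seed(0)
    inst = random_npp_instance(4)
    ranking = solution_ranking(inst)
    print(len(ranking))  # 2**4 = 16 bipartitions
    ```

    Repeating this over many sampled instances and counting how often each distinct ranking appears is one way to probe, on a small scale, the non-uniformity over rankings that the paper quantifies.
    
    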

    Hide and Seek: Scaling Machine Learning for Combinatorial Optimization via the Probabilistic Method

    Applying deep learning to solve real-life instances of hard combinatorial problems has tremendous potential. Research in this direction has focused on the Boolean satisfiability (SAT) problem, because of both its theoretical centrality and its practical importance. A major roadblock, though, is that training sets are restricted to random formulas several orders of magnitude smaller than formulas of practical interest, raising serious concerns about generalization. This is because labeling random formulas of increasing size rapidly becomes intractable. By exploiting the probabilistic method in a fundamental way, we remove this roadblock entirely: we show how to generate correctly labeled random formulas of any desired size, without having to solve the underlying decision problem. Moreover, the difficulty of the classification task for the formulas produced by our generator is tunable by varying a single scalar parameter. This opens up an entirely new level of sophistication for the machine learning methods that can be brought to bear on satisfiability. Using our generator, we train existing state-of-the-art models for the task of predicting satisfiability on formulas with 10,000 variables, and we find that they do no better than random guessing. As a first indication of what can be achieved with the new generator, we present a novel classifier that performs significantly better than random guessing, 99% on the same datasets, for most difficulty levels. Crucially, unlike past approaches that learn from syntactic features of a formula, our classifier performs its learning on a short prefix of a solver's computation, an approach we expect to be of independent interest.
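    To see why labeled training data is the bottleneck the abstract describes, it helps to look at the classic workaround: planting a satisfying assignment, which yields SAT-labeled formulas of any size without solving anything. The sketch below shows that approach; it is emphatically not the paper's probabilistic-method generator (planting introduces exactly the kind of distributional bias such work aims to avoid), and all names in it are hypothetical:

    ```python
    import random

    def planted_3sat(n_vars, n_clauses, seed=None):
        # Classic planted-assignment generator: fix a hidden assignment and
        # emit only clauses it satisfies, so the formula is SAT by construction.
        # NOTE: this is NOT the generator of the paper above; conditioning on
        # the hidden assignment skews the clause distribution.
        rng = random.Random(seed)
        hidden = [rng.choice([True, False]) for _ in range(n_vars)]
        clauses = []
        while len(clauses) < n_clauses:
            vs = rng.sample(range(n_vars), 3)
            # a literal is a (variable, polarity) pair
            clause = [(v, rng.choice([True, False])) for v in vs]
            # keep the clause only if the hidden assignment satisfies it
            if any(hidden[v] == polarity for v, polarity in clause):
                clauses.append(clause)
        return hidden, clauses

    hidden, clauses = planted_3sat(20, 80, seed=1)
    # every emitted clause is satisfied by the hidden assignment
    print(all(any(hidden[v] == p for v, p in c) for c in clauses))  # True
    ```

    A generator like this only ever produces satisfiable formulas from a biased distribution, which is why removing the need for planting or solving, as the paper claims to do, matters for training set quality.
    
    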

    Design and analysis of provably secure pseudorandom generators
