    How Good Is Neural Combinatorial Optimization?

    Traditional solvers for tackling combinatorial optimization (CO) problems are usually designed by human experts. Recently, there has been a surge of interest in using Deep Learning, especially Deep Reinforcement Learning, to automatically learn effective solvers for CO. The resulting paradigm is termed Neural Combinatorial Optimization (NCO). However, the advantages and disadvantages of NCO over other approaches have not been well studied, empirically or theoretically. In this work, we present a comprehensive comparative study of NCO solvers and alternative solvers. Specifically, taking the Traveling Salesman Problem as the testbed, we assess the performance of the solvers in terms of five aspects: effectiveness, efficiency, stability, scalability, and generalization ability. Our results show that, in general, the solvers learned by NCO approaches still fall short of traditional solvers in nearly all of these aspects. A potential benefit of the former is their superior time and energy efficiency on small problem instances when sufficient training instances are available. We hope this work will help readers better understand the strengths and weaknesses of NCO, and that it provides a comprehensive evaluation protocol for further benchmarking NCO approaches against other approaches.
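
    As a rough illustration of the kind of evaluation protocol described, the sketch below scores a classic construction heuristic on random Euclidean TSP instances. The nearest-neighbour baseline and the scoring loop are illustrative assumptions, not the paper's actual benchmark; an NCO solver would be compared against such baselines (and exact solvers) on tour-length gap, runtime, stability across seeds, scalability, and generalization.

        # Minimal sketch, assuming a nearest-neighbour heuristic as the
        # "traditional" baseline; not the authors' actual protocol.
        import math
        import random

        def tour_length(tour, pts):
            """Total Euclidean length of a closed tour over 2-D points."""
            return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
                       for i in range(len(tour)))

        def nearest_neighbour(pts):
            """Classic greedy construction heuristic for the TSP."""
            unvisited = set(range(1, len(pts)))
            tour = [0]
            while unvisited:
                last = tour[-1]
                nxt = min(unvisited, key=lambda j: math.dist(pts[last], pts[j]))
                tour.append(nxt)
                unvisited.remove(nxt)
            return tour

        random.seed(0)
        points = [(random.random(), random.random()) for _ in range(50)]
        baseline = tour_length(nearest_neighbour(points), points)
        print(f"nearest-neighbour tour length: {baseline:.3f}")
        # An NCO solver's tour length on the same instances would be scored
        # by its gap to this baseline, alongside time and energy costs.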

    Meta-Search Through the Space of Representations and Heuristics on a Problem by Problem Basis

    Two key aspects of problem solving are representation and search heuristics. Both theoretical and experimental studies have shown that there is no single best problem representation nor a single best search heuristic. Therefore, some recent methods, e.g., portfolios, learn a good combination of problem solvers to be used in a given domain or set of domains. There are even dynamic portfolios that select a particular combination of problem solvers specific to a problem. These approaches: (1) need to perform a learning step; (2) do not usually focus on changing the representation of the input domain/problem; and (3) frequently do not adapt the portfolio to the specific problem. This paper describes a meta-reasoning system that searches through the space of combinations of representations and heuristics to find one suitable for optimally solving the specific problem. We show that this approach can outperform selecting a single combination to use for all problems within a domain and is competitive with state-of-the-art optimal planners.
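
    A minimal sketch of the per-problem meta-search idea follows. The representation and heuristic names and the solve() stub are hypothetical placeholders (the abstract does not give the planner interface); solve() is simulated here so the sketch runs end to end.

        # Hedged sketch of per-problem meta-search over (representation,
        # heuristic) pairs; all names below are illustrative assumptions.
        import itertools
        import random
        import time

        REPRESENTATIONS = ["original", "reformulated"]   # alternative encodings
        HEURISTICS = ["h_ff", "h_lmcut", "h_add"]        # planning heuristics

        def solve(problem, representation, heuristic, budget_s):
            # Placeholder for running one solver configuration under a time
            # budget; simulated so the sketch is executable.
            rng = random.Random(hash((problem, representation, heuristic)))
            return rng.uniform(10, 20), rng.random() < 0.8   # (cost, solved?)

        def meta_search(problem, budget_s=1.0):
            """Pick the combination best suited to this specific problem."""
            best = None
            for rep, heur in itertools.product(REPRESENTATIONS, HEURISTICS):
                t0 = time.perf_counter()
                cost, solved = solve(problem, rep, heur, budget_s)
                elapsed = time.perf_counter() - t0
                if solved and (best is None or cost < best[0]):
                    best = (cost, rep, heur, elapsed)
            return best   # chosen per problem, with no offline learning step

        print(meta_search("blocksworld-p01"))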

    Using deep learning to construct stochastic local search SAT solvers with performance bounds

    The Boolean satisfiability problem (SAT) is the prototypical NP-complete problem and is of great practical relevance. One important class of solvers for this problem are stochastic local search (SLS) algorithms, which iteratively and randomly update a candidate assignment. Recent breakthrough results in theoretical computer science have established sufficient conditions under which SLS solvers are guaranteed to efficiently solve a SAT instance, provided they have access to suitable "oracles" that provide samples from an instance-specific distribution, exploiting the instance's local structure. Motivated by these results and the well-established ability of neural networks to learn common structure in large datasets, in this work we train oracles using Graph Neural Networks and evaluate them with two SLS solvers on random SAT instances of varying difficulty. We find that access to GNN-based oracles significantly boosts the performance of both solvers, allowing them, on average, to solve 17% more difficult instances (as measured by the clause-to-variable ratio) and to do so in 35% fewer steps, with improvements in the median number of steps of up to a factor of 8. As such, this work bridges formal results from theoretical computer science and practically motivated research on deep learning for constraint satisfaction problems, and establishes the promise of purpose-trained SAT solvers with performance guarantees.
    Comment: 15 pages, 9 figures; code available at https://github.com/porscheofficial/sls_sat_solving_with_deep_learnin
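
    The following sketch shows the shape of an oracle-guided SLS loop in this spirit: a WalkSAT-style solver whose variable-flip distribution is supplied by an oracle. The uniform oracle here is a placeholder standing in for the trained GNN; this is not the authors' implementation.

        # Sketch of oracle-guided stochastic local search for SAT; the
        # uniform oracle is an assumed stand-in for a trained GNN.
        import random

        def sls_solve(clauses, n_vars, oracle, max_steps=10_000, seed=0):
            """clauses: list of clauses; each clause is a list of nonzero
            ints, where literal v means variable v is True and -v False."""
            rng = random.Random(seed)
            assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # 0 unused
            for step in range(max_steps):
                unsat = [c for c in clauses
                         if not any(assign[abs(l)] == (l > 0) for l in c)]
                if not unsat:
                    return assign, step
                clause = rng.choice(unsat)
                # The oracle maps the current state to a flip distribution
                # over the clause's variables; a trained GNN would exploit
                # instance structure here.
                weights = oracle(clause, assign)
                var = abs(rng.choices(clause, weights=weights, k=1)[0])
                assign[var] = not assign[var]
            return None, max_steps

        uniform_oracle = lambda clause, assign: [1.0] * len(clause)
        # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
        model, steps = sls_solve([[1, -2], [2, 3], [-1, -3]], 3, uniform_oracle)
        print("solved" if model else "gave up", "after", steps, "steps")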

    Using deep generative neural networks to account for model errors in Markov chain Monte Carlo inversion

    Most geophysical inverse problems are non-linear and rely upon numerical forward solvers involving discretization and simplified representations of the underlying physics. As a result, forward modelling errors are inevitable. In practice, such model errors tend to be either completely ignored, which leads to biased and over-confident inversion results, or only partly accounted for using restrictive Gaussian assumptions. Here, we rely on deep generative neural networks to learn problem-specific, low-dimensional probabilistic representations of the discrepancy between high-fidelity and low-fidelity forward solvers. These representations are then used to probabilistically invert for the model error jointly with the target geophysical property field, using the computationally cheap low-fidelity forward solver. To this end, we combine a Markov chain Monte Carlo (MCMC) inversion algorithm with a trained convolutional neural network of the spatial generative adversarial network (SGAN) type, whereby at each MCMC step the simulated low-fidelity forward response is corrected using a proposed model-error realization. Considering the crosshole ground-penetrating radar traveltime tomography inverse problem, we train SGAN networks on traveltime discrepancy images between: (1) curved-ray (high-fidelity) and straight-ray (low-fidelity) forward solvers; and (2) finite-difference time-domain (high-fidelity) and straight-ray (low-fidelity) forward solvers. We demonstrate that the SGAN is able to learn the spatial statistics of the model error and that suitable representations of both the subsurface model and the model error can be recovered by MCMC. Compared with inversion results obtained when model errors are either ignored or approximated by a Gaussian distribution, our method has lower posterior parameter bias and better explains the observed traveltime data. It is most advantageous when high-fidelity forward solvers involve heavy computational costs and the Gaussian assumption of model errors is inappropriate. Unstable MCMC convergence due to non-linearities introduced by our method remains a challenge to be addressed in future work.
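
    A toy sketch of the corrected MCMC step described above is given below: at each step, the cheap low-fidelity forward response is corrected by a proposed model-error realization before the likelihood is evaluated. The forward solver, generator, and likelihood here are stand-ins chosen for illustration, not the paper's SGAN or physics.

        # Toy Metropolis step with generative model-error correction;
        # low_fidelity_forward and sample_model_error are assumed stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)

        def low_fidelity_forward(m):
            # Stand-in for a cheap (e.g. straight-ray) traveltime solver.
            return m.sum(axis=0)

        def sample_model_error(z):
            # Stand-in for a trained generator G(z) producing a
            # traveltime-discrepancy realization from latent variables z.
            return 0.1 * np.tanh(z)

        def log_likelihood(d_obs, d_sim, sigma=0.05):
            return -0.5 * np.sum(((d_obs - d_sim) / sigma) ** 2)

        def mcmc_step(m, z, d_obs, step=0.05):
            """One Metropolis step jointly updating the property field m
            and the model-error latent z."""
            m_prop = m + step * rng.standard_normal(m.shape)
            z_prop = z + step * rng.standard_normal(z.shape)
            d_prop = low_fidelity_forward(m_prop) + sample_model_error(z_prop)
            d_curr = low_fidelity_forward(m) + sample_model_error(z)
            log_alpha = (log_likelihood(d_obs, d_prop)
                         - log_likelihood(d_obs, d_curr))
            if np.log(rng.uniform()) < log_alpha:
                return m_prop, z_prop   # accept the joint proposal
            return m, z

        m, z = rng.standard_normal((8, 8)), rng.standard_normal(8)
        d_obs = low_fidelity_forward(rng.standard_normal((8, 8)))
        for _ in range(200):
            m, z = mcmc_step(m, z, d_obs)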