
    Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling

    Copyright © 2000 IEEE. This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, an NP-complete constraint satisfaction problem. The proposed neural network can be constructed easily and can adaptively adjust its connection weights and unit biases according to the sequence and resource constraints of the job-shop scheduling problem during processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the solutions obtained. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to solution quality and solving speed. This work was supported by the Chinese National Natural Science Foundation under Grant 69684005, the Chinese National High-Tech Program under Grant 863-511-9609-003, and the EPSRC under Grant GR/L81468.
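    To make the constraint-repair idea concrete, here is a minimal, stdlib-only Python sketch. It is not the authors' adaptive neural network; the job/operation data structures and the simple push-later repair rule are illustrative assumptions. Tentative start times are repeatedly adjusted until no sequence or resource constraint is violated.

    from collections import defaultdict

    def repair_schedule(jobs, start, max_iters=10_000):
        """jobs: list of job routes; each route is a list of (op_id, machine, duration).
        start: dict mapping op_id to a tentative start time (modified in place)."""
        for _ in range(max_iters):
            violated = False
            # Sequence constraints: each operation starts after its job predecessor ends.
            for route in jobs:
                for (a, _, dur_a), (b, _, _) in zip(route, route[1:]):
                    if start[b] < start[a] + dur_a:
                        start[b] = start[a] + dur_a
                        violated = True
            # Resource constraints: operations on the same machine must not overlap.
            by_machine = defaultdict(list)
            for route in jobs:
                for op, machine, dur in route:
                    by_machine[machine].append((op, dur))
            for ops in by_machine.values():
                ops.sort(key=lambda item: start[item[0]])
                for (a, dur_a), (b, _) in zip(ops, ops[1:]):
                    if start[b] < start[a] + dur_a:
                        start[b] = start[a] + dur_a
                        violated = True
            if not violated:
                return start        # all sequence and resource constraints hold
        return None                 # no feasible repair found within max_iters

    jobs = [[("j0_op0", "M1", 3), ("j0_op1", "M2", 2)],
            [("j1_op0", "M2", 4), ("j1_op1", "M1", 1)]]
    print(repair_schedule(jobs, {op: 0 for route in jobs for op, _, _ in route}))

    In the paper's combined approach, the feasibility search is carried out by the adaptive neural network itself, with the heuristics layered on top to improve solution quality.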

    Random Max-CSPs Inherit Algorithmic Hardness from Spin Glasses

    We study random constraint satisfaction problems (CSPs) in the unsatisfiable regime. We relate the structure of near-optimal solutions for any Max-CSP to that for an associated spin glass on the hypercube, using the Guerra-Toninelli interpolation from statistical physics. The noise stability polynomial of the CSP's predicate is, up to a constant, the mixture polynomial of the associated spin glass. We prove two main consequences: 1) We relate the maximum fraction of constraints that can be satisfied in a random Max-CSP to the ground state energy density of the corresponding spin glass. Since the latter value can be computed with the Parisi formula, we provide numerical values for some popular CSPs. 2) We prove that a Max-CSP possesses generalized versions of the overlap gap property if and only if the same holds for the corresponding spin glass. We transfer results from Huang et al. [arXiv:2110.07847, 2021] to obstruct algorithms with overlap concentration on a large class of Max-CSPs. This immediately includes local classical and local quantum algorithms. Comment: 41 pages, 1 table.
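    To unpack the stated correspondence in standard notation (the symbols below are conventional choices, not taken from the paper): a predicate $f\colon\{-1,1\}^k\to\{0,1\}$ has a Fourier expansion $f(x)=\sum_{S\subseteq[k]}\hat f(S)\,\chi_S(x)$, and its noise stability polynomial is
    $$\mathrm{Stab}_\rho(f)=\sum_{S\subseteq[k]}\hat f(S)^2\,\rho^{|S|},$$
    while a mixed spin glass is specified by a mixture polynomial $\xi(\rho)=\sum_{p\ge 1}c_p^2\,\rho^{p}$. Reading the abstract's claim in this notation, equating the two polynomials (up to a constant) matches the degree-$p$ Fourier weight of the predicate with the coefficient of the $p$-spin interaction.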

    Robustly Solvable Constraint Satisfaction Problems

    An algorithm for a constraint satisfaction problem is called robust if it outputs an assignment satisfying at least a $(1-g(\varepsilon))$-fraction of the constraints given a $(1-\varepsilon)$-satisfiable instance, where $g(\varepsilon) \rightarrow 0$ as $\varepsilon \rightarrow 0$. Guruswami and Zhou conjectured a characterization of constraint languages for which the corresponding constraint satisfaction problem admits an efficient robust algorithm. This paper confirms their conjecture.

    Monte Carlo algorithms are very effective in finding the largest independent set in sparse random graphs

    The effectiveness of stochastic algorithms based on Monte Carlo dynamics in solving hard optimization problems is mostly unknown. Beyond the basic statement that at a dynamical phase transition ergodicity breaks and a Monte Carlo dynamics cannot correctly sample the probability distribution in time linear in the system size, there are almost no predictions or intuitions about the behavior of this class of stochastic dynamics. The situation is particularly intricate because, when a Monte Carlo based algorithm is used as an optimization algorithm, one is usually interested in its out-of-equilibrium behavior, which is very hard to analyse. Here we focus on the use of parallel tempering in the search for the largest independent set in a sparse random graph, showing that it can find solutions well beyond the dynamical threshold. Comparison with state-of-the-art message-passing algorithms reveals that parallel tempering is clearly the best-performing algorithm, although a theory explaining its behavior is still lacking. Comment: 14 pages, 12 figures.
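    As a rough illustration of the parallel tempering strategy studied here (a generic, stdlib-only Python sketch, not the authors' implementation; the temperature ladder, sweep count, and graph parameters are arbitrary assumptions), several Metropolis replicas of the hard-core model run at different inverse temperatures and periodically attempt to swap configurations between neighbouring temperatures.

    import math
    import random

    def sparse_gnp(n, mean_degree, rng):
        """Adjacency sets of an Erdos-Renyi graph G(n, p) with p = mean_degree / n."""
        p = mean_degree / n
        adj = {v: set() for v in range(n)}
        for u in range(n):
            for v in range(u + 1, n):
                if rng.random() < p:
                    adj[u].add(v)
                    adj[v].add(u)
        return adj

    def pt_independent_set(adj, betas=(0.5, 1.0, 2.0, 4.0, 8.0), sweeps=2000, seed=0):
        """Parallel tempering for the hard-core model with energy E = -|S|."""
        rng = random.Random(seed)
        n = len(adj)
        replicas = [set() for _ in betas]        # one independent set per temperature
        best = set()

        def local_sweep(S, beta):
            # One Metropolis sweep: n single-vertex insertion/removal attempts.
            for _ in range(n):
                v = rng.randrange(n)
                if v in S:                       # removal raises the energy by 1
                    if rng.random() < math.exp(-beta):
                        S.remove(v)
                elif not (adj[v] & S):           # insertion keeps S independent
                    S.add(v)                     # and lowers the energy: always accept

        for _ in range(sweeps):
            for S, beta in zip(replicas, betas):
                local_sweep(S, beta)
            # Replica-exchange moves between neighbouring temperatures.
            for i in range(len(betas) - 1):
                delta_e = len(replicas[i + 1]) - len(replicas[i])    # E_i - E_{i+1}
                log_acc = (betas[i] - betas[i + 1]) * delta_e
                if log_acc >= 0 or rng.random() < math.exp(log_acc):
                    replicas[i], replicas[i + 1] = replicas[i + 1], replicas[i]
            if len(replicas[-1]) > len(best):    # the coldest replica holds the best sets
                best = set(replicas[-1])
        return best

    graph = sparse_gnp(1000, 4.0, random.Random(1))
    print(len(pt_independent_set(graph)))

    The coldest replica tracks the best independent set found, while the swap moves let configurations that are stuck at low temperature escape through the hotter replicas.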