
    Solving the Weighted Constraint Satisfaction Problems Via the Neural Network Approach

    A wide variety of real-world optimization problems can be modelled as Weighted Constraint Satisfaction Problems (WCSPs). In this paper, we model the WCSP as a 0-1 quadratic programming problem subject to linear constraints. To evaluate its performance, we use the continuous Hopfield network to solve the obtained model, based on an original energy function. To validate our model, we solve several benchmark WCSP instances, and our approach finds the optimal solution of these instances.
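
    As a rough illustration of the kind of model described (a sketch only: the penalty-based handling of the linear constraints, the parameter values, and the function name hopfield_01_qp are assumptions rather than the paper's formulation), a continuous Hopfield network minimizing a penalized energy for a 0-1 quadratic program can be simulated as follows:

    import numpy as np

    def hopfield_01_qp(Q, A, b, penalty=10.0, steps=5000, dt=0.01, gain=5.0, seed=0):
        """Sketch: minimize x^T Q x subject to A x = b, x in {0,1}^n, by running
        a continuous Hopfield network on the penalized energy
            E(x) = x^T Q x + penalty * ||A x - b||^2."""
        rng = np.random.default_rng(seed)
        u = rng.normal(scale=0.1, size=Q.shape[0])       # internal neuron potentials
        for _ in range(steps):
            x = 1.0 / (1.0 + np.exp(-gain * u))          # neuron outputs in (0, 1)
            grad = (Q + Q.T) @ x + 2 * penalty * A.T @ (A @ x - b)   # dE/dx
            u -= dt * grad                               # descend the energy
        return (x > 0.5).astype(int)                     # round to a 0-1 assignment

    # Toy usage: two binary variables, constraint x0 + x1 = 1, x1 is the cheaper choice.
    Q = np.diag([1.0, 0.2])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.0])
    print(hopfield_01_qp(Q, A, b))                       # expected: [0 1]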

    Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling

    Copyright @ 2000 IEEE. This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, an NP-complete constraint satisfaction problem. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases according to the sequence and resource constraints of the job-shop scheduling problem during processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to the quality of solutions and the solving speed. This work was supported by the Chinese National Natural Science Foundation under Grant 69684005, the Chinese National High-Tech Program under Grant 863-511-9609-003, and the EPSRC under Grant GR/L81468.
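
    The abstract does not give the network's equations, but its core mechanism, in which units representing operation start times are pushed around by sequence- and resource-constraint checks until a feasible schedule emerges, can be caricatured in a few lines (function and variable names are illustrative, not the paper's):

    def csann_style_repair(durations, job_ops, machine_of, max_sweeps=1000):
        """Sketch: start times play the role of neuron activations; 'constraint
        units' detect violations and feed back corrections until feasible.
        durations[op]  -> processing time of operation op
        job_ops[j]     -> ordered list of operation ids of job j
        machine_of[op] -> machine required by operation op"""
        start = {op: 0 for ops in job_ops for op in ops}
        for _ in range(max_sweeps):
            violated = False
            # Sequence-constraint units: an operation starts after its job predecessor ends.
            for ops in job_ops:
                for a, b in zip(ops, ops[1:]):
                    if start[b] < start[a] + durations[a]:
                        start[b] = start[a] + durations[a]   # feedback adjustment
                        violated = True
            # Resource-constraint units: operations sharing a machine must not overlap.
            order = sorted(start, key=start.get)
            for i, a in enumerate(order):
                for b in order[i + 1:]:
                    if machine_of[a] == machine_of[b] and start[b] < start[a] + durations[a]:
                        start[b] = start[a] + durations[a]   # push the later one right
                        violated = True
            if not violated:
                break
        return start

    # Toy usage: 2 jobs on 2 machines; job 0 = o0 -> o1, job 1 = o2 -> o3.
    durations = {"o0": 3, "o1": 2, "o2": 2, "o3": 4}
    job_ops = [["o0", "o1"], ["o2", "o3"]]
    machine_of = {"o0": "m1", "o1": "m2", "o2": "m2", "o3": "m1"}
    print(csann_style_repair(durations, job_ops, machine_of))   # a feasible set of start times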

    An improved constraint satisfaction adaptive neural network for job-shop scheduling

    Copyright @ Springer Science + Business Media, LLC 2009. This paper presents an improved constraint satisfaction adaptive neural network for job-shop scheduling problems. The neural network is constructed based on the constraint conditions of a job-shop scheduling problem. Its structure and neuron connections can change adaptively according to the real-time constraint satisfaction situations that arise during the solving process. Several heuristics are also integrated within the neural network to enhance and accelerate its convergence and to improve the quality of the solutions produced. An experimental study based on a set of benchmark job-shop scheduling problems shows that the improved constraint satisfaction adaptive neural network outperforms the original constraint satisfaction adaptive neural network in terms of computational time and the quality of the schedules it produces. The neural network approach is also shown experimentally to outperform three classical heuristic algorithms that are widely used as the basis of many state-of-the-art scheduling systems, and hence may be used to construct advanced job-shop scheduling systems. This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/E060722/01, in part by the National Natural Science Foundation of China under Grant 60821063, and in part by the National Basic Research Program of China under Grant 2009CB320601.
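
    For reference, the usual measure of schedule quality in job-shop scheduling is the makespan, i.e. the latest completion time over all operations (assumed here; the abstract does not name its exact criterion):

    def makespan(start, durations):
        """Latest completion time over all scheduled operations."""
        return max(start[op] + durations[op] for op in start)

    # Toy example: operations starting at 0 and 3 with durations 3 and 4 finish by time 7.
    print(makespan({"o1": 0, "o2": 3}, {"o1": 3, "o2": 4}))   # -> 7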

    Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks

    Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We show rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and provide mathematical proofs that guarantee these graph coloring problems will converge to a solution. The network is composed of non-saturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by non-linear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation, and also offer insight into the computational role of dual inhibitory mechanisms in neural circuits. Comment: Accepted manuscript, in press, Neural Computation (2018).
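
    The paper's continuous non-saturating dynamics do not fit in a short snippet, but the coupled winner-take-all idea can be conveyed with a toy discrete stand-in: each node hosts a module with one unit per color, neighbors inhibit the colors they currently use, and each module settles on its least-inhibited color (a deliberate simplification, not the paper's circuit):

    def wta_coloring(adj, n_colors=4, sweeps=100):
        """Toy stand-in for coupled WTA modules doing graph coloring.
        adj maps each node to the list of its neighbors."""
        color = {v: 0 for v in adj}                       # every module starts on color 0
        for _ in range(sweeps):
            changed = False
            for v in adj:
                # "Inhibition" each color receives from neighbors currently using it.
                inhibition = [sum(1 for u in adj[v] if color[u] == c) for c in range(n_colors)]
                winner = min(range(n_colors), key=lambda c: inhibition[c])
                if inhibition[winner] < inhibition[color[v]]:
                    color[v], changed = winner, True      # module switches to its winner
            if not changed:
                break
        return color

    # Toy usage: K4 (complete graph on four nodes) needs all four colors.
    adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
    col = wta_coloring(adj)
    print(col, all(col[u] != col[v] for u in adj for v in adj[u]))   # -> proper coloring, True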

    An event-based architecture for solving constraint satisfaction problems

    Constraint satisfaction problems (CSPs) are typically solved using conventional von Neumann computing architectures. However, these architectures do not reflect the distributed nature of many of these problems and are thus ill-suited to solving them. In this paper we present a hybrid analog/digital hardware architecture specifically designed to solve such problems. We cast CSPs as networks of stereotyped multi-stable oscillatory elements that communicate using digital pulses, or events. The oscillatory elements are implemented using analog non-stochastic circuits. The non-repeating phase relations among the oscillatory elements drive the exploration of the solution space. We show that this hardware architecture can yield state-of-the-art performance on a number of CSPs under reasonable assumptions on the implementation. We present measurements from a prototype electronic chip to demonstrate that a physical implementation of the proposed architecture is robust to practical non-idealities and to validate the proposed theory. Comment: First two authors contributed equally to this work.
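
    A very loose software caricature of the event-driven idea (the actual architecture uses analog multi-stable oscillators and continuous phase relations, none of which is modelled here): violated constraints emit events that make one of their variables hop to its next value, and a configuration that emits no events is returned as a solution.

    from operator import ne

    def event_driven_csp(domains, constraints, max_steps=1000):
        """Toy event loop: domains[v] lists the values of variable v;
        constraints is a list of ((u, v), predicate) pairs."""
        idx = {v: 0 for v in domains}                          # index into each domain
        for _ in range(max_steps):
            events = set()
            for (u, v), ok in constraints:
                if not ok(domains[u][idx[u]], domains[v][idx[v]]):
                    events.add(max(u, v))                      # the event targets one endpoint
            if not events:
                return {v: domains[v][idx[v]] for v in domains}
            for v in events:                                   # receiving an event = hop to next state
                idx[v] = (idx[v] + 1) % len(domains[v])
        return None                                            # no stable configuration found

    # Toy usage: three mutually connected variables must all take different values.
    domains = {0: [0, 1, 2], 1: [0, 1, 2], 2: [0, 1, 2]}
    constraints = [((0, 1), ne), ((1, 2), ne), ((0, 2), ne)]
    print(event_driven_csp(domains, constraints))              # -> {0: 0, 1: 1, 2: 2}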

    Bayesian Optimization with Unknown Constraints

    Recent work on Bayesian optimization has shown its effectiveness in global optimization of difficult black-box objective functions. Many real-world optimization problems of interest also have constraints which are unknown a priori. In this paper, we study Bayesian optimization for constrained problems in the general case that noise may be present in the constraint functions, and the objective and constraints may be evaluated independently. We provide motivating practical examples, and present a general framework to solve such problems. We demonstrate the effectiveness of our approach on optimizing the performance of online latent Dirichlet allocation subject to topic sparsity constraints, tuning a neural network given test-time memory constraints, and optimizing Hamiltonian Monte Carlo to achieve maximal effectiveness in a fixed time, subject to passing standard convergence diagnostics. Comment: 14 pages, 3 figures.
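
    The general idea can be sketched with off-the-shelf Gaussian-process regressors using one common acquisition for this setting, expected improvement weighted by the probability that the constraint is satisfied (an illustration only; the paper's framework also covers decoupled objective/constraint evaluations):

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor

    def constrained_acquisition(gp_obj, gp_con, X_cand, best_feasible):
        """Expected improvement (for minimization) times P[c(x) <= 0]."""
        mu, sigma = gp_obj.predict(X_cand, return_std=True)
        mu_c, sigma_c = gp_con.predict(X_cand, return_std=True)
        sigma = np.maximum(sigma, 1e-9)
        z = (best_feasible - mu) / sigma
        ei = sigma * (z * norm.cdf(z) + norm.pdf(z))                 # expected improvement
        p_feas = norm.cdf(-mu_c / np.maximum(sigma_c, 1e-9))         # probability of feasibility
        return ei * p_feas

    # Toy usage: minimize f(x) = (x - 0.7)^2 subject to c(x) = 0.5 - x <= 0 on [0, 1].
    rng = np.random.default_rng(0)
    X = np.linspace(0.05, 0.95, 8).reshape(-1, 1)
    f = (X[:, 0] - 0.7) ** 2 + 0.01 * rng.normal(size=8)             # noisy objective observations
    c = 0.5 - X[:, 0] + 0.01 * rng.normal(size=8)                    # noisy constraint observations
    gp_f = GaussianProcessRegressor(alpha=1e-4).fit(X, f)
    gp_c = GaussianProcessRegressor(alpha=1e-4).fit(X, c)
    X_cand = np.linspace(0, 1, 200).reshape(-1, 1)
    acq = constrained_acquisition(gp_f, gp_c, X_cand, f[c <= 0].min())
    print(X_cand[np.argmax(acq)])                                    # next point to evaluate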

    Constrained Deep Networks: Lagrangian Optimization via Log-Barrier Extensions

    This study investigates the optimization aspects of imposing hard inequality constraints on the outputs of CNNs. In the context of deep networks, constraints are commonly handled with penalties because of their simplicity, despite their well-known limitations. Lagrangian-dual optimization has been largely avoided, except for a few recent works, mainly due to the computational complexity and stability/convergence issues caused by alternating explicit dual updates/projections and stochastic optimization. Several studies have shown that, surprisingly, for deep CNNs the theoretical and practical advantages of Lagrangian optimization over penalties do not materialize in practice. We propose log-barrier extensions, which approximate Lagrangian optimization of constrained-CNN problems with a sequence of unconstrained losses. Unlike standard interior-point and log-barrier methods, our formulation does not need an initial feasible solution. Furthermore, we provide a new technical result showing that the proposed extensions yield an upper bound on the duality gap. This generalizes the duality-gap result of standard log-barriers, yielding sub-optimality certificates for feasible solutions. While sub-optimality is not guaranteed for non-convex problems, our result shows that log-barrier extensions are a principled way to approximate Lagrangian optimization for constrained CNNs via implicit dual variables. We report comprehensive weakly supervised segmentation experiments, with various constraints, showing that our formulation substantially outperforms existing constrained-CNN methods in terms of accuracy, constraint satisfaction, and training stability, especially when dealing with a large number of constraints.
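
    A sketch of what such a log-barrier extension can look like for a single constraint z <= 0 (illustrative; see the paper for the exact definition and for how the barrier parameter t is scheduled during training):

    import numpy as np

    def log_barrier_extension(z, t=5.0):
        """Standard log-barrier -log(-z)/t strictly inside the feasible region,
        extended linearly near and beyond the boundary so the penalty is finite
        everywhere and no initial feasible point is required."""
        z = np.asarray(z, dtype=float)
        inside = z <= -1.0 / t ** 2
        barrier = -np.log(np.where(inside, -z, 1.0)) / t        # barrier branch (safe log)
        linear = t * z - np.log(1.0 / t ** 2) / t + 1.0 / t     # linear extension branch
        return np.where(inside, barrier, linear)

    # The penalty increases smoothly as the constraint is approached and then violated:
    print(log_barrier_extension([-1.0, -0.1, 0.0, 0.5], t=5.0))

    In training, terms like this would typically be added to the task loss, one per constraint, with t increased over the sequence of unconstrained problems so that the extension approaches a hard barrier.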

    Job-shop scheduling with an adaptive neural network and local search hybrid approach

    This article is posted here with permission from IEEE - Copyright @ 2006 IEEE. Job-shop scheduling is one of the most difficult production scheduling problems in industry. This paper proposes an adaptive neural network and local search hybrid approach for the job-shop scheduling problem. The adaptive neural network is constructed based on the constraint satisfaction of job-shop scheduling and can adapt its structure and neuron connections during the solving process. The neural network is used to obtain feasible schedules for the job-shop scheduling problem, while the local search scheme aims to improve performance by searching the neighbourhood of a given feasible schedule. The experimental study validates the proposed hybrid approach for job-shop scheduling with respect to the quality of solutions and the computing speed.
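
    The local-search half of such a hybrid can be summarised as a generic hill-climbing skeleton over schedule neighbourhoods (names and the toy neighbourhood below are illustrative; the paper's neighbourhood structure and acceptance rule may differ):

    def local_search(schedule, neighbours, cost, max_iters=1000):
        """Starting from a feasible schedule (e.g. one produced by the neural
        network), repeatedly move to the best lower-cost neighbour.
        neighbours(s) -> iterable of feasible schedules near s
        cost(s)       -> schedule quality, e.g. the makespan"""
        best, best_cost = schedule, cost(schedule)
        for _ in range(max_iters):
            candidates = [(cost(s), s) for s in neighbours(best)]
            if not candidates:
                break
            c, s = min(candidates, key=lambda p: p[0])
            if c >= best_cost:
                break                                  # local optimum reached
            best, best_cost = s, c
        return best, best_cost

    # Toy usage: "schedules" are permutations, neighbours swap adjacent items,
    # and the cost counts out-of-order pairs (stand-ins for real schedules and makespan).
    def swaps(s):
        return [s[:i] + [s[i + 1], s[i]] + s[i + 2:] for i in range(len(s) - 1)]

    def inversions(s):
        return sum(1 for i in range(len(s)) for j in range(i + 1, len(s)) if s[i] > s[j])

    print(local_search([3, 1, 2, 0], swaps, inversions))   # -> ([0, 1, 2, 3], 0)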