Learning in stochastic neural networks for constraint satisfaction problems

Abstract

Researchers describe a newly developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) that includes a learning component able to significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often converges quickly to satisfactory solutions when they exist. The network runs efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the network architecture is that connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. The authors have exploited this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thereby improves network performance.
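
To make the flavor of such a stochastic repair network concrete, the following is a minimal illustrative sketch for N-queens. It is not the paper's exact GDS update rule: the function names, the step budget, and the simple "move the queen in a violated row to its least-conflicted column" rule are assumptions for illustration only; the guard's role of keeping exactly one queen active per row is approximated by the one-queen-per-row representation.

    import random

    def conflicts(queens, row, col):
        """Count how many constraints a queen at (row, col) would violate
        with the queens placed in the other rows (same column or diagonal)."""
        return sum(
            1
            for r, c in enumerate(queens)
            if r != row and (c == col or abs(c - col) == abs(r - row))
        )

    def solve_n_queens(n, max_steps=200_000, seed=0):
        """Illustrative stochastic repair for N-queens (not the exact GDS rule).
        One queen is kept per row (the guard's job in the GDS network); the queen
        in a randomly chosen violated row is moved to the least-conflicted column."""
        rng = random.Random(seed)
        queens = [rng.randrange(n) for _ in range(n)]  # column of the queen in each row
        for _ in range(max_steps):
            violated = [r for r in range(n) if conflicts(queens, r, queens[r]) > 0]
            if not violated:
                return queens                      # all constraints satisfied
            row = rng.choice(violated)             # pick a violated row at random
            queens[row] = min(range(n), key=lambda c: conflicts(queens, row, c))
        return None                                # step budget exhausted

    if __name__ == "__main__":
        print(solve_n_queens(64))

Note that connection strengths never appear explicitly in this sketch; conflict counts are computed only when a cell is considered for activation, which mirrors the architectural feature the abstract highlights (connection strengths are needed only when a neural element turns on).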