23,879 research outputs found
Learning in stochastic neural networks for constraint satisfaction problems
Researchers describe a newly developed artificial neural network algorithm for solving constraint satisfaction problems (CSPs) that includes a learning component able to significantly improve the performance of the network from run to run. The network, referred to as the Guarded Discrete Stochastic (GDS) network, is based on the discrete Hopfield network but differs from it primarily in that auxiliary networks (guards) are asymmetrically coupled to the main network to enforce certain types of constraints. Although the presence of asymmetric connections implies that the network may not converge, it was found that, for certain classes of problems, the network often quickly converges to satisfactory solutions when they exist. The network runs efficiently on serial machines and can find solutions to very large problems (e.g., N-queens for N as large as 1024). One advantage of the architecture is that connection strengths need not be instantiated when the network is established: they are needed only when a participating neural element transitions from off to on. The researchers exploit this feature to devise a learning algorithm, based on consistency techniques for discrete CSPs, that updates the network biases and connection strengths and thus improves network performance.
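The abstract gives no pseudocode for the GDS network itself, but the flavor of stochastic repair it describes can be illustrated with a classic min-conflicts search for N-queens. This is a sketch under that substitution: the guards, asymmetric couplings, and learning rule of the actual GDS network are not modeled here.

```python
import random

def min_conflicts_queens(n, max_steps=20000, seed=0):
    """Stochastic repair search for N-queens: one queen per column;
    repeatedly move a conflicted queen to a least-conflicted row."""
    rng = random.Random(seed)
    rows = [rng.randrange(n) for _ in range(n)]

    def conflicts(col, row):
        # Queens in other columns attacking square (col, row).
        return sum(1 for c in range(n) if c != col and
                   (rows[c] == row or abs(rows[c] - row) == abs(c - col)))

    for _ in range(max_steps):
        conflicted = [c for c in range(n) if conflicts(c, rows[c]) > 0]
        if not conflicted:
            return rows                       # all constraints satisfied
        col = rng.choice(conflicted)
        # Greedy repair: pick a row minimizing conflicts (random tie-break).
        scores = [conflicts(col, r) for r in range(n)]
        best = min(scores)
        rows[col] = rng.choice([r for r, s in enumerate(scores) if s == best])
    return None                               # budget exhausted

# Retry with fresh random seeds until a solution is found.
solution = next(s for k in range(20)
                if (s := min_conflicts_queens(16, seed=k)) is not None)
```

Like the GDS network, this local search is not guaranteed to converge, yet in practice it finds solutions quickly for the problem sizes the abstract mentions.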
Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's Inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letters
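The role of Hoeffding's Inequality here can be sketched in a few lines. The function below is a hypothetical helper, not the paper's exact validation procedure: given n independent validation rollouts of the learned controller and the number that violate the robustness bound, it returns a high-confidence upper bound on the true violation probability.

```python
import math

def hoeffding_confidence(n_samples, n_violations, delta):
    """One-sided Hoeffding bound: with probability >= 1 - delta, the true
    violation probability is at most empirical rate + sqrt(ln(1/delta)/(2n))."""
    empirical = n_violations / n_samples
    slack = math.sqrt(math.log(1.0 / delta) / (2.0 * n_samples))
    return empirical + slack

# e.g. 0 violations in 1000 validation rollouts, at 99% confidence:
bound = hoeffding_confidence(1000, 0, 0.01)   # ≈ 0.048
```

If this bound is small enough to fall within the robustness margin of the underlying MPC design, the closed-loop statistical guarantee follows.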
Graph Neural Networks Meet Neural-Symbolic Computing: A Survey and Perspective
Neural-symbolic computing has now become the subject of interest of both
academic and industry research laboratories. Graph Neural Networks (GNN) have
been widely used in relational and symbolic domains, with widespread
application of GNNs in combinatorial optimization, constraint satisfaction,
relational reasoning and other scientific domains. The need for improved
explainability, interpretability and trust of AI systems in general demands
principled methodologies, as suggested by neural-symbolic computing. In this
paper, we review the state-of-the-art on the use of GNNs as a model of
neural-symbolic computing. This includes the application of GNNs in several
domains as well as their relationship to current developments in neural-symbolic
computing.
Comment: Updated version, draft of accepted IJCAI 2020 Survey Paper
Rhythmic inhibition allows neural networks to search for maximally consistent states
Gamma-band rhythmic inhibition is a ubiquitous phenomenon in neural circuits
yet its computational role still remains elusive. We show that a model of
Gamma-band rhythmic inhibition allows networks of coupled cortical circuit
motifs to search for network configurations that best reconcile external inputs
with an internal consistency model encoded in the network connectivity. We show
that Hebbian plasticity allows the networks to learn the consistency model by
example. The search dynamics driven by rhythmic inhibition enable the described
networks to solve difficult constraint satisfaction problems without making
assumptions about the form of stochastic fluctuations in the network. We show
that the search dynamics are well approximated by a stochastic sampling
process. We use the described networks to reproduce perceptual multi-stability
phenomena with switching times that are a good match to experimental data and
show that they provide a general neural framework which can be used to model
other 'perceptual inference' phenomena.
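As a generic illustration of a stochastic sampling process searching for maximally consistent states, the sketch below runs Gibbs sampling over a Hopfield-style energy and keeps the most consistent (lowest-energy) state visited. This is a stand-in for intuition only; the paper's rhythmic-inhibition dynamics are not reproduced here, and the weights are made up.

```python
import math, random

def energy(state, W, b):
    """Hopfield-style consistency score: lower energy = more consistent."""
    n = len(state)
    e = -sum(b[i] * state[i] for i in range(n))
    e -= 0.5 * sum(W[i][j] * state[i] * state[j]
                   for i in range(n) for j in range(n))
    return e

def gibbs_search(W, b, sweeps=2000, T=0.5, seed=0):
    """Gibbs sampling over binary states; returns the best state visited."""
    rng = random.Random(seed)
    n = len(b)
    s = [rng.randint(0, 1) for _ in range(n)]
    best = (energy(s, W, b), list(s))
    for _ in range(sweeps):
        for i in range(n):
            # Local field h = -(E(s_i=1) - E(s_i=0)) for symmetric W, W_ii = 0.
            h = b[i] + sum(W[i][j] * s[j] for j in range(n) if j != i)
            s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-h / T)) else 0
        e = energy(s, W, b)
        if e < best[0]:
            best = (e, list(s))
    return best

# Tiny example: two mutually supportive units that inhibit a third.
W = [[0, 2, -3], [2, 0, -3], [-3, -3, 0]]
b = [0.5, 0.5, 1.0]
e_best, s_best = gibbs_search(W, b)
```

On this three-unit network the sampler settles on the configuration that best reconciles the biases (external input) with the connectivity (internal consistency model).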
Reach-SDP: Reachability Analysis of Closed-Loop Systems with Neural Network Controllers via Semidefinite Programming
There has been an increasing interest in using neural networks in closed-loop
control systems to improve performance and reduce computational costs for
on-line implementation. However, providing safety and stability guarantees for
these systems is challenging due to the nonlinear and compositional structure
of neural networks. In this paper, we propose a novel forward reachability
analysis method for the safety verification of linear time-varying systems with
neural networks in feedback interconnection. Our technical approach relies on
abstracting the nonlinear activation functions by quadratic constraints, which
leads to an outer-approximation of forward reachable sets of the closed-loop
system. We show that we can compute these approximate reachable sets using
semidefinite programming. We illustrate our method in a quadrotor example, in
which we first approximate a nonlinear model predictive controller via a deep
neural network and then apply our analysis tool to certify finite-time
reachability and constraint satisfaction of the closed-loop system.
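The semidefinite-programming relaxation itself is beyond a short sketch, but the underlying idea of outer-approximating the reachable set of a ReLU network can be illustrated with a much looser interval-arithmetic version. The network weights below are invented for illustration; the paper's quadratic-constraint abstraction yields considerably tighter sets.

```python
def interval_affine(l, u, W, b):
    """Propagate an axis-aligned box [l, u] through x -> Wx + b."""
    lo, hi = [], []
    for row, bias in zip(W, b):
        lo.append(bias + sum(w * (l[j] if w >= 0 else u[j])
                             for j, w in enumerate(row)))
        hi.append(bias + sum(w * (u[j] if w >= 0 else l[j])
                             for j, w in enumerate(row)))
    return lo, hi

def reach_box(l, u, layers):
    """Outer-approximate the output set of a ReLU net over input box [l, u]."""
    for i, (W, b) in enumerate(layers):
        l, u = interval_affine(l, u, W, b)
        if i < len(layers) - 1:               # ReLU on hidden layers only
            l = [max(0.0, x) for x in l]
            u = [max(0.0, x) for x in u]
    return l, u

# Toy two-layer ReLU network (hypothetical weights).
layers = [([[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]),
          ([[1.0, 1.0]], [0.0])]
lo, hi = reach_box([-1.0, -1.0], [1.0, 1.0], layers)
```

Any true output of the network on the input box is guaranteed to lie in `[lo, hi]`, which is exactly the soundness property a reachability certificate needs; the SDP approach trades this simplicity for tightness.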
Constraint satisfaction adaptive neural network and heuristics combined approaches for generalized job-shop scheduling
Copyright © 2000 IEEE.
This paper presents a constraint satisfaction adaptive neural network, together with several heuristics, to solve the generalized job-shop scheduling problem, an NP-complete constraint satisfaction problem. The proposed neural network can be easily constructed and can adaptively adjust its connection weights and unit biases based on the sequence and resource constraints of the job-shop scheduling problem during processing. Several heuristics that can be combined with the neural network are also presented. In the combined approaches, the neural network is used to obtain feasible solutions, while the heuristic algorithms are used to improve the performance of the neural network and the quality of the obtained solutions. Simulations have shown that the proposed neural network and its combined approaches are efficient with respect to both the quality of solutions and the solving speed.
This work was supported by the Chinese National Natural Science Foundation under Grant 69684005, the Chinese National High-Tech Program under Grant 863-511-9609-003, and the EPSRC under Grant GR/L81468.
Solving constraint-satisfaction problems with distributed neocortical-like neuronal networks
Finding actions that satisfy the constraints imposed by both external inputs
and internal representations is central to decision making. We demonstrate that
some important classes of constraint satisfaction problems (CSPs) can be solved
by networks composed of homogeneous cooperative-competitive modules that have
connectivity similar to motifs observed in the superficial layers of neocortex.
The winner-take-all modules are sparsely coupled by programming neurons that
embed the constraints onto the otherwise homogeneous modular computational
substrate. We show rules that embed any instance of the CSPs planar four-color
graph coloring, maximum independent set, and Sudoku on this substrate, and
provide mathematical proofs that guarantee these graph coloring problems will
converge to a solution. The network is composed of non-saturating linear
threshold neurons. Their lack of right saturation allows the overall network to
explore the problem space driven through the unstable dynamics generated by
recurrent excitation. The direction of exploration is steered by the constraint
neurons. While many problems can be solved using only linear inhibitory
constraints, network performance on hard problems benefits significantly when
these negative constraints are implemented by non-linear multiplicative
inhibition. Overall, our results demonstrate the importance of instability
rather than stability in network computation, and also offer insight into the
computational role of dual inhibitory mechanisms in neural circuits.
Comment: Accepted manuscript, in press, Neural Computation (2018)
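The continuous winner-take-all dynamics cannot be reconstructed from the abstract alone, but a discrete caricature of the idea can be sketched: each node repeatedly "wins" the color least in conflict with its neighbors, with the adjacency structure playing the role of the constraint neurons. This is illustrative only and carries none of the paper's convergence guarantees.

```python
def color_graph(adj, n_colors, max_passes=100):
    """Greedy relaxation: each node repeatedly takes the color with the
    fewest conflicts among its neighbors (a discrete winner-take-all step)."""
    colors = {v: 0 for v in adj}
    for _ in range(max_passes):
        changed = False
        for v in adj:
            counts = [sum(1 for u in adj[v] if colors[u] == c)
                      for c in range(n_colors)]
            best = counts.index(min(counts))
            if best != colors[v] and counts[best] < counts[colors[v]]:
                colors[v] = best
                changed = True
        if not changed:      # fixed point: no node can reduce its conflicts
            break
    return colors

# K4, the smallest planar graph that needs all four colors:
adj = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}
coloring = color_graph(adj, 4)
```

On harder instances this greedy relaxation can stall in a conflicted fixed point, which is precisely where the paper's unstable recurrent dynamics and non-linear multiplicative inhibition are argued to help.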