
    Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks

    The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this approach seems to require top-down domain knowledge, and is not obviously amenable to the spontaneous energy minimisation behaviour of a natural dynamical system. In this paper, however, we investigate the ability of distributed dynamical systems to improve their constraint-resolution ability over time by self-organisation. We use a 'self-modelling' Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimisation behaviour of this new system is significantly more effective at globally resolving the original system's constraints. This model uses only very simple, fully distributed positive feedback mechanisms that are relevant to other 'active linking' and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks, such as social, genetic and ecological networks.
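
    The loop the abstract describes can be made concrete in a few lines. Below is a minimal sketch, assuming a random symmetric constraint matrix W as the "original" problem; the learning rate, the relaxation schedule, and the reduction of the paper's novel associative connections to a plain Hebbian trace on learned weights A are all illustrative simplifications, not the authors' exact model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                            # number of units / problem variables
eta = 0.0005                      # slow associative learning rate

# Fixed symmetric constraint weights: a hypothetical random instance standing
# in for the "original" problem constraints.
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

A = np.zeros((n, n))              # slowly accumulated associative weights

def relax(weights, steps=600):
    """Asynchronous Hopfield dynamics from a random initial state."""
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(s):
    """Energy under the ORIGINAL constraints only (learned weights excluded)."""
    return -0.5 * s @ W @ s

for epoch in range(201):
    s = relax(W + A)              # dynamics follow constraints + associations
    A += eta * np.outer(s, s)     # Hebbian trace of the visited attractor
    np.fill_diagonal(A, 0)
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}  energy of found state: {energy(s):+.2f}")
```

    Over many relaxations from random initial states, the learned associations A reshape the attractor landscape, so later relaxations tend to land in states with lower energy under the original constraints W alone.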

    Social interaction as a heuristic for combinatorial optimization problems

    We investigate the performance of a variant of Axelrod's model for dissemination of culture, the Adaptive Culture Heuristic (ACH), on an NP-complete optimization problem, namely the classification of binary input patterns of size F by a Boolean binary perceptron. In this heuristic, N agents, characterized by binary strings of length F which represent candidate solutions to the optimization problem, are fixed at the sites of a square lattice and interact with their nearest neighbors only. The interactions are such that the agents' strings (or cultures) become more similar to the low-cost strings of their neighbors, resulting in the dissemination of these strings across the lattice. Eventually the dynamics freezes into a homogeneous absorbing configuration in which all agents exhibit identical solutions to the optimization problem. We find through extensive simulations that the probability of finding the optimal solution is a function of the reduced variable F/N^{1/4}, so that the number of agents must increase with the fourth power of the problem size, N ∝ F^4, to guarantee a fixed probability of success. In this case, we find that the relaxation time to reach an absorbing configuration scales with F^6, which can be interpreted as the overall computational cost of the ACH to find an optimal set of weights for a Boolean binary perceptron, given a fixed probability of success.
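
    A simplified rendering of the ACH dynamics is sketched below. The cost function here is a hypothetical Hamming distance to a hidden target string rather than the paper's Boolean binary perceptron training error; the lattice size, update rule details, and stopping criterion are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
L, F = 10, 16                     # lattice side (N = L*L agents), string length

# Placeholder cost: Hamming distance to a hidden target string. The paper's
# cost is the training error of a Boolean binary perceptron; any bit-string
# cost function slots in here.
target = rng.integers(0, 2, size=F)

def cost(s):
    return int(np.sum(s != target))

agents = rng.integers(0, 2, size=(L, L, F))
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

for step in range(20000):
    i, j = rng.integers(L), rng.integers(L)
    di, dj = moves[rng.integers(4)]
    ni, nj = (i + di) % L, (j + dj) % L          # periodic boundaries
    a, b = agents[i, j], agents[ni, nj]
    if cost(b) < cost(a):                        # imitate the lower-cost neighbour
        diff = np.flatnonzero(a != b)
        if diff.size:
            k = rng.choice(diff)
            a[k] = b[k]                          # copy one differing trait

best = min(cost(agents[i, j]) for i in range(L) for j in range(L))
print("best cost on the lattice:", best)
```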

    Asymptotic behavior of memristive circuits

    Interest in memristors has risen due to their possible application both as memory units and as computational devices in combination with CMOS. This is in part due to their nonlinear dynamics and a strong dependence on the circuit topology. We provide evidence that purely memristive circuits can also be employed for computational purposes. In the present paper we show that a polynomial Lyapunov function in the memory parameters exists for the case of DC-controlled memristors. Such a Lyapunov function can be asymptotically approximated with binary variables and mapped to quadratic combinatorial optimization problems. This also shows a direct parallel between memristive circuits and the Hopfield-Little model. In the case of Erdős-Rényi random circuits, we show numerically that the distribution of the matrix elements of the projectors can be roughly approximated with a Gaussian distribution, and that it scales with the inverse square root of the number of elements. This provides an approximate but direct connection with the physics of disordered systems and, in particular, of mean-field spin glasses. Using this, and the fact that the interaction is controlled by a projector operator on the loop space of the circuit, we estimate the number of stationary points of the approximate Lyapunov function and provide a scaling formula as an upper bound in terms of the circuit topology only. (Comment: 20 pages, 8 figures; proofs corrected, figures changed; results substantially unchanged; to appear in Entropy.)
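
    The mapping the abstract alludes to, from an approximate Lyapunov function to a binary quadratic (Hopfield-Little-style) energy whose stationary points can be counted, can be illustrated on a toy instance. The sketch below draws Gaussian couplings scaled by 1/sqrt(n), echoing the abstract's numerical observation, and counts one-flip-stable configurations by exhaustive enumeration; the actual projector structure of a memristive circuit is not reproduced here.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 12                            # small enough for exhaustive enumeration

# Gaussian couplings scaled by 1/sqrt(n), echoing the abstract's observation
# about projector matrix elements in Erdős-Rényi random circuits.
J = rng.normal(scale=1 / np.sqrt(n), size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)

def energy(s):
    """Binary quadratic stand-in for the approximate Lyapunov function."""
    return -0.5 * s @ J @ s

def is_local_min(s):
    """Stable against every single spin flip (Hopfield-Little sense)."""
    e = energy(s)
    for i in range(n):
        t = s.copy()
        t[i] = -t[i]
        if energy(t) < e:
            return False
    return True

n_minima = sum(is_local_min(np.array(cfg))
               for cfg in itertools.product([-1, 1], repeat=n))
print(f"{n_minima} one-flip-stable states out of {2**n} configurations")
```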

    Equilibrium Propagation: Bridging the Gap Between Energy-Based Models and Backpropagation

    We introduce Equilibrium Propagation, a learning framework for energy-based models. It involves only one kind of neural computation, performed in both the first phase (when the prediction is made) and the second phase of training (after the target or prediction error is revealed). Although this algorithm computes the gradient of an objective function just like Backpropagation, it does not need a special computation or circuit for the second phase, where errors are implicitly propagated. Equilibrium Propagation shares similarities with Contrastive Hebbian Learning and Contrastive Divergence while solving the theoretical issues of both algorithms: our algorithm computes the gradient of a well-defined objective function. Because the objective function is defined in terms of local perturbations, the second phase of Equilibrium Propagation corresponds to only nudging the prediction (fixed point, or stationary distribution) towards a configuration that reduces prediction error. In the case of a recurrent multi-layer supervised network, the output units are slightly nudged towards their target in the second phase, and the perturbation introduced at the output layer propagates backward through the hidden layers. We show that the signal 'back-propagated' during this second phase corresponds to the propagation of error derivatives and encodes the gradient of the objective function when the synaptic update corresponds to a standard form of spike-timing-dependent plasticity. This work makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains, since leaky-integrator neural computation performs both inference and error back-propagation in our model. The only local difference between the two phases is whether synaptic changes are allowed or not.
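
    The two-phase procedure can be sketched compactly. The toy network below uses a hard-sigmoid activation and plain gradient-descent relaxation, and omits the rho' factor that appears in the paper's exact dynamics; the layer sizes, nudging strength beta, step sizes, and training loop are illustrative, not the authors' reference implementation.

```python
import numpy as np

rng = np.random.default_rng(3)
nx, nh, ny = 4, 8, 2                       # layer sizes (illustrative)

W1 = rng.normal(scale=0.5, size=(nh, nx))  # input -> hidden
W2 = rng.normal(scale=0.5, size=(ny, nh))  # hidden -> output

def rho(u):
    return np.clip(u, 0.0, 1.0)            # hard-sigmoid activation

def relax(x, h, y, beta=0.0, target=None, steps=100, dt=0.1):
    """Gradient-descent relaxation of the (possibly nudged) state energy."""
    for _ in range(steps):
        dh = -h + W1 @ rho(x) + W2.T @ rho(y)
        dy = -y + W2 @ rho(h)
        if beta:                            # phase 2: weakly clamp the outputs
            dy = dy + beta * (target - y)
        h, y = h + dt * dh, y + dt * dy
    return h, y

def eqprop_step(x, target, beta=0.5, lr=0.05):
    global W1, W2
    h_f, y_f = relax(x, np.zeros(nh), np.zeros(ny))        # free phase
    h_n, y_n = relax(x, h_f, y_f, beta, target)            # nudged phase
    # Contrastive update: (1/beta) * difference of local correlations.
    W1 += (lr / beta) * (np.outer(rho(h_n), rho(x)) - np.outer(rho(h_f), rho(x)))
    W2 += (lr / beta) * (np.outer(rho(y_n), rho(h_n)) - np.outer(rho(y_f), rho(h_f)))

x, d = rng.random(nx), np.array([1.0, 0.0])
for step in range(100):
    eqprop_step(x, d)
_, y = relax(x, np.zeros(nh), np.zeros(ny))
print("prediction:", rho(y), "target:", d)
```

    The key point the sketch preserves is that both phases run the same relaxation dynamics; the only differences are the weak output clamping and whether the local, Hebbian-style weight update is applied.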

    Investigation of automated task learning, decomposition and scheduling

    The details and results of research conducted on the application of neural networks to task planning and decomposition are presented. Task planning and decomposition are operations that humans perform in a reasonably efficient manner. Without good heuristics and usually much human interaction, automatic planners and decomposers generally do not perform well, due to the intractable nature of the problems under consideration. The human-like performance of neural networks has shown promise for generating acceptable solutions to intractable problems such as planning and decomposition, which was the primary motivation for this study. The basis for the work is the use of state machines to model tasks. State machine models provide a useful means for examining the structure of tasks, since many formal techniques have been developed for their analysis and synthesis. The approach integrates the strong algebraic foundations of state machines with the heretofore trial-and-error approach to neural network synthesis.
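
    As a data-structure illustration only (the report's algebraic analysis and synthesis techniques are not reproduced), a task can be modelled as a finite state machine whose transition table is flattened into training patterns for a network; all names below are hypothetical.

```python
# A task modelled as a finite state machine: states, inputs (events), and a
# transition table. Names are illustrative, not from the report.
task_fsm = {
    "states": ["idle", "fetch", "assemble", "done"],
    "inputs": ["start", "part_ready", "finish"],
    "delta": {  # (state, input) -> next state
        ("idle",     "start"):      "fetch",
        ("fetch",    "part_ready"): "assemble",
        ("assemble", "finish"):     "done",
    },
}

def one_hot(item, vocab):
    v = [0.0] * len(vocab)
    v[vocab.index(item)] = 1.0
    return v

# Flatten the transition table into (input vector, target vector) pairs --
# the kind of pattern set a neural network could be trained on.
patterns = [
    (one_hot(s, task_fsm["states"]) + one_hot(i, task_fsm["inputs"]),
     one_hot(t, task_fsm["states"]))
    for (s, i), t in task_fsm["delta"].items()
]
for x, y in patterns:
    print(x, "->", y)
```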

    Robust Artificial Immune System in the Hopfield network for Maximum k-Satisfiability

    The Artificial Immune System (AIS) algorithm is a novel and vibrant computational paradigm inspired by the biological immune system. Over the last few years, artificial immune systems have been applied to numerous computational and combinatorial optimization problems. In this paper, we introduce restricted MAX-kSAT as a constraint optimization problem that can be solved by a robust computational technique. We implement the artificial immune system algorithm incorporated with the Hopfield neural network to solve the restricted MAX-kSAT problem. The proposed paradigm is compared with the traditional method, a brute-force search algorithm integrated with the Hopfield neural network. The results demonstrate that the artificial immune system integrated with the Hopfield network outperforms the conventional Hopfield network in solving restricted MAX-kSAT. All in all, the results provide concrete evidence of the effectiveness of the proposed paradigm for application to other constraint optimization problems. The work presented here has many profound implications for future studies tackling other variants of the satisfiability problem.
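
    A simplified, CLONALG-style rendering of the immune-system search is sketched below on a random MAX-2SAT instance. The clause encoding, clone counts, and mutation schedule are illustrative assumptions, and the authors' integration with the Hopfield network's energy dynamics is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n_vars, n_clauses = 20, 60

# Random MAX-2SAT instance: a clause is two signed, 1-based variable indices
# (negative = negated literal). A hypothetical instance, for illustration.
clauses = [tuple(rng.choice([-1, 1], 2) *
                 rng.choice(np.arange(1, n_vars + 1), 2, replace=False))
           for _ in range(n_clauses)]

def fitness(assign):
    """Number of satisfied clauses (the quantity MAX-2SAT maximizes)."""
    return sum(any((assign[abs(l) - 1] == 1) == (l > 0) for l in c)
               for c in clauses)

pop = rng.integers(0, 2, size=(30, n_vars))        # antibody population
for gen in range(200):
    scores = np.array([fitness(a) for a in pop])
    elite = pop[np.argsort(scores)[::-1][:10]]
    clones = []
    for rank, ab in enumerate(elite):              # better antibodies get more
        for _ in range(10 - rank):                 # clones, mutated more gently
            c = ab.copy()
            c[rng.random(n_vars) < 0.02 * (rank + 1)] ^= 1
            clones.append(c)
    cscores = np.array([fitness(c) for c in clones])
    pop = np.array(clones)[np.argsort(cscores)[::-1][:30]]

print("best:", max(fitness(a) for a in pop), "of", n_clauses, "clauses")
```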

    Genetic Algorithm for Restricted Maximum k-Satisfiability in the Hopfield Network

    Restricted Maximum k-Satisfiability (MAX-kSAT) is a variant of Boolean satisfiability that has attracted a considerable amount of research. The genetic algorithm has been a prominent heuristic for solving constraint optimization problems. The core motivation of this paper is to introduce a Hopfield network incorporating a genetic algorithm for solving the MAX-kSAT problem. The genetic algorithm is integrated with the Hopfield network as a single network. The proposed method is compared with the conventional Hopfield network. The results demonstrate that the Hopfield network with a genetic algorithm outperforms the conventional Hopfield network. Furthermore, the outcome provides solid evidence of the robustness of the proposed algorithm for use on other satisfiability problems.
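
    For comparison with the immune-system sketch above, a minimal genetic algorithm over candidate assignments might look as follows. Fitness is again the number of satisfied clauses of a random MAX-2SAT instance, and the selection, crossover, and mutation settings are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(5)
n_vars, n_clauses, pop_size = 20, 60, 40

# Same MAX-2SAT encoding as the immune-system sketch above.
clauses = [tuple(rng.choice([-1, 1], 2) *
                 rng.choice(np.arange(1, n_vars + 1), 2, replace=False))
           for _ in range(n_clauses)]

def fitness(assign):
    return sum(any((assign[abs(l) - 1] == 1) == (l > 0) for l in c)
               for c in clauses)

def tournament(pop, scores):
    i, j = rng.integers(pop_size, size=2)
    return pop[i] if scores[i] >= scores[j] else pop[j]

pop = rng.integers(0, 2, size=(pop_size, n_vars))
for gen in range(300):
    scores = np.array([fitness(a) for a in pop])
    new = [pop[scores.argmax()].copy()]            # elitism: keep the best
    while len(new) < pop_size:
        p1, p2 = tournament(pop, scores), tournament(pop, scores)
        cut = rng.integers(1, n_vars)              # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child[rng.random(n_vars) < 0.02] ^= 1      # bit-flip mutation
        new.append(child)
    pop = np.array(new)

print("best:", max(fitness(a) for a in pop), "of", n_clauses, "clauses")
```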