Boolean Dynamics with Random Couplings
This paper reviews a class of generic dissipative dynamical systems called
N-K models. In these models, the dynamics of N elements, defined as Boolean
variables, develop step by step, clocked by a discrete time variable. Each of
the N Boolean elements at a given time is given a value which depends upon K
elements in the previous time step.
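The update rule just described is easy to sketch in code. Below is a minimal simulation of a random N-K net with hypothetical parameter choices (N = 8, K = 2): each element gets K random inputs and a random Boolean lookup table, the whole state is updated synchronously, and since the state space is finite the trajectory must eventually fall onto a cycle, whose transient and period we measure.

```python
import itertools
import random

def kauffman_step(state, inputs, tables):
    """One synchronous update: element i reads its K inputs and looks up its new value."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])]
                 for i in range(len(state)))

def find_cycle(n=8, k=2, seed=0):
    """Run a random N-K net from a random start; return (transient length, cycle length)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]    # K distinct random inputs per element
    tables = [{bits: rng.randint(0, 1)                      # a random Boolean function of K bits
               for bits in itertools.product((0, 1), repeat=k)}
              for _ in range(n)]
    state = tuple(rng.randint(0, 1) for _ in range(n))
    seen = {}
    t = 0
    while state not in seen:                                # 2^N states, so a cycle is inevitable
        seen[state] = t
        state = kauffman_step(state, inputs, tables)
        t += 1
    return seen[state], t - seen[state]                     # (transient length, cycle length)
```

Averaging the returned cycle length over many random nets and seeds is exactly the kind of statistic the reviewed literature studies as N and K vary.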
We review the work of many authors on the behavior of the models, looking
particularly at the structure and lengths of their cycles, the sizes of their
basins of attraction, and the flow of information through the systems. In the
limit of infinite N, there is a phase transition between a chaotic and an
ordered phase, with a critical phase in between.
We argue that the behavior of this system depends significantly on the
topology of the network connections. If the elements are placed upon a lattice
with dimension d, the system shows correlations related to the standard
percolation or directed percolation phase transition on such a lattice. On the
other hand, a very different behavior is seen in the Kauffman net in which all
spins are equally likely to be coupled to a given spin. In this situation,
coupling loops are mostly suppressed, and the behavior of the system is much
more like that of a mean field theory.
We also describe possible applications of the models to, for example, genetic
networks, cell differentiation, evolution, democracy in social systems and
neural networks.
Comment: 69 pages, 16 figures, submitted to Springer Applied Mathematical Sciences Series.
Channel routing: Efficient solutions using neural networks
Neural network architectures are effectively applied to solve the channel routing problem. Algorithms for both two-layer and multilayer channel-width minimization, and for constrained via minimization, are proposed and implemented. Experimental results show that the proposed channel-width minimization algorithms are much superior in all respects to existing algorithms. Optimal two-layer solutions to most of the benchmark problems, not previously obtained, are obtained for the first time, including an optimal solution to the famous Deutsch's difficult problem. The optimal solution in four layers for one of the benchmark problems, not previously obtained, is also obtained for the first time. Both the convergence rate and the speed with which the simulations are executed are outstanding. A neural network solution to the constrained via minimization problem is also presented. In addition, a fast and simple linear-time algorithm is presented, possibly for the first time, for coloring the vertices of an interval graph, provided the line intervals are given.
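The final claim, coloring an interval graph when the intervals themselves are given, is usually realised by the classic greedy sweep over intervals sorted by left endpoint; the sweep itself is linear, with an O(n log n) sort up front. The sketch below is our own illustration of that standard technique, not code from the paper: a finished interval's colour is recycled, otherwise a fresh colour is opened, which uses exactly the clique number of colours.

```python
import heapq

def color_intervals(intervals):
    """Greedy colouring of an interval graph given as (left, right) closed intervals.

    Process intervals by left endpoint; reuse the colour of the interval that
    finished earliest, else open a new colour. The number of colours used
    equals the clique number, which is optimal for interval graphs.
    """
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][0])
    colors = [0] * len(intervals)
    active = []                  # min-heap of (right endpoint, colour), distinct colours
    next_color = 0
    for i in order:
        left, right = intervals[i]
        if active and active[0][0] < left:   # earliest-finishing interval has ended:
            _, c = heapq.heappop(active)     # its colour is free for reuse
        else:
            c = next_color                   # everything tracked still overlaps: new colour
            next_color += 1
        colors[i] = c
        heapq.heappush(active, (right, c))
    return colors
```

When no colour can be reused, every interval in the heap overlaps the current one, so the heap size at that moment is a clique; this is why the greedy count matches the lower bound.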
Toward a further understanding of object feature binding: a cognitive neuroscience perspective.
The aim of this thesis is to lead to a further understanding of the neural mechanisms underlying object feature binding in the human brain. The focus is on information processing and integration in the visual system and visual short-term memory. From a review of the literature it is clear that there are three major competing binding theories; however, none of these individually solves the binding problem satisfactorily. Thus the aim of this research is to conduct behavioural experimentation into object feature binding, paying particular attention to visual short-term memory.
The behavioural experiment was designed and conducted using a within-subjects delayed response task comprising a battery of sixty-four composite objects, each with three features and four dimensions, in each of three conditions (spatial, temporal and spatio-temporal). Findings from the experiment, which focus on spatial and temporal aspects of object feature binding and feature proximity on binding errors, support the spatial theories of object feature binding; in addition, we propose that temporal theories and convergence, through hierarchical feature analysis, are also involved. Because spatial properties have a dedicated processing neural stream, whereas temporal properties rely on limited-capacity memory systems, memories for sequential information are likely to be more difficult to recall accurately. Our study supports other studies which suggest that both spatial and temporal coherence, to differing degrees, may be involved in object feature binding. Traditionally, these theories have purported to provide individual solutions, but this thesis proposes a novel unified theory of object feature binding in which hierarchical feature analysis, spatial attention and temporal synchrony each play a role. It is further proposed that binding takes place in visual short-term memory through concerted and integrated information processing in distributed cortical areas. A cognitive model detailing this integrated proposal is given. Next, the cognitive model is used to inform the design and suggested implementation of a computational model which would be able to test the theory put forward in this thesis. In order to verify the model, future work is needed to implement the computational model. Thus it is argued that this doctoral thesis provides valuable experimental evidence concerning spatio-temporal aspects of the binding problem and as such is an additional building block in the quest for a solution to the object feature binding problem.
Unconventional computing platforms and nature-inspired methods for solving hard optimisation problems
The search for novel hardware beyond the traditional von Neumann architecture has given rise to a modern area of unconventional computing requiring the efforts of mathematicians, physicists and engineers. Many analogue physical systems, including networks of nonlinear oscillators, lasers, condensates, and superconducting qubits, are proposed and realised to address challenging computational problems from various areas of social and physical sciences and technology. Understanding the underlying physical process by which the system finds the solutions to such problems often leads to new optimisation algorithms. This thesis focuses on studying gain-dissipative systems and nature-inspired algorithms that form a hybrid architecture that may soon rival classical hardware.
Chapter 1 lays the necessary foundation and explains various interdisciplinary terms that are used throughout the dissertation. In particular, connections between the optimisation problems and spin Hamiltonians are established, their computational complexity classes are explained, and the most prominent physical platforms for spin Hamiltonian implementation are reviewed.
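One concrete instance of the connection between optimisation problems and spin Hamiltonians that the chapter establishes is the standard MAX-CUT mapping: with antiferromagnetic couplings J_ij = -1 on the edges of a graph, the ground state of the Ising energy H = -Σ_{i<j} J_ij s_i s_j places as many edges as possible between opposite spins. The brute-force sketch below is our own illustration of this textbook mapping, not code from the thesis.

```python
from itertools import product

def ising_energy(J, s):
    """H = -sum_{i<j} J[i][j] * s_i * s_j for spins s_i in {-1, +1}."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def max_cut_via_ising(edges, n):
    """Antiferromagnetic couplings on each edge make the Ising ground state a max cut."""
    J = [[0] * n for _ in range(n)]
    for i, j in edges:
        J[i][j] = J[j][i] = -1                     # cut edges lower the energy by 1 each
    best = min(product((-1, 1), repeat=n), key=lambda s: ising_energy(J, s))
    cut = sum(1 for i, j in edges if best[i] != best[j])
    return cut, best
```

The exhaustive minimum over 2^n spin states is of course only feasible for toy sizes; replacing it with a physical gain-dissipative simulator is precisely the premise of the thesis.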
Chapter 2 demonstrates a large variety of behaviours encapsulated in networks of polariton condensates, which are a vivid example of a gain-dissipative system we use throughout the thesis. We explain how the variations of experimentally tunable parameters allow the networks of polariton condensates to represent different oscillator models. We derive analytic expressions for the interactions between two spatially separated polariton condensates and show various synchronisation regimes for periodic chains of condensates. An odd number of condensates at the vertices of a regular polygon leads to a spontaneous formation of a giant multiply-quantised vortex at the centre of a polygon. Numerical simulations of all studied configurations of polariton condensates are performed with a mean-field approach with some theoretically proposed physical phenomena supported by the relevant experiments.
Chapter 3 examines the potential of polariton graphs to find the low-energy minima of spin Hamiltonians. By associating a spin with a condensate phase, the minima of the XY model are achieved for simple configurations of spatially-interacting polariton condensates. We argue that such an implementation of gain-dissipative simulators limits their applicability to classes of easily solvable problems, since the parameters of a particular Hamiltonian depend on the node occupancies, which are not known a priori. To overcome this difficulty, we propose to adjust pumping intensities and coupling strengths dynamically. We further suggest theoretically how the discrete Ising and q-state planar Potts models, with or without external fields, can be simulated using gain-dissipative platforms. The underlying operational principle originates from a combination of resonant and non-resonant pumping. Spatial anisotropy of pump and dissipation profiles enables effective control of the sign and intensity of the coupling strength between any two neighbouring sites, which we demonstrate with a two-dimensional square lattice of polariton condensates. For an accurate minimisation of discrete and continuous spin Hamiltonians, we propose a fully controllable polaritonic XY-Ising machine based on a network of geometrically isolated polariton condensates.
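Associating a spin with a condensate phase means the machine is minimising the XY Hamiltonian H = -Σ_{i<j} J_ij cos(θ_i - θ_j) over continuous phases. As a point of reference for what is being minimised, here is a plain gradient-descent sketch of that objective; this is an illustration of the cost landscape only, not the gain-dissipative condensate dynamics themselves, and the step size and step count are arbitrary choices of ours.

```python
import math

def xy_energy(J, theta):
    """H = -sum_{i<j} J[i][j] * cos(theta_i - theta_j) for continuous phases theta."""
    n = len(theta)
    return -sum(J[i][j] * math.cos(theta[i] - theta[j])
                for i in range(n) for j in range(i + 1, n))

def minimise_xy(J, theta, lr=0.1, steps=2000):
    """Plain gradient descent: dH/dtheta_i = sum_j J[i][j] * sin(theta_i - theta_j)."""
    n = len(theta)
    theta = list(theta)
    for _ in range(steps):
        grad = [sum(J[i][j] * math.sin(theta[i] - theta[j]) for j in range(n) if j != i)
                for i in range(n)]
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta
```

For two ferromagnetically coupled spins the phases simply align (energy -1); for frustrated graphs the descent can stall in local minima, which is exactly the failure mode the dynamical parameter adjustment proposed in the chapter is meant to address.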
In Chapter 4, we look at classical computing rivals and study nature-inspired methods for optimising spin Hamiltonians. Based on the operational principles of gain-dissipative machines, we develop a novel class of gain-dissipative algorithms for the optimisation of discrete and continuous problems and show their performance in comparison with traditional optimisation techniques. Besides looking at traditional heuristic methods for Ising minimisation, such as Hopfield-Tank neural networks and parallel tempering, we consider a recent physics-inspired algorithm, namely chaotic amplitude control, and an exact commercial solver, Gurobi. For a proper evaluation of physical simulators, we further discuss the importance of detecting easy instances of hard combinatorial optimisation problems. The Ising model for certain interaction matrices that are commonly used for evaluating the performance of unconventional computing machines and assumed to be exponentially hard, including Möbius ladder graphs and Mattis spin glasses, is shown to be solvable in polynomial time.
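Of the classical baselines named above, parallel tempering is the easiest to sketch: several Metropolis chains run at different temperatures and periodically attempt to swap configurations, so that hot replicas carry the cold one out of local minima. The minimal version below uses illustrative temperatures and sweep counts of our own choosing, not the settings benchmarked in the thesis.

```python
import math
import random

def energy(J, s):
    """Ising energy H = -sum_{i<j} J[i][j] * s_i * s_j."""
    n = len(s)
    return -sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))

def parallel_tempering(J, temps=(0.5, 1.0, 2.0, 4.0), sweeps=300, seed=1):
    """Return the lowest Ising energy seen by any replica."""
    rng = random.Random(seed)
    n = len(J)
    reps = [[rng.choice((-1, 1)) for _ in range(n)] for _ in temps]
    best = None
    for _ in range(sweeps):
        for s, T in zip(reps, temps):                 # one Metropolis sweep per replica
            for i in range(n):
                dE = 2 * s[i] * sum(J[i][j] * s[j] for j in range(n) if j != i)
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    s[i] = -s[i]
            e = energy(J, s)
            if best is None or e < best:
                best = e
        for k in range(len(temps) - 1):               # swap neighbouring temperatures
            e1, e2 = energy(J, reps[k]), energy(J, reps[k + 1])
            delta = (1 / temps[k] - 1 / temps[k + 1]) * (e1 - e2)
            if delta >= 0 or rng.random() < math.exp(delta):
                reps[k], reps[k + 1] = reps[k + 1], reps[k]
    return best
```

The swap acceptance exp((β_k - β_{k+1})(E_k - E_{k+1})) preserves detailed balance across the replica ladder, which is what distinguishes parallel tempering from independent annealing runs.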
In Chapter 5 we discuss possible future applications of unconventional computing platforms, including emulation of search algorithms such as PageRank, realisation of a proof-of-work protocol for blockchain technology, and reservoir computing.
Functional Role of Critical Dynamics in Flexible Visual Information Processing
Recent experimental and theoretical work has established the hypothesis that cortical neurons operate close to a critical state which signifies a phase transition from chaotic to ordered dynamics. Critical dynamics are suggested to optimize several aspects of neuronal information processing. However, although signatures of critical dynamics have been demonstrated in recordings of spontaneously active cortical neurons, little is known about how these dynamics are affected by task-dependent changes in neuronal activity when the cortex is engaged in stimulus processing. In fact, some in vivo investigations of the awake and active cortex report either an absence of signatures of criticality or relatively weak ones. In addition, the functional role of criticality in optimizing computation is often reported in abstract theoretical studies, adopting minimalistic models with homogeneous topology and slowly-driven networks. Consequently, there is a lack of concrete links between information theoretical benefits of the critical state and neuronal networks performing a behaviourally relevant task. In this thesis we explore such concrete links by focusing on the visual system, which needs to meet major computational challenges on a daily basis. Among others, the visual system is responsible for the rapid integration of relevant information from a large number of single channels, and in a flexible manner depending on the behavioral and environmental contexts. We postulate that critical neuronal dynamics in the form of cascades of activity spanning large populations of neurons may support such quick and complex computations. Specifically, we consider two notable examples of well-known phenomena in visual information processing: First the enhancement of object discriminability under selective attention, and second, a feature integration and figure-ground segregation scenario. 
In the first example, we model the top-down modulation of the activity of visuocortical neurons in order to selectively improve the processing of an attended region in a visual scene. In the second example, we model how neuronal activity may be modulated in a bottom-up fashion by the properties of the visual stimulus itself, which makes it possible to perceive different shapes and objects. We find in both scenarios that the task performance may be improved by employing critical networks. In addition, we suggest that the specific task- or stimulus-dependent modulations of information processing may be optimally supported by the tuning of relevant local neuronal networks towards or away from the critical point. Thus, the relevance of this dissertation is summarized by the following points: We formally extend the existing models of criticality to inhomogeneous systems subject to a strong external drive. We present concrete functional benefits for networks operating near the critical point in well-known experimental paradigms. Importantly, we find emergent critical dynamics only in the parts of the network which are processing the behaviourally relevant information. We suggest that the implied locality of critical dynamics in space and time may help explain why some studies report no signatures of criticality in the active cortex.
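The cascades of activity invoked in these abstracts are commonly formalised as a branching process: each active neuron activates on average σ others, and σ = 1 marks the critical point separating dying-out (σ < 1) from runaway (σ > 1) dynamics, with avalanche sizes growing sharply as σ approaches 1. The toy simulation below is our own generic illustration of that formalism, not the dissertation's network model.

```python
import random

def avalanche_size(sigma, rng, max_size=10_000):
    """Total size of one avalanche of a branching process with mean offspring sigma.

    Each active unit independently produces 0 or 2 offspring with mean sigma
    (requires sigma <= 2); the avalanche ends when no units remain active.
    """
    active, size = 1, 1
    while active and size < max_size:
        offspring = sum(rng.choices((0, 2), weights=(1 - sigma / 2, sigma / 2))[0]
                        for _ in range(active))
        active = offspring
        size += offspring
    return size

def mean_size(sigma, trials=2000, seed=0):
    """Monte Carlo estimate of the mean avalanche size; theory gives 1/(1 - sigma)."""
    rng = random.Random(seed)
    return sum(avalanche_size(sigma, rng) for _ in range(trials)) / trials
```

Deep in the subcritical regime the mean size stays close to the theoretical 1/(1 - σ), while near σ = 1 avalanches span a large fraction of the population, which is the regime argued to support rapid integration across many neurons.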
Recurrent neural network for optimization with application to computer vision.
by Cheung Kwok-wai. Thesis (M.Phil.)--Chinese University of Hong Kong, 1993. Includes bibliographical references (leaves [146-154]).
Chapter 1 --- Introduction
1.1 --- Programmed computing vs. neurocomputing
1.2 --- Development of neural networks - feedforward and feedback models
1.3 --- State of the art of applying recurrent neural networks towards computer vision problems
1.4 --- Objective of the Research
1.5 --- Plan of the thesis
Chapter 2 --- Background
2.1 --- Short history on the development of Hopfield-like neural networks
2.2 --- Hopfield network model
2.2.1 --- Neuron's transfer function
2.2.2 --- Updating sequence
2.3 --- Hopfield energy function and network convergence properties
2.4 --- Generalized Hopfield network
2.4.1 --- Network order and generalized Hopfield network
2.4.2 --- Associated energy function and network convergence property
2.4.3 --- Hardware implementation considerations
Chapter 3 --- Recurrent neural network for optimization
3.1 --- Mapping to Neural Network formulation
3.2 --- Network stability versus Self-reinforcement
3.2.1 --- Quadratic problem and Hopfield network
3.2.2 --- Higher-order case and reshaping strategy
3.2.3 --- Numerical Example
3.3 --- Local minimum limitation and existing solutions in the literature
3.3.1 --- Simulated Annealing
3.3.2 --- Mean Field Annealing
3.3.3 --- Adaptively changing neural network
3.3.4 --- Correcting Current Method
3.4 --- Conclusions
Chapter 4 --- A Novel Neural Network for Global Optimization - Tunneling Network
4.1 --- Tunneling Algorithm
4.1.1 --- Description of Tunneling Algorithm
4.1.2 --- Tunneling Phase
4.2 --- A Neural Network with tunneling capability - Tunneling network
4.2.1 --- Network Specifications
4.2.2 --- Tunneling function for Hopfield network and the corresponding updating rule
4.3 --- Tunneling network stability and global convergence property
4.3.1 --- Tunneling network stability
4.3.2 --- Global convergence property
4.3.2.1 --- Markov chain model for Hopfield network
4.3.2.2 --- Classification of the Hopfield Markov chain
4.3.2.3 --- Markov chain model for tunneling network and its convergence towards the global minimum
4.3.3 --- Variation of pole strength and its effect
4.3.3.1 --- Energy Profile analysis
4.3.3.2 --- Size of attractive basin and pole strength required
4.3.3.3 --- A new type of pole eases the implementation problem
4.4 --- Simulation Results and Performance comparison
4.4.1 --- Simulation Experiments
4.4.2 --- Simulation Results and Discussions
4.4.2.1 --- Comparisons on optimal path obtained and the convergence rate
4.4.2.2 --- On decomposition of Tunneling network
4.5 --- Suggested hardware implementation of Tunneling network
4.5.1 --- Tunneling network hardware implementation
4.5.2 --- Alternative implementation theory
4.6 --- Conclusions
Chapter 5 --- Recurrent Neural Network for Gaussian Filtering
5.1 --- Introduction
5.1.1 --- Silicon Retina
5.1.2 --- An Active Resistor Network for Gaussian Filtering of Images
5.1.3 --- Motivations of using recurrent neural networks
5.1.4 --- Difference between the active resistor network model and the recurrent neural network model for Gaussian filtering
5.2 --- From Problem formulation to Neural Network formulation
5.2.1 --- One Dimensional Case
5.2.2 --- Two Dimensional Case
5.3 --- Simulation Results and Discussions
5.3.1 --- Spatial impulse response of the 1-D network
5.3.2 --- Filtering property of the 1-D network
5.3.3 --- Spatial impulse response of the 2-D network and some filtering results
5.4 --- Conclusions
Chapter 6 --- Recurrent Neural Network for Boundary Detection
6.1 --- Introduction
6.2 --- From Problem formulation to Neural Network formulation
6.2.1 --- Problem Formulation
6.2.2 --- Recurrent Neural Network Model used
6.2.3 --- Neural Network formulation
6.3 --- Simulation Results and Discussions
6.3.1 --- Feasibility study and Performance comparison
6.3.2 --- Smoothing and Boundary Detection
6.3.3 --- Convergence improvement by network decomposition
6.3.4 --- Hardware implementation considerations
6.4 --- Conclusions
Chapter 7 --- Conclusions and Future Research
7.1 --- Contributions and Conclusions
7.2 --- Limitations and Suggested Future Research
References
Appendix I --- The assignment of the boundary connection of the 2-D recurrent neural network for Gaussian filtering
Appendix II --- Formula for connection weight assignment of the 2-D recurrent neural network for Gaussian filtering and the proof of the symmetric property
Appendix III --- Details on reshaping strategy
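The convergence property catalogued in Chapter 2 of this thesis rests on the standard Hopfield result: with symmetric weights and zero self-connections, the energy E = -½ Σ w_ij s_i s_j - Σ θ_i s_i never increases under asynchronous sign updates, so the network settles in a fixed point. A minimal generic sketch of that descent (our illustration, not the thesis's formulation), using Hebbian weights to store one pattern:

```python
def hopfield_energy(W, theta, s):
    """E = -1/2 * sum_ij W[i][j] s_i s_j - sum_i theta[i] s_i."""
    n = len(s)
    return (-0.5 * sum(W[i][j] * s[i] * s[j] for i in range(n) for j in range(n))
            - sum(theta[i] * s[i] for i in range(n)))

def run_hopfield(W, theta, s):
    """Asynchronous sign updates until no neuron changes; with symmetric W and
    zero diagonal each accepted flip lowers (or preserves) the energy."""
    s = list(s)
    changed = True
    while changed:
        changed = False
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s))) + theta[i]
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
    return s
```

Starting from a corrupted copy of a stored pattern, the descent recovers the pattern; the tunneling network of Chapter 4 is the thesis's device for escaping the spurious local minima this plain descent can get stuck in.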