Transformations in the Scale of Behaviour and the Global Optimisation of Constraints in Adaptive Networks
The natural energy minimisation behaviour of a dynamical system can be interpreted as a simple optimisation process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this approach seems to require top-down domain knowledge, and is not one amenable to the spontaneous energy minimisation behaviour of a natural dynamical system. However, in this paper we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organisation. We use a ‘self-modelling’ Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimisation behaviour of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, fully-distributed positive feedback mechanisms that are relevant to other ‘active linking’ and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behaviour in various non-neural adaptive networks, such as social, genetic and ecological networks.
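The baseline behaviour described above — a Hopfield network whose asynchronous updates monotonically lower an energy function, settling into a locally optimal resolution of constraints — can be sketched in a few lines. This is an illustrative toy with a random symmetric weight matrix standing in for the problem constraints, not the paper's self-modelling model:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(W, s):
    # Hopfield energy E = -1/2 * s^T W s (zero diagonal assumed)
    return -0.5 * s @ W @ s

# Random symmetric weights encode a set of pairwise constraints
n = 20
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

s = rng.choice([-1, 1], size=n)
e0 = energy(W, s)

# Asynchronous updates: each flip can only lower (or keep) the energy,
# so the state relaxes into a local minimum of the constraint energy.
for _ in range(10 * n):
    i = rng.integers(n)
    s[i] = 1 if W[i] @ s >= 0 else -1

assert energy(W, s) <= e0
```

The locally optimal states reached this way are the starting point that the paper's slow Hebbian changes to the weights then reshape over repeated relaxations.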
A study of pattern recovery in recurrent correlation associative memories
In this paper, we analyze the recurrent correlation associative memory (RCAM) model of Chiueh and Goodman. This is an associative memory in which stored binary memory patterns are recalled via an iterative update rule. The update of the individual pattern-bits is controlled by an excitation function, which takes as its argument the inner product between the stored memory patterns and the input pattern. Our contribution is to analyze the dynamics of pattern recall when the input patterns are corrupted by noise of a relatively unrestricted class. We make three contributions. First, we show how to identify the excitation function which maximizes the separation (the Fisher discriminant) between the uncorrupted realization of the noisy input pattern and the remaining patterns residing in the memory. Moreover, we show that the excitation function which gives maximum separation is exponential when the input bit-errors follow a binomial distribution. Our second contribution is to develop an expression for the expectation value of the bit-error probability on the input pattern after one iteration. We show how to identify the excitation function which minimizes the bit-error probability. However, there is no closed-form solution and the excitation function must be recovered numerically. The relationship between the excitation functions which result from the two different approaches is examined for a binomial distribution of bit-errors. The final contribution is to develop a semiempirical approach to the modeling of the dynamics of the RCAM. This provides us with a numerical means of predicting the recall error rate of the memory. It also allows us to develop an expression for the storage capacity for a given recall error rate.
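The RCAM recall dynamics analysed here can be sketched as follows. This is a minimal illustration assuming small random patterns and an exponential excitation function with base 2; the base and the pattern/network sizes are arbitrary choices for the demo, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 64, 5
patterns = rng.choice([-1, 1], size=(m, n))

def rcam_step(x, patterns, a=2.0):
    # Each stored pattern is weighted by an excitation function of its
    # inner product with the current state; the exponential form is the
    # one shown here to maximize Fisher separation under binomial noise.
    overlaps = patterns @ x          # inner products, each in [-n, n]
    weights = a ** overlaps.astype(float)
    return np.sign(weights @ patterns)

# Corrupt a stored pattern with a few bit flips, then iterate to recall
x = patterns[0].copy()
flip = rng.choice(n, size=5, replace=False)
x[flip] *= -1

for _ in range(5):
    x = rcam_step(x, patterns)

assert np.array_equal(x, patterns[0])
```

Because the exponential weighting makes the closest stored pattern dominate the update sum, the corrupted input snaps back to the stored memory within a few iterations.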
Genetic Algorithm for Restricted Maximum k-Satisfiability in the Hopfield Network
The restricted Maximum k-Satisfiability (MAX-kSAT) problem is an enhanced counterpart of Boolean satisfiability that has attracted a considerable amount of research. The genetic algorithm has been a prominent heuristic for solving constraint optimization problems. The core motivation of this paper is to introduce a Hopfield network incorporating a genetic algorithm for solving the MAX-kSAT problem. The genetic algorithm will be integrated with the Hopfield network as a single network. The proposed method will be compared with the conventional Hopfield network. The results demonstrate that the Hopfield network with the genetic algorithm outperforms the conventional Hopfield network. Furthermore, the outcome provides solid evidence of the robustness of our proposed algorithm for use in other satisfiability problems.
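As a rough illustration of the kind of genetic search involved, the sketch below runs a plain GA on a toy random MAX-2SAT instance, using the number of satisfied clauses as fitness. This is a generic GA with truncation selection, one-point crossover and bit-flip mutation — not the authors' hybrid Hopfield-GA network, and all sizes and rates are arbitrary:

```python
import random

random.seed(42)

# Toy MAX-2SAT instance: each clause is a pair of literals, where +i
# means variable i must be true and -i means it is negated (1-indexed).
n_vars = 10
clauses = [(random.choice([-1, 1]) * random.randint(1, n_vars),
            random.choice([-1, 1]) * random.randint(1, n_vars))
           for _ in range(30)]

def fitness(assign):
    # Number of satisfied clauses; the GA maximizes this count.
    return sum(1 for clause in clauses
               if any((lit > 0) == assign[abs(lit) - 1] for lit in clause))

def ga(pop_size=40, generations=100, p_mut=0.05):
    pop = [[random.random() < 0.5 for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randint(1, n_vars - 1)   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [not g if random.random() < p_mut else g
                     for g in child]              # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = ga()
print(fitness(best), "of", len(clauses), "clauses satisfied")
```

In the hybrid approach described above, this evolutionary search would operate on the neuron states of the Hopfield network rather than on a standalone bit-string population.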
Robust Artificial Immune System in the Hopfield network for Maximum k-Satisfiability
The Artificial Immune System (AIS) algorithm is a novel and vibrant computational paradigm inspired by the biological immune system. Over the last few years, the artificial immune system has been applied to numerous computational and combinatorial optimization problems. In this paper, we introduce the restricted MAX-kSAT as a constraint optimization problem that can be solved by a robust computational technique. Hence, we will implement the artificial immune system algorithm incorporated with the Hopfield neural network to solve the restricted MAX-kSAT problem. The proposed paradigm will be compared with the traditional method, a brute-force search algorithm integrated with the Hopfield neural network. The results demonstrate that the artificial immune system integrated with the Hopfield network outperforms the conventional Hopfield network in solving restricted MAX-kSAT. All in all, the results provide concrete evidence of the effectiveness of our proposed paradigm for application to other constraint optimization problems. The work presented here has many profound implications for future studies of the variety of satisfiability problems.
Memory formation in matter
Memory formation in matter is a theme of broad intellectual relevance; it
sits at the interdisciplinary crossroads of physics, biology, chemistry, and
computer science. Memory connotes the ability to encode, access, and erase
signatures of past history in the state of a system. Once the system has
completely relaxed to thermal equilibrium, it is no longer able to recall
aspects of its evolution. Memory of initial conditions or previous training
protocols will be lost. Thus many forms of memory are intrinsically tied to
far-from-equilibrium behavior and to transient response to a perturbation. This
general behavior arises in diverse contexts in condensed matter physics and
materials: phase change memory, shape memory, echoes, memory effects in
glasses, return-point memory in disordered magnets, as well as related contexts
in computer science. Yet, as opposed to the situation in biology, there is
currently no common categorization and description of the memory behavior that
appears to be prevalent throughout condensed-matter systems. Here we focus on
material memories. We will describe the basic phenomenology of a few of the
known behaviors that can be understood as constituting a memory. We hope that
this will be a guide towards developing the unifying conceptual underpinnings
for a broad understanding of memory effects that appear in materials.
An examination and analysis of the Boltzmann machine, its mean field theory approximation, and learning algorithm
It is currently believed that artificial neural network models may form the basis for intelligent computational devices. The Boltzmann Machine belongs to the class of recursive artificial neural networks and uses a supervised learning algorithm to learn the mapping between input vectors and desired outputs. This study examines the parameters that influence the performance of the Boltzmann Machine learning algorithm. Improving the performance of the algorithm through the use of a naïve mean field theory approximation is also examined. The study was initiated to examine the hypothesis that the Boltzmann Machine learning algorithm, when used with the mean field approximation, is an efficient, reliable, and flexible model of machine learning. An empirical analysis of the performance of the algorithm supports this hypothesis. The performance of the algorithm is investigated by applying it to train the Boltzmann Machine, and its mean field approximation, on the exclusive-OR function. Simulation results suggest that the mean field theory approximation learns faster than the Boltzmann Machine, and shows better stability. The size of the network and the learning rate were found to have considerable impact upon the performance of the algorithm, especially in the case of the mean field theory approximation. A comparison is made with the feed-forward back propagation paradigm and it is found that the back propagation network learns the exclusive-OR function eight times faster than the mean field approximation. However, the mean field approximation demonstrated better reliability and stability. Because the mean field approximation is local and asynchronous it has an advantage over back propagation with regard to a parallel implementation. The mean field approximation is domain independent and structurally flexible.
These features make the network suitable for use with a structural adaption algorithm, allowing the network to modify its architecture in response to the external environment.
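The naïve mean field approximation referred to above replaces each stochastic ±1 unit by its deterministic expected value, iterating the self-consistency equation m_i = tanh(b_i + Σ_j W_ij m_j) to a fixed point instead of Gibbs sampling. A minimal sketch of that fixed-point iteration, using arbitrary small random weights rather than a trained XOR network:

```python
import numpy as np

def mean_field(W, b, n_iter=200):
    # Deterministic mean-field update: each stochastic +-1 unit is
    # replaced by its expected value m_i = tanh(b_i + sum_j W_ij m_j),
    # iterated to a fixed point instead of running Gibbs sampling.
    m = np.zeros(len(b))
    for _ in range(n_iter):
        m = np.tanh(b + W @ m)
    return m

rng = np.random.default_rng(0)
n = 8
W = rng.normal(scale=0.1, size=(n, n))  # weak coupling -> contraction
W = (W + W.T) / 2                       # symmetric, zero-diagonal weights
np.fill_diagonal(W, 0.0)
b = rng.normal(scale=0.5, size=n)

m = mean_field(W, b)
# At the fixed point, m satisfies the self-consistency equation
assert np.allclose(m, np.tanh(b + W @ m), atol=1e-6)
```

It is this replacement of stochastic sampling by a deterministic, local, asynchronous update that underlies the speed and parallelism advantages discussed in the abstract.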
Principles of Neuromorphic Photonics
In an age overrun with information, the ability to process reams of data has
become crucial. The demand for data will continue to grow as smart gadgets
multiply and become increasingly integrated into our daily lives.
Next-generation industries in artificial intelligence services and
high-performance computing are so far supported by microelectronic platforms.
These data-intensive enterprises rely on continual improvements in hardware.
Their prospects are running up against a stark reality: conventional
one-size-fits-all solutions offered by digital electronics can no longer
satisfy this need, as Moore's law (exponential hardware scaling),
interconnection density, and the von Neumann architecture reach their limits.
With its superior speed and reconfigurability, analog photonics can provide
some relief to these problems; however, complex applications of analog
photonics have remained largely unexplored due to the absence of a robust
photonic integration industry. Recently, the landscape for
commercially-manufacturable photonic chips has been changing rapidly and now
promises to achieve economies of scale previously enjoyed solely by
microelectronics.
The scientific community has set out to build bridges between the domains of
photonic device physics and neural networks, giving rise to the field of
\emph{neuromorphic photonics}. This article reviews the recent progress in
integrated neuromorphic photonics. We provide an overview of neuromorphic
computing, discuss the associated technology (microelectronic and photonic)
platforms and compare their metric performance. We discuss photonic neural
network approaches and challenges for integrated neuromorphic photonic
processors while providing an in-depth description of photonic neurons and a
candidate interconnection architecture. We conclude with a future outlook of
neuro-inspired photonic processing. Comment: 28 pages, 19 figures
A ferrofluid based neural network: design of an analogue associative memory
We analyse an associative memory based on a ferrofluid, consisting of a
system of magnetic nano-particles suspended in a carrier fluid of variable
viscosity subject to patterns of magnetic fields from an array of input and
output magnetic pads. The association relies on forming patterns in the
ferrofluid during a training phase, in which the magnetic dipoles are free to
move and rotate to minimize the total energy of the system. Once equilibrated
in energy for a given input-output magnetic field pattern-pair the particles
are fully or partially immobilized by cooling the carrier liquid. The particle
distributions produced in this way control the memory states, which are read out
magnetically using spin-valve sensors incorporated in the output pads. The
actual memory consists of spin distributions that are dynamic in nature,
realized only in response to the input patterns that the system has been
trained for. Two training algorithms for storing multiple patterns are
investigated. Using Monte Carlo simulations of the physical system we
demonstrate that the device is capable of storing and recalling two sets of
images, each with an accuracy approaching 100%. Comment: submitted to Neural Networks