Adiabatic Quantum Optimization for Associative Memory Recall
Hopfield networks are a variant of associative memory that recall information
stored in the couplings of an Ising model. Stored memories are fixed points for
the network dynamics that correspond to energetic minima of the spin state. We
formulate the recall of memories stored in a Hopfield network using energy
minimization by adiabatic quantum optimization (AQO). Numerical simulations of
the quantum dynamics allow us to quantify the AQO recall accuracy with respect
to the number of stored memories and the noise in the input key. We also
investigate AQO performance with respect to how memories are stored in the
Ising model using different learning rules. Our results indicate that AQO
performance varies strongly with learning rule due to the changes in the energy
landscape. Consequently, learning rules offer indirect methods for
investigating changes to the computational complexity of the recall task and the
computational efficiency of AQO.
Comment: 22 pages, 11 figures. Updated for clarity and figures; to appear in
Frontiers of Physics
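
To fix ideas, here is a minimal classical sketch of the setup this abstract describes: memories stored in Ising couplings by the Hebbian rule, and recall as descent on the resulting energy. The greedy spin flips stand in for the quantum anneal; the sizes and the 10% key noise are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)
    N, P = 64, 5                          # spins (neurons) and stored memories
    memories = rng.choice([-1, 1], size=(P, N))

    # Hebbian learning rule: J_ij = (1/N) sum_p xi^p_i xi^p_j, no self-coupling
    J = (memories.T @ memories) / N
    np.fill_diagonal(J, 0.0)

    def energy(s):
        # Ising energy whose local minima include the stored memories
        return -0.5 * s @ J @ s

    # Noisy input key: memory 0 with 10% of its bits flipped
    key = memories[0].copy()
    key[rng.random(N) < 0.10] *= -1

    # Greedy single-spin descent stands in here for the quantum anneal
    s = key.copy()
    for _ in range(10):
        for i in rng.permutation(N):
            s[i] = 1 if J[i] @ s >= 0 else -1

    print(energy(s), (s @ memories[0]) / N)   # final energy and recall overlap

At this low memory load the overlap returns to 1; as P grows or the key noise increases, spurious minima appear and recall accuracy degrades, which is the regime the paper probes with AQO.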
New Insights on Learning Rules for Hopfield Networks: Memory and Objective Function Minimisation
Hopfield neural networks are a possible basis for modelling associative
memory in living organisms. After summarising previous studies in the field, we
take a new look at learning rules, exhibiting them as descent-type algorithms
for various cost functions. We also propose several new cost functions suitable
for learning. We discuss the role of biases (the external inputs) in the
learning process in Hopfield networks. Furthermore, we apply Newton's method for
learning memories, and experimentally compare the performances of various
learning rules. Finally, to add to the debate on whether allowing connections of a
neuron to itself enhances memory capacity, we numerically investigate the
effects of self-coupling.
Keywords: Hopfield networks, associative memory, content-addressable memory,
learning rules, gradient descent, attractor networks
Comment: 8 pages, IEEE Xplore, 2020 International Joint Conference on Neural
Networks (IJCNN), Glasgow
Implementation of dynamical systems with plastic self-organising velocity fields
As an alternative to neural networks, dynamical systems with plastic, self-organising
vector fields were recently introduced to describe learning. Such a system
automatically modifies its velocity vector field in response to external stimuli. In
the simplest case, under certain conditions, its vector field develops into the gradient
of a multi-dimensional probability density distribution of the stimuli. We illustrate
with examples how such a system carries out categorisation, pattern recognition,
memorisation and forgetting without any supervision. [Continues.]
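
A minimal sketch of one way such a field can arise, assuming the learned field is the gradient of a Gaussian kernel density estimate of past stimuli (the kernel width and cluster parameters are illustrative): states then drift toward frequently seen inputs, which is the memorisation and recall behaviour described.

    import numpy as np

    rng = np.random.default_rng(2)
    sigma = 0.3                                        # kernel width (assumed)
    stimuli = rng.normal([1.0, -1.0], 0.2, (200, 2))   # past external stimuli

    def velocity(x):
        # Plastic velocity field: gradient of a Gaussian kernel density
        # estimate of the stimuli, so states flow toward frequent inputs.
        d = stimuli - x                                # (n, 2) displacements
        w = np.exp(-(d ** 2).sum(axis=1) / (2 * sigma ** 2))
        return (w[:, None] * d).sum(axis=0) / (len(stimuli) * sigma ** 2)

    # Recall/categorisation: a state drifts onto the learned cluster
    x = np.array([0.6, -0.7])
    for _ in range(300):
        x = x + 0.05 * velocity(x)
    print(x)                                           # near the cluster mean

Forgetting follows naturally in this picture if the density estimate decays over time while new stimuli keep depositing mass.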
Improving Associative Memory in a Network of Spiking Neurons
In this thesis we use computational neural network models to examine the dynamics and functionality of the CA3 region of the mammalian hippocampus. The emphasis of the project is to investigate how the dynamic control structures provided by inhibitory circuitry and cellular modification may affect the CA3 region during the recall of previously stored information. The CA3 region is commonly thought to work as a recurrent auto-associative neural network due to the neurophysiological characteristics found there, such as recurrent collaterals, strong and sparse synapses from external inputs, and plasticity between coactive cells. Associative memory models have been developed using various configurations of mathematical artificial neural networks, first developed over 40 years ago. Within these models we can store information via changes in the strength of connections between simplified two-state model neurons. These memories can be recalled when a cue (noisy or partial) is instantiated upon the net. The type of information they can store is quite limited due to restrictions caused by the simplicity of the hard-limiting nodes, which are commonly associated with a binary activation threshold. We build a much more biologically plausible model with complex spiking cell models and with realistic synaptic properties between cells. This model is based upon some of the many details we now know of the neuronal circuitry of the CA3 region. We implemented the model in computer software using NEURON and MATLAB and tested it by running simulations of storage and recall in the network. By building this model we gain new insights into how different types of neurons, and the complex circuits they form, actually work.
The mammalian brain consists of complex resistive-capacitive electrical circuitry formed by the interconnection of large numbers of neurons. A principal cell type is the pyramidal cell within the cortex, which is the main information processor in our neural networks. Pyramidal cells are surrounded by diverse populations of interneurons, which are proportionally fewer in number than the pyramidal cells and form connections with pyramidal cells and with other inhibitory cells. By building detailed computational models of recurrent neural circuitry we explore how these microcircuits of interneurons control the flow of information through pyramidal cells and regulate the efficacy of the network. We also explore the effect of cellular modification due to neuronal activity, and the effect of incorporating spatially dependent connectivity on the network during recall of previously stored information.
In particular we implement a spiking neural network proposed by Sommer and Wennekers (2001). We consider methods for improving associative memory recall, inspired by the work of Graham and Willshaw (1995), who applied mathematical transforms to an artificial neural network to improve the recall quality within the network. The networks tested contain either 100 or 1000 pyramidal cells, with 10% connectivity applied, a partial cue instantiated, and a global pseudo-inhibition. We investigate three methods. Firstly, applying localised disynaptic inhibition, which proportionally scales the excitatory postsynaptic potentials and provides a fast-acting reversal potential; this should help to reduce the variability in signal propagation between cells and provide further inhibition to help synchronise the network activity. Secondly, adding a persistent sodium channel to the cell body, which makes the activation threshold non-linear: beyond a given membrane potential the amplitude of the excitatory postsynaptic potential (EPSP) is boosted, pushing cells that receive slightly more excitation (most likely high units) over the firing threshold. Finally, incorporating spatial characteristics of the dendritic tree, which allows a greater probability that a modified synapse exists after 10% random connectivity has been applied throughout the network. We apply these spatial characteristics by scaling the conductance weights of excitatory synapses to simulate the loss of potential at synapses in the outer dendritic regions due to increased resistance.
To further increase the biological plausibility of the network we remove the pseudo-inhibition and apply realistic basket cell models with differing configurations for a global inhibitory circuit. The networks are configured with: a single basket cell providing feedback inhibition; 10% basket cells providing feedback inhibition, where 10 pyramidal cells connect to each basket cell; and 100% basket cells providing feedback inhibition. These networks are compared and contrasted for their effect on recall quality and network behaviour. We have found promising results from applying biologically plausible recall strategies and network configurations, which suggests that inhibition and cellular dynamics play a pivotal role in learning and memory.
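
For reference, a sketch of the artificial-network analogue underlying the spiking model: a Willshaw-style binary net with 10% connectivity, a partial cue, and global inhibition realised as a winner-take-all threshold on the dendritic sums. This is not the NEURON model itself; the sizes and cue fraction are illustrative.

    import numpy as np

    rng = np.random.default_rng(3)
    N, M, active = 1000, 50, 100       # cells, stored patterns, active units

    # Sparse binary patterns; 10% random connectivity as in the tested nets
    patterns = np.zeros((M, N), dtype=int)
    for p in patterns:
        p[rng.choice(N, active, replace=False)] = 1
    connect = rng.random((N, N)) < 0.10

    # Clipped Hebbian (Willshaw) weights, kept only where connections exist
    W = (((patterns.T @ patterns) > 0) & connect).astype(int)

    # Partial cue: roughly half of pattern 0's active units
    cue = patterns[0] * (rng.random(N) < 0.5)
    sums = W @ cue                     # dendritic sums
    # Global inhibition as a winner-take-all threshold on the top sums
    thresh = np.sort(sums)[-active]
    recall = (sums >= thresh).astype(int)
    print((recall & patterns[0]).sum() / active)   # fraction correctly recalled

The three methods above can all be read as refinements of this thresholding step: disynaptic inhibition and persistent sodium channels reshape how the dendritic sums map to firing, and spatial weight scaling reshapes the sums themselves.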
Global adaptation in networks of selfish components: emergent associative memory at the system scale
In some circumstances, complex adaptive systems composed of numerous self-interested agents can self-organise into structures that enhance global adaptation, efficiency or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalisation and optimisation are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g. Hebbian learning) that create these neural organisations may be selected for this purpose, but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviours when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive-feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g. when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximise their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviours as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalise by idealising stored patterns and/or creating new combinations of sub-patterns. Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviours in the same sense, and by the same mechanism, as the organisational principles familiar in connectionist models of organismic learning.
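
The central claim admits a compact toy demonstration (all parameters illustrative): if each agent adjusts its incoming relationship weights by gradient ascent on its own utility u_i = s_i * sum_j w_ij s_j, the update dw_ij = eta * s_i * s_j is exactly Hebbian, so the relationship matrix comes to store visited configurations as attractors.

    import numpy as np

    rng = np.random.default_rng(4)
    n, eta = 30, 0.01
    s = rng.choice([-1, 1], size=n)        # agent behaviours
    w = rng.normal(0, 0.1, (n, n))         # inter-agent relationships
    np.fill_diagonal(w, 0.0)

    for _ in range(5000):
        i = rng.integers(n)
        # Agents follow local incentives given the current relationships
        s[i] = 1 if w[i] @ s > 0 else -1
        # Agent i ascends its own utility u_i = s_i * (w[i] @ s):
        # du_i / dw_ij = s_i * s_j, which is exactly a Hebbian update
        w[i] += eta * s[i] * s
        w[i, i] = 0.0

    # The relationship matrix now holds the visited configuration as an
    # attractor: the state is a self-consistent fixed point of sign(w s)
    print(np.sign(w @ s) @ s / n)          # close to 1

No agent here is trying to build a memory; the system-scale associative memory emerges purely from each agent's selfish weight adaptation, which is the paper's point.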
Towards a continuous dynamic model of the Hopfield theory on neuronal interaction and memory storage
The purpose of this work is to study the Hopfield model for neuronal
interaction and memory storage, in particular the convergence to the stored
patterns. Since the hypothesis of symmetric synapses does not hold for the
brain, we will study how the model can be extended to the case of asymmetric
synapses using a probabilistic approach. We then focus on the description
of another feature of the memory process and of the brain: oscillations. Using
the Kuramoto model we will be able to describe them completely, recovering
the synchronization between neurons. Our aim is therefore to
understand how and why neurons can be seen as oscillators and to establish
a strong link between this model and the Hopfield approach.
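
A minimal sketch of the intended link, with assumed sizes and rates: Kuramoto oscillators coupled through a Hebbian matrix recall a stored Hopfield pattern in their relative phases.

    import numpy as np

    rng = np.random.default_rng(5)
    N = 50
    xi = rng.choice([-1, 1], size=N)           # one stored pattern
    K = np.outer(xi, xi) / N                   # Hebbian coupling matrix

    theta = rng.uniform(0, 2 * np.pi, N)       # oscillator phases
    omega = rng.normal(0, 0.01, N)             # small natural frequencies
    dt = 0.05

    for _ in range(4000):
        # Kuramoto dynamics: dtheta_i/dt = omega_i
        #                                  + sum_j K_ij sin(theta_j - theta_i)
        dtheta = omega + (K * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * dtheta

    # Units with xi_i = +1 lock in anti-phase with those where xi_i = -1,
    # so the pattern is recalled in the relative phases.
    print(np.abs(np.mean(xi * np.exp(1j * theta))))   # overlap close to 1

Here in-phase versus anti-phase clusters play the role of the up/down spins of the Hopfield network, which is one way neurons "seen as oscillators" can still store and recall patterns.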
Memory formation in matter
Memory formation in matter is a theme of broad intellectual relevance; it
sits at the interdisciplinary crossroads of physics, biology, chemistry, and
computer science. Memory connotes the ability to encode, access, and erase
signatures of past history in the state of a system. Once the system has
completely relaxed to thermal equilibrium, it is no longer able to recall
aspects of its evolution. Memory of initial conditions or previous training
protocols will be lost. Thus many forms of memory are intrinsically tied to
far-from-equilibrium behavior and to transient response to a perturbation. This
general behavior arises in diverse contexts in condensed matter physics and
materials: phase change memory, shape memory, echoes, memory effects in
glasses, return-point memory in disordered magnets, as well as related contexts
in computer science. Yet, as opposed to the situation in biology, there is
currently no common categorization and description of the memory behavior that
appears to be prevalent throughout condensed-matter systems. Here we focus on
material memories. We will describe the basic phenomenology of a few of the
known behaviors that can be understood as constituting a memory. We hope that
this will be a guide towards developing the unifying conceptual underpinnings
for a broad understanding of memory effects that appear in materials.
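
One of the effects listed above, return-point memory, can be reproduced with a toy Preisach model: a population of two-state hysterons with individual switching thresholds. The threshold distributions and field protocol below are illustrative.

    import numpy as np

    rng = np.random.default_rng(6)
    n = 1000
    up = rng.uniform(0.0, 1.0, n)          # upper switching fields
    down = up - rng.uniform(0.0, 1.0, n)   # lower switching fields (down < up)
    state = -np.ones(n)                    # all hysterons start "down"

    def magnetisation(H):
        # A hysteron flips only when the field crosses its own threshold;
        # the surviving flip history is the material's memory.
        state[H >= up] = 1.0
        state[H <= down] = -1.0
        return state.sum()

    # Drive the field down to 0.3, up to 0.7, and back down to 0.3: the
    # subloop closes and the magnetisation returns exactly to its value at
    # the first visit of 0.3 -- return-point memory.
    M = [magnetisation(H) for H in (1.0, 0.3, 0.7, 0.3)]
    print(M[1], M[3], M[1] == M[3])

The memory lives in the far-from-equilibrium hysteron configuration; if the system relaxed to equilibrium, the record of the turning points would be erased, as the abstract notes.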