Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey is focused mainly on the networks of Hopfield,
Willshaw and Potts, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. We
discuss relations to similarity search, the advantages and drawbacks of these
techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
Comment: 31 pages
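As a minimal illustration of such a model, the classic Hopfield-style autoassociative memory with a local (Hebbian) learning rule and iterative recall dynamics can be sketched as follows; the pattern count, dimension, and noise level are arbitrary choices for this example:

```python
import numpy as np

def train_hopfield(patterns):
    # Hebbian (local) learning: W accumulates outer products of stored patterns
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, probe, steps=10):
    # Iterative dynamics: each neuron updates from locally available input
    s = probe.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))  # 3 bipolar patterns, 64 neurons
W = train_hopfield(patterns)
noisy = patterns[0].copy()
noisy[:8] *= -1  # corrupt 8 of 64 bits
restored = recall(W, noisy)
print(int((restored == patterns[0]).sum()), "of 64 bits recovered")
```

At this low load (3 patterns in 64 neurons, well under the classical ~0.14N capacity), recall of a moderately corrupted probe typically converges back to the stored pattern.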
The Performance of Associative Memory Models with Biologically Inspired Connectivity
This thesis is concerned with one important question in artificial neural networks, that is, how biologically inspired connectivity of a network affects its associative memory performance.
In recent years, research on the mammalian cerebral cortex, which bears the
main responsibility for associative memory function in the brain, suggests
that this cortical network is far from the full connectivity commonly assumed
in traditional associative memory models. It is found to be a sparse network
with interesting "small world" characteristics, represented by a short Mean
Path Length, a high Clustering Coefficient, and high Global and Local
Efficiency. Most of the networks in this thesis are therefore sparsely connected.
There is, however, no conclusive evidence of how these different connectivity
characteristics affect the associative memory performance of a network. This
thesis addresses that question using networks with different types of
connectivity inspired by biological evidence.
The findings of this programme are unexpected and important. Results show
that the performance of a non-spiking associative memory model can be
predicted from the network's Clustering Coefficient via a linear correlation,
regardless of the detailed connectivity patterns. This is particularly important
because the Clustering Coefficient is a static measure of one aspect of
connectivity, whilst the associative memory performance reflects the result of a
complex dynamic process.
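For reference, the Clustering Coefficient used as a predictor above can be computed directly from a network's adjacency matrix; a small sketch, assuming undirected, unweighted graphs:

```python
import numpy as np

def clustering_coefficient(A):
    """Average local clustering coefficient of an undirected graph
    given as a 0/1 adjacency matrix with no self-loops."""
    coeffs = []
    for i in range(A.shape[0]):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)  # undefined for k < 2; counted as 0 here
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2  # edges among the neighbours
        coeffs.append(2.0 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
chain = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(clustering_coefficient(triangle))  # fully closed triangle: 1.0
print(clustering_coefficient(chain))     # no closed triangles: 0.0
```

The measure is static, computed once from the wiring alone, which is what makes its correlation with a dynamic recall process notable.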
On the other hand, this research reveals that improvements in the performance
of a network do not necessarily rely on an increase in the network's wiring
cost. It is therefore possible to construct networks with high associative
memory performance but relatively low wiring cost. In particular, Gaussian
distributed connectivity is found to achieve the best performance with the
lowest wiring cost among all examined connectivity models.
Our results from this programme also suggest that a modular network with an
appropriate configuration of Gaussian distributed connectivity, both internal to
each module and across modules, can perform nearly as well as the Gaussian
distributed non-modular network.
Finally, a comparison between non-spiking and spiking associative memory
models suggests that in terms of associative memory performance, the
implication of connectivity seems to transcend the details of the actual neural
models, that is, whether they use spiking or non-spiking neurons.
Design of Oscillatory Neural Networks by Machine Learning
We demonstrate the utility of machine learning algorithms for the design of
Oscillatory Neural Networks (ONNs). After constructing a circuit model of the
oscillators in a machine-learning-enabled simulator and performing
Backpropagation through time (BPTT) for determining the coupling resistances
between the ring oscillators, we show the design of associative memories and
multi-layered ONN classifiers. The machine-learning-designed ONNs show superior
performance compared to other design methods (such as Hebbian learning) and
they also enable significant simplifications in the circuit topology. We
demonstrate the design of multi-layered ONNs that show superior performance
compared to single-layer ones. We argue that machine learning can unlock the
true computing potential of ONN hardware.
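For intuition, the Hebbian-learning baseline that the abstract compares against can be sketched as a Kuramoto-style phase-oscillator associative memory; the network size, step size, and noise level below are illustrative assumptions, and the paper's circuit-level BPTT design is not reproduced here:

```python
import numpy as np

def simulate_onn(J, phi0, dt=0.05, steps=400):
    # Phase dynamics: dphi_i/dt = sum_j J_ij * sin(phi_j - phi_i)
    phi = phi0.copy()
    for _ in range(steps):
        phi = phi + dt * (J * np.sin(phi[None, :] - phi[:, None])).sum(axis=1)
    return phi

rng = np.random.default_rng(1)
pattern = rng.choice([-1.0, 1.0], size=16)
J = np.outer(pattern, pattern) / 16.0   # Hebbian couplings store the pattern
np.fill_diagonal(J, 0.0)
# Encode the pattern as in-phase / anti-phase clusters, plus phase noise
phi0 = (pattern * np.pi / 2) + rng.normal(0.0, 0.4, 16)
phi = simulate_onn(J, phi0)
# Decode: oscillators in phase with oscillator 0 share pattern[0]'s sign
recovered = np.where(np.cos(phi - phi[0]) > 0, pattern[0], -pattern[0])
print((recovered == pattern).all())
```

The settling of relative phases plays the role of attractor-based recall; the abstract's point is that learned couplings outperform this Hebbian prescription.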
Echo State Queueing Network: a new reservoir computing learning tool
In the last decade, a new computational paradigm was introduced in the field
of Machine Learning, under the name of Reservoir Computing (RC). RC models are
neural networks with a recurrent part (the reservoir) that does not
participate in the learning process, while the rest of the system contains no
recurrence (no neural circuit). This approach has grown rapidly due to
its success in solving learning tasks and other computational applications.
Some success was also observed with another recently proposed neural network
designed using Queueing Theory, the Random Neural Network (RandNN). Both
approaches have good properties and identified drawbacks. In this paper, we
propose a new RC model called Echo State Queueing Network (ESQN), where we use
ideas coming from RandNNs for the design of the reservoir. ESQNs consist of
ESNs whose reservoir has new dynamics inspired by recurrent RandNNs. The
paper positions ESQNs in the global Machine Learning area, and provides
examples of their use and performances. We show on largely used benchmarks that
ESQNs are very accurate tools, and we illustrate how they compare with standard
ESNs.
Comment: Proceedings of the 10th IEEE Consumer Communications and Networking Conference (CCNC), Las Vegas, USA, 201
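For context, a standard ESN (the model that ESQNs modify) can be sketched in a few lines: the recurrent reservoir is random and fixed, and only a linear readout is trained. The sizes, spectral radius, and toy task below are illustrative assumptions, not the ESQN dynamics:

```python
import numpy as np

rng = np.random.default_rng(2)
n_res = 100
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u_seq):
    # The reservoir weights are fixed and never trained
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in[:, 0] * u)
        states.append(x.copy())
    return np.array(states)

# Toy next-step prediction task; only the linear readout is fit (ridge)
u = np.sin(0.2 * np.arange(500))
X = run_reservoir(u[:-1])
y = u[1:]
washout = 50                        # discard the initial transient
A = X[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ y[washout:])
pred = X @ W_out
mse = float(np.mean((pred[washout:] - y[washout:]) ** 2))  # small on this task
```

An ESQN keeps this fixed-reservoir / trained-readout split but replaces the tanh state update with queueing-network dynamics taken from the RandNN.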
Hardware Architectures and Implementations for Associative Memories : the Building Blocks of Hierarchically Distributed Memories
During the past several decades, the semiconductor industry has grown into a global industry with revenues around $300 billion. Intel no longer relies only on transistor scaling for higher CPU performance, but instead focuses more on multiple cores on a single die. It has been projected that in 2016 most CMOS circuits will be manufactured with a 22 nm process. These CMOS circuits will have a large number of defects. Especially as transistors shrink below the sub-micron scale, originally deterministic circuits will start to exhibit probabilistic characteristics. Hence, it would be challenging to map traditional computational models onto probabilistic circuits, suggesting a need for fault-tolerant computational algorithms. Biologically inspired algorithms, or associative memories (AMs), the building blocks of the cortical hierarchically distributed memories (HDMs) discussed in this dissertation, exhibit a remarkable match to nano-scale electronics, in addition to great fault tolerance. Research on the potential mapping of the HDM onto CMOL (hybrid CMOS/nanoelectronic circuits) nanogrids provides useful insight into the development of non-von Neumann neuromorphic architectures and the semiconductor industry. In this dissertation, we investigated implementations of AMs on different hardware platforms, including a microprocessor-based personal computer (PC), a PC cluster, field programmable gate arrays (FPGAs), CMOS, and CMOL nanogrids.
We studied two types of neural associative memory models, with and without temporal information. In this research, we first decomposed the computational models into basic and common operations, such as matrix-vector inner products and k-winners-take-all (k-WTA). We then analyzed the baseline performance/price ratio of implementing the AMs on a PC. We continued with a similar performance/price analysis of implementations on more parallel hardware platforms, such as a PC cluster and an FPGA. However, the majority of the research emphasized implementations with all-digital and mixed-signal full-custom CMOS and CMOL nanogrids.
In this dissertation, we draw the conclusion that the mixed-signal CMOL nanogrids exhibit the best performance/price ratio among the hardware platforms considered. We also highlighted some of the trade-offs between dedicated and virtualized hardware circuits for the HDM models. A simple time-multiplexing scheme for the digital CMOS implementations can achieve throughput comparable to the mixed-signal CMOL nanogrids.
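The two primitive operations named above, matrix-vector inner products and k-WTA, are easy to state in code; a minimal sketch with illustrative array sizes and values:

```python
import numpy as np

def k_wta(v, k):
    # k-winners-take-all: keep the k largest activations, zero the rest
    out = np.zeros_like(v)
    winners = np.argpartition(v, -k)[-k:]
    out[winners] = v[winners]
    return out

# One associative-memory recall step: inner products, then k-WTA
patterns = np.array([[1, 0, 1, 0],
                     [0, 1, 0, 1],
                     [1, 1, 0, 0]], dtype=float)
probe = np.array([1.0, 0.0, 1.0, 1.0])
activation = patterns @ probe   # inner product with each stored pattern
print(k_wta(activation, 1))     # only the best-matching pattern fires
```

Decomposing the models this way is what lets the same algorithm be profiled across PCs, FPGAs, and full-custom circuits.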
Power System Load Modeling Using A Weighted Optimal Linear Associative Memory (Olam)
Power system load models are very powerful tools with a wide range of applications in the electric power industry. These uses include scheduling system maintenance, monitoring load management policies, helping with the generator commitment problem by providing short-term forecasts, and aiding system planning [4]. Further, power system load modeling is a technique used to model a power system and other essentials for the assessment of stability. In today's datacenters, power consumption is a major issue, and storage typically comprises a large percentage of a datacenter's power. Managing, understanding, and reducing storage power consumption is therefore an essential aspect of any effort that addresses the total power consumption of datacenters. Moreover, according to [16], power system load models have a wide range of applications in the electric power industry, including monitoring load management policies, aiding system planning by providing long-term and short-term forecasts, and assisting with the generator commitment problem.
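As an illustration of the OLAM idea, recall uses a linear map W = Y X+ built with the Moore-Penrose pseudoinverse; the per-sample weighting shown and the toy load numbers are assumptions for this sketch, not values from the work above:

```python
import numpy as np

def train_olam(X, Y, w=None):
    # OLAM recall is y = W x with W = Y X+ (Moore-Penrose pseudoinverse).
    # Columns of X are input patterns, columns of Y the associated outputs.
    # w (assumed per-sample weights) gives a weighted variant by scaling
    # each training column by sqrt(w_i) before the least-squares fit.
    if w is not None:
        s = np.sqrt(w)
        X, Y = X * s, Y * s
    return Y @ np.linalg.pinv(X)

# Toy short-term load example (hypothetical numbers): bias + normalised
# hour-of-day feature mapped to load in MW
X = np.array([[1.0, 1.0, 1.0, 1.0],
              [0.0, 0.25, 0.5, 0.75]])
Y = np.array([[40.0, 55.0, 70.0, 85.0]])   # observed load (MW)
W = train_olam(X, Y)
print(W @ np.array([1.0, 0.5]))            # predicted load at mid-range hour
```

Because the fit is a closed-form least-squares solution, retraining as new load data arrives is cheap, which suits short-term forecasting.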