A Morphological Associative Memory Employing A Stored Pattern Independent Kernel Image and Its Hardware Model
An associative memory provides a convenient way to retrieve and restore patterns, which plays an important role in handling data distorted by noise. As an effective associative memory, we focus on the morphological associative memory (MAM) proposed by Ritter. The model is superior to ordinary associative memory models in terms of computational cost, memory capacity, and perfect recall rate. In general, however, kernel design becomes difficult as the number of stored patterns increases, because the kernel uses a part of each stored pattern. In this paper, we propose a stored-pattern-independent kernel design method for the MAM and implement the MAM with the proposed kernel in standard digital logic with a parallel architecture for acceleration. We confirm the validity of the proposed kernel design method through auto- and hetero-association experiments and investigate the efficiency of the hardware acceleration. The custom hardware achieves high-speed operation (more than 150 times faster than software execution). The proposed model works as an intelligent pre-processor for Brain-Inspired Systems (Brain-IS) operating in the real world.
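The abstract does not reproduce Ritter's construction, but the basic morphological associative memory it builds on can be sketched in a few lines. In the standard auto-associative formulation, the memory matrix has entries m_ij = min over stored patterns of (x_i - x_j), and recall is a max-plus product, y_i = max_j (m_ij + x_j); this guarantees perfect recall of every undistorted stored pattern. The function and variable names below are illustrative, not from the paper:

```python
import numpy as np

def train_mam(X):
    """Build Ritter's W_XX auto-associative memory.

    X: array of shape (n_patterns, dim), one stored pattern per row.
    Entry m[i, j] = min over patterns of (x_i - x_j).
    """
    diffs = X[:, :, None] - X[:, None, :]   # shape (n_patterns, dim, dim)
    return diffs.min(axis=0)

def recall_mam(M, x):
    """Max-plus recall: y_i = max_j (m[i, j] + x_j)."""
    return (M + x[None, :]).max(axis=1)

# Two small stored patterns (toy example).
X = np.array([[0.0, 3.0, 1.0, 2.0],
              [2.0, 0.0, 4.0, 1.0]])
M = train_mam(X)
recalled = [recall_mam(M, x) for x in X]
```

Perfect recall of uncorrupted inputs follows directly from the algebra (the j = i term always attains the stored value), and this min-based memory is known to tolerate certain kinds of input noise; the paper's contribution concerns how the recall *kernel* is designed, which this sketch does not cover.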
The Hopfield model and its role in the development of synthetic biology
Neural network models make extensive use of concepts from physics and engineering. How do scientists justify the use of these concepts in the representation of biological systems? How is evidence for or against the use of these concepts produced in the application and manipulation of the models? This article shows that neural network models are evaluated differently depending on the scientific context and its modeling practice. In the case of the Hopfield model, the different modeling practices of theoretical physics and neurobiology played a central role in how the model was received and used in the two scientific communities. In theoretical physics, where the Hopfield model has its roots, mathematical modeling is far more common and established than in neurobiology, which is strongly experiment-driven. These differences in modeling practice contributed to the development of the new field of synthetic biology, which introduced a third type of model that combines mathematical modeling with experimentation on biological systems and thereby mediates between the different modeling practices.
A linear approach for sparse coding by a two-layer neural network
Many approaches that transform classification problems from non-linear to linear by feature transformation have recently been presented in the literature, notably sparse coding methods and deep neural networks. However, many of these approaches require repeating a learning process each time unseen input vectors are presented, or else involve large numbers of parameters and hyper-parameters that must be chosen through cross-validation, dramatically increasing running time. In this paper, we propose and experimentally investigate a new approach that overcomes limitations of both kinds. The proposed approach uses a linear auto-associative network (called SCNN) with just one hidden layer. Combining this architecture with a specific error function to be minimized makes it possible to learn a linear encoder that computes a sparse code as similar as possible to the one obtained by re-training the neural network. Importantly, the linearity of SCNN and the choice of error function reduce running time in the learning phase. The proposed architecture is evaluated on two standard machine learning tasks, and its performance is compared with that of recently proposed non-linear auto-associative neural networks. The overall results suggest that linear encoders can profitably be used to obtain sparse data representations in machine learning problems, provided that an appropriate error function is used during the learning phase.
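The abstract does not specify SCNN's error function, but the general idea of a one-hidden-layer linear auto-associative network trained with a sparsity-inducing loss can be sketched as follows. This is a generic reconstruction-plus-L1 objective, not the paper's actual formulation; all names and hyper-parameter values are assumptions:

```python
import numpy as np

def train_linear_sparse_autoencoder(X, n_code, lam=0.01, lr=0.01, epochs=300):
    """Linear autoencoder with one hidden (code) layer, trained by
    gradient descent on 0.5 * ||X - (X We) Wd||^2 + lam * ||X We||_1.

    X: data matrix of shape (n_samples, dim).
    Returns the linear encoder We (dim, n_code) and decoder Wd (n_code, dim).
    """
    rng = np.random.default_rng(0)
    d = X.shape[1]
    We = rng.normal(scale=0.1, size=(d, n_code))
    Wd = rng.normal(scale=0.1, size=(n_code, d))
    n = len(X)
    for _ in range(epochs):
        H = X @ We                        # linear sparse code
        E = H @ Wd - X                    # reconstruction error
        # Subgradient of the L1 term plus gradient of the squared error.
        gH = E @ Wd.T + lam * np.sign(H)
        We -= lr * (X.T @ gH) / n
        Wd -= lr * (H.T @ E) / n
    return We, Wd
```

Once trained, encoding a new input is a single matrix product `x @ We`, which is the source of the running-time advantage the abstract claims over methods that re-run a learning process per input.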
Probabilistic Auto-Associative Models and Semi-Linear PCA
Auto-Associative models cover a large class of methods used in data analysis.
In this paper, we describe the general properties of these models when the projection component is linear, and we propose and test an easy-to-implement Probabilistic Semi-Linear Auto-Associative model in a Gaussian setting. We show that it is a generalization of the PCA model to the semi-linear case. Numerical experiments on simulated datasets and a real astronomical application highlight the interest of this approach.
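The semi-linear model itself is not given in the abstract, but its fully linear Gaussian special case is classical probabilistic PCA, for which closed-form maximum-likelihood estimates exist (Tipping and Bishop). A minimal sketch of that baseline, with illustrative names only:

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form ML estimates for probabilistic PCA with q latent
    dimensions: the linear Gaussian case that the semi-linear model
    generalizes.

    Returns the loading matrix W (dim, q) and the isotropic noise
    variance sigma2, estimated from the sample covariance eigendecomposition.
    """
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    order = np.argsort(vals)[::-1]          # eigenvalues, descending
    vals, vecs = vals[order], vecs[:, order]
    sigma2 = vals[q:].mean()                # noise = mean of discarded eigenvalues
    W = vecs[:, :q] * np.sqrt(np.maximum(vals[:q] - sigma2, 0.0))
    return W, sigma2
```

In this baseline the model covariance is W W^T + sigma2 * I; the paper's semi-linear extension replaces part of the linear map with a non-linear component while keeping the Gaussian probabilistic structure.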
Optimizing Associative Information Transfer within Content-addressable Memory
Original article can be found at: http://www.oldcitypublishing.com/IJUC/IJUC.html. Peer reviewed.