Somoclu: An Efficient Parallel Library for Self-Organizing Maps
Somoclu is a massively parallel tool, written in C++, for training
self-organizing maps on large data sets. It builds on OpenMP for multicore execution,
and on MPI for distributing the workload across the nodes in a cluster. It is
also able to boost training by using CUDA if graphics processing units are
available. A sparse kernel is included, which is useful for high-dimensional
but sparse data, such as the vector spaces common in text mining workflows.
Python, R and MATLAB interfaces facilitate interactive use. Apart from fast
execution, memory use is highly optimized, enabling the training of large
emergent maps even on a single computer.
Comment: 26 pages, 9 figures. The code is available at
https://peterwittek.github.io/somoclu
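The training loop that Somoclu parallelizes can be sketched sequentially in NumPy. This is a hedged illustration of the underlying computation, not the library's code; the map size, data, and learning schedule are made up:

```python
import numpy as np

# Minimal sequential SOM training sketch (NOT Somoclu's implementation).
rng = np.random.default_rng(0)
rows, cols, dim = 4, 4, 3                      # illustrative map and data sizes
codebook = rng.random((rows * cols, dim))      # one weight vector per map unit
coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
data = rng.random((200, dim))

sigma, lr = 1.5, 0.5
for epoch in range(10):
    for x in data:
        # Best-matching unit: the codebook vector closest to the sample.
        bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))
        # Gaussian neighborhood on the map grid around the BMU.
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        # Pull every unit toward the sample, weighted by the neighborhood.
        codebook += lr * h[:, None] * (x - codebook)
    sigma *= 0.9   # shrink the neighborhood over time
    lr *= 0.9      # decay the learning rate
```

Every sample requires a distance computation against every map unit, which is why distributing the map across cores, cluster nodes, or a GPU pays off as maps and data sets grow.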
Unsupervised Learning with Self-Organizing Spiking Neural Networks
We present a system that hybridizes self-organizing map (SOM) properties
with spiking neural networks (SNNs), retaining many of the features
of SOMs. Networks are trained in an unsupervised manner to learn a
self-organized lattice of filters via excitatory-inhibitory interactions among
populations of neurons. We develop and test various inhibition strategies, such
as inhibition that grows with inter-neuron distance, and two distinct levels of inhibition.
The quality of the unsupervised learning algorithm is evaluated using examples
with known labels. Several biologically-inspired classification tools are
proposed and compared, including a population-level confidence rating and
n-grams based on a spike motif algorithm. With the optimal choice of parameters,
our approach produces improvements over state-of-the-art spiking neural networks.
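One of the inhibition strategies named above, inhibitory strength that grows with inter-neuron distance, can be sketched as a weight matrix over a lattice. The lattice size and weight cap are illustrative, not values from the paper:

```python
import numpy as np

# Hedged sketch: inhibitory weights that grow with inter-neuron distance.
side = 5  # illustrative lattice of side x side neurons
coords = np.array([(r, c) for r in range(side) for c in range(side)], dtype=float)

# Pairwise Euclidean distances between lattice positions.
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

max_inhib = 10.0                        # illustrative cap on inhibitory weight
inhib = max_inhib * dist / dist.max()   # inhibition grows linearly with distance
np.fill_diagonal(inhib, 0.0)            # no self-inhibition
```

Distant neurons inhibit each other most strongly, so nearby neurons can fire together and learn similar filters, which is what produces the SOM-like lattice of filters.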
Mining Dynamic Document Spaces with Massively Parallel Embedded Processors
Currently, Océ investigates future document management services. One of these services is accessing dynamic document spaces, i.e. improving access to document spaces that are frequently updated (such as newsgroups). This process is rather computationally intensive. This paper describes research conducted on software development for massively parallel processors. A prototype has been built which processes streams of information from specified newsgroups and transforms them into personal information maps. Although this technology speeds up the training part compared to a general-purpose processor implementation, its real benefits emerge with larger problem dimensions because of the scalable approach. It is recommended to improve the quality of the map as well as the visualisation, and to better profile the performance of the other parts of the pipeline, i.e. feature extraction and visualisation.
A Neural Model for Self Organizing Feature Detectors and Classifiers in a Network Hierarchy
Many models of early cortical processing have shown how local learning rules can produce efficient, sparse-distributed codes in which nodes have responses that are statistically independent and low probability. However, it is not known how to develop a useful hierarchical representation, containing sparse-distributed codes at each level of the hierarchy, that incorporates predictive feedback from the environment. We take a step in that direction by proposing a biologically plausible neural network model that develops receptive fields, and learns to make class predictions, with or without the help of environmental feedback. The model is a new type of predictive adaptive resonance theory network called Receptive Field ARTMAP, or RAM. RAM self organizes internal category nodes that are tuned to activity distributions in topographic input maps. Each receptive field is composed of multiple weight fields that are adapted via local, on-line learning to form smooth receptive fields that reflect the statistics of the activity distributions in the input maps. When RAM generates incorrect predictions, its vigilance is raised, amplifying subtractive inhibition and sharpening receptive fields until the error is corrected. Evaluation on several classification benchmarks shows that RAM outperforms a related (but neurally implausible) model called Gaussian ARTMAP, as well as several standard neural network and statistical classifiers. A topographic version of RAM is proposed, which is capable of self organizing hierarchical representations. Topographic RAM is a model for receptive field development at any level of the cortical hierarchy, and provides explanations for a variety of perceptual learning data.
Defense Advanced Research Projects Agency and Office of Naval Research (N00014-95-1-0409)
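The vigilance-raising step described above follows the general ART match-tracking pattern, which can be sketched with a Gaussian category; the match function, input, and all numbers here are illustrative assumptions, not RAM's actual equations:

```python
import numpy as np

# Hedged sketch of ART-style match tracking: a category's match to an input
# is compared against vigilance; an incorrect prediction raises vigilance
# just above the current match, so the category must sharpen or yield.
def gaussian_match(x, mu, sigma):
    # Match value in (0, 1]: equals 1 when x sits at the receptive-field center.
    return float(np.exp(-0.5 * ((x - mu) / sigma) ** 2).prod())

x = np.array([0.4, 0.6])        # illustrative input
mu = np.array([0.5, 0.5])       # receptive-field center
sigma = np.array([0.3, 0.3])    # receptive-field width

vigilance = 0.5
match = gaussian_match(x, mu, sigma)
assert match > vigilance        # category passes and may make a prediction

# Suppose that prediction is wrong: raise vigilance above the current match,
# so this category can no longer claim the input unsharpened.
vigilance = match + 1e-3
assert gaussian_match(x, mu, sigma) < vigilance
```

In RAM the same pressure is exerted through amplified subtractive inhibition, which narrows receptive fields until the error is corrected.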
A Multi-signal Variant for the GPU-based Parallelization of Growing Self-Organizing Networks
Among the many possible approaches for the parallelization of self-organizing
networks, and in particular of growing self-organizing networks, perhaps the
most common one is producing an optimized, parallel implementation of the
standard sequential algorithms reported in the literature. In this paper we
explore an alternative approach, based on a new algorithm variant specifically
designed to match the features of the large-scale, fine-grained parallelism of
GPUs, in which multiple input signals are processed at once. Comparative tests
have been performed, using both parallel and sequential implementations of the
new algorithm variant, in particular for a growing self-organizing network that
reconstructs surfaces from point clouds. The experimental results show that
this approach harnesses more effectively the intrinsic parallelism that
self-organizing network algorithms intuitively suggest, yielding better
performance even with networks of smaller size.
Comment: 17 pages
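The core of the multi-signal idea, processing many input signals per step instead of one, can be sketched as a batched best-matching-unit search; NumPy stands in for a GPU kernel here, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units, dim, batch = 64, 3, 32     # illustrative network and batch sizes
codebook = rng.random((n_units, dim))
signals = rng.random((batch, dim))  # multiple input signals handled at once

# Squared distances between every signal and every unit in one dense pass:
# regular, independent work of exactly the kind that suits a GPU's
# large-scale, fine-grained parallelism.
d2 = ((signals[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
bmus = d2.argmin(axis=1)            # one best-matching unit per signal
```

Processing a batch per step keeps the hardware saturated even when the network itself is small, which is consistent with the speedups the abstract reports for smaller networks.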