Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESN) that can be
efficiently implemented on digital hardware based on the mathematics of
hyperdimensional computing. The reservoir of the proposed Integer Echo State
Network (intESN) is a vector containing only n-bit integers (where n < 8 is
normally sufficient for satisfactory performance). The recurrent matrix
multiplication is replaced with an efficient cyclic shift operation. The intESN
architecture is verified with typical tasks in reservoir computing: memorizing
a sequence of inputs, classifying time series, and learning dynamic processes.
Such an architecture results in dramatic improvements in memory footprint and
computational efficiency, with minimal performance loss.
Comment: 10 pages, 10 figures, 1 table
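The core update can be made concrete in a few lines. The following is a minimal sketch, not the paper's reference implementation: it assumes a fixed random bipolar embedding for input symbols and a clipping nonlinearity with an illustrative threshold; all names and sizes are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000      # reservoir dimension (illustrative)
KAPPA = 7     # assumed clipping threshold: states stay in [-7, 7], i.e. 4-bit integers

_codebook = {}
def embed(symbol):
    """Assumed input encoding: a fixed random bipolar hypervector per symbol."""
    if symbol not in _codebook:
        _codebook[symbol] = rng.choice([-1, 1], size=N).astype(np.int8)
    return _codebook[symbol]

def update(state, symbol):
    """One intESN step: a cyclic shift stands in for the recurrent matrix
    multiply, the input hypervector is superimposed, and the result is
    clipped so the state remains an n-bit integer vector."""
    shifted = np.roll(state, 1)
    return np.clip(shifted + embed(symbol), -KAPPA, KAPPA).astype(np.int8)

x = np.zeros(N, dtype=np.int8)
for s in "sequence":
    x = update(x, s)
```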
Cellular Automata Can Reduce Memory Requirements of Collective-State Computing
Various non-classical approaches to distributed information processing, such
as neural networks, computation with Ising models, reservoir computing, vector
symbolic architectures, and others, employ the principle of collective-state
computing. In this type of computing, the variables relevant in a computation
are superimposed into a single high-dimensional state vector, the
collective-state. The variable encoding uses a fixed set of random patterns,
which has to be stored and kept available during the computation. Here we show
that an elementary cellular automaton with rule 90 (CA90) enables a space-time
tradeoff for collective-state computing models that use random dense binary
representations, i.e., memory requirements can be traded off against
computation by running CA90. We investigate the randomization behavior of CA90, in particular,
the relation between the length of the randomization period and the size of the
grid, and how CA90 preserves similarity in the presence of the initialization
noise. Based on these analyses we discuss how to optimize a collective-state
computing model, in which CA90 expands representations on the fly from short
seed patterns - rather than storing the full set of random patterns. The CA90
expansion is applied and tested in concrete scenarios using reservoir computing
and vector symbolic architectures. Our experimental results show that
collective-state computing with CA90 expansion performs comparably to
traditional collective-state models, in which random patterns are generated
initially by a pseudo-random number generator and then stored in a large
memory.
Comment: 13 pages, 11 figures
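The rule 90 update itself is a one-liner, and the space-time tradeoff amounts to regenerating patterns from a stored seed instead of storing them all. A minimal sketch under those assumptions (grid size and pattern count are illustrative):

```python
import numpy as np

def ca90_step(state):
    """Rule 90 on a circular grid: each cell becomes the XOR of its
    left and right neighbours."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def expand(seed, n_patterns):
    """Regenerate a codebook of quasi-random patterns from a short seed
    by iterating CA90, instead of storing every pattern explicitly."""
    patterns, state = [], seed.copy()
    for _ in range(n_patterns):
        state = ca90_step(state)
        patterns.append(state.copy())
    return np.stack(patterns)

rng = np.random.default_rng(0)
seed = rng.integers(0, 2, size=512, dtype=np.uint8)  # only this must be stored
codebook = expand(seed, n_patterns=100)              # recomputed on the fly
```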
The Hyperdimensional Transform for Distributional Modelling, Regression and Classification
Hyperdimensional computing (HDC) is an increasingly popular computing
paradigm with immense potential for future intelligent applications. Although
the main ideas already took form in the 1990s, HDC recently gained significant
attention, especially in the fields of machine learning and data science.
Besides efficiency, interoperability, and explainability, HDC offers attractive
properties for generalization, as it can be seen as an attempt to combine
connectionist ideas from neural networks with symbolic aspects. In recent work,
we introduced the hyperdimensional transform, revealing deep theoretical
foundations for representing functions and distributions as high-dimensional
holographic vectors. Here, we present the power of the hyperdimensional
transform to a broad data science audience. We use the hyperdimensional
transform as a theoretical basis and provide insight into state-of-the-art HDC
approaches for machine learning. We show how existing algorithms can be
modified and how this transform can lead to a novel, well-founded toolbox.
Besides the standard regression and classification tasks of machine learning, our
discussion includes various aspects of statistical modelling, such as
representation, learning and deconvolving distributions, sampling, Bayesian
inference, and uncertainty estimation.
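As a rough illustration of the general idea of representing a distribution as a single high-dimensional vector (not the paper's exact construction), one can bundle similarity-preserving encodings of samples; here random Fourier features stand in for the encoding, so inner products with an encoded query behave like an unnormalised kernel density estimate. All names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, SIGMA = 10_000, 0.5  # hypervector dimension and kernel bandwidth (illustrative)

# Random Fourier features: E[encode(x) @ encode(y)] approximates a
# Gaussian kernel between x and y, so the encoding preserves similarity.
W = rng.normal(0.0, 1.0 / SIGMA, size=D)
b = rng.uniform(0.0, 2 * np.pi, size=D)

def encode(x):
    return np.sqrt(2.0 / D) * np.cos(W * x + b)

# Bundle (average) the encodings of all samples into one vector that
# represents the whole empirical distribution.
samples = rng.normal(0.0, 1.0, size=500)
v = np.mean([encode(x) for x in samples], axis=0)

# Inner products with encoded queries act like an unnormalised kernel
# density estimate: high near the data, low in the tails.
for q in (-3.0, 0.0, 3.0):
    print(q, v @ encode(q))
```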
A Survey on Reservoir Computing and its Interdisciplinary Applications Beyond Traditional Machine Learning
Reservoir computing (RC), first applied to temporal signal processing, is a
recurrent neural network architecture in which neurons are randomly connected.
Once initialized, the connection strengths remain unchanged. Such a simple structure
turns RC into a non-linear dynamical system that maps low-dimensional inputs
into a high-dimensional space. The model's rich dynamics, linear separability,
and memory capacity then enable a simple linear readout to generate adequate
responses for various applications. RC spans areas far beyond machine learning,
since it has been shown that the complex dynamics can be realized in various
physical hardware implementations and biological devices. This yields greater
flexibility and shorter computation times. Moreover, the neuronal responses
triggered by the model's dynamics shed light on brain mechanisms
that exploit similar dynamical processes. While the literature on RC is
vast and fragmented, here we conduct a unified review of RC's recent
developments from machine learning to physics, biology, and neuroscience. We
first review the early RC models, and then survey the state-of-the-art models
and their applications. We further introduce studies on modeling the brain's
mechanisms by RC. Finally, we offer new perspectives on RC development,
including reservoir design, the unification of coding frameworks, physical RC
implementations, and the interaction between RC, cognitive neuroscience, and
evolution.
Comment: 51 pages, 19 figures, IEEE Access
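The basic RC recipe described above (random fixed recurrent weights, trained linear readout) fits in a short script. A minimal echo state network sketch with illustrative sizes, a toy task, and a ridge-regression readout:

```python
import numpy as np

rng = np.random.default_rng(0)
N_RES = 200  # reservoir size (illustrative)

# Random, fixed weights: only the linear readout below is ever trained.
W_in = rng.uniform(-0.5, 0.5, size=N_RES)
W = rng.normal(0.0, 1.0, size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(inputs):
    """Map a 1-D input sequence into the reservoir's high-dimensional state space."""
    states, x = [], np.zeros(N_RES)
    for u in inputs:
        x = np.tanh(W @ x + W_in * u)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
X, y = run_reservoir(u[:-1]), u[1:]
ridge = 1e-6  # regularisation for the linear readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```

Scaling the spectral radius below 1 is the usual heuristic for the echo state property, i.e., for the reservoir's state to depend on the input history rather than on its initialization.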
Neural Distributed Autoassociative Memories: A Survey
Introduction. Neural network models of autoassociative, distributed memory
allow storage and retrieval of many items (vectors) where the number of stored
items can exceed the vector dimension (the number of neurons in the network).
This opens the possibility of a sublinear time search (in the number of stored
items) for approximate nearest neighbors among vectors of high dimension. The
purpose of this paper is to review models of autoassociative, distributed
memory that can be naturally implemented by neural networks (mainly with local
learning rules and iterative dynamics based on information locally available to
neurons). Scope. The survey focuses mainly on the networks of Hopfield,
Willshaw, and Potts, which have connections between pairs of neurons and operate
on sparse binary vectors. We discuss not only autoassociative memory, but also
the generalization properties of these networks. We also consider neural
networks with higher-order connections and networks with a bipartite graph
structure for non-binary data with linear constraints. Conclusions. In
conclusion, we discuss relations to similarity search, the advantages and
drawbacks of these techniques, and topics for further research. An interesting
and still not completely resolved question is whether neural autoassociative
memories can search for approximate nearest neighbors faster than other index
structures for similarity search, in particular for the case of very high
dimensional vectors.
Comment: 31 pages
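A minimal Hopfield-style example of the kind of network surveyed here, with a local Hebbian (outer-product) learning rule and iterative retrieval; sizes and the noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 256, 10  # neurons and stored patterns (well below the ~0.14*N capacity)

patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product rule: one-shot and local to each pair of neurons.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)

def retrieve(probe, max_steps=20):
    """Iterative retrieval: synchronous sign updates until a fixed point."""
    x = probe.copy()
    for _ in range(max_steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Flip 20% of the bits of a stored pattern and recover it.
noisy = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
recovered = retrieve(noisy)
print("overlap:", (recovered @ patterns[0]) / N)  # close to 1.0 on success
```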
Reinforcement Learning
Brains rule the world, and brain-like computation is increasingly used in computers and electronic devices. Brain-like computation is about processing and interpreting data, or directly proposing and performing actions. Learning is a very important aspect of it. This book is about reinforcement learning, which involves performing actions to achieve a goal. The first 11 chapters of this book describe and extend the scope of reinforcement learning. The remaining 11 chapters show that it is already in wide use in numerous fields. Reinforcement learning can tackle control tasks that are too complex for traditional, hand-designed, non-learning controllers. As learning computers deal with the technical complexities, human operators are left with the task of specifying goals at increasingly higher levels. This book shows that reinforcement learning is a very dynamic area in terms of theory and applications, and it should stimulate and encourage new research in this field.
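As a concrete illustration of "performing actions to achieve a goal" (not taken from the book), a minimal tabular Q-learning sketch on a hypothetical toy chain environment; all names and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy environment: a 5-state chain; action 1 moves right, action 0 moves
# left; reward 1 only on reaching the rightmost state.
N_STATES, N_ACTIONS, GOAL = 5, 2, 4

def step(s, a):
    s_next = min(s + 1, GOAL) if a == 1 else max(s - 1, 0)
    return s_next, float(s_next == GOAL)

Q = rng.random((N_STATES, N_ACTIONS)) * 0.01  # tiny random init breaks ties
alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration

for _ in range(500):      # episodes
    s = 0
    for _ in range(100):  # step cap per episode
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next, r = step(s, a)
        target = r if s_next == GOAL else r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])  # move Q(s, a) toward the target
        s = s_next
        if s == GOAL:
            break

print("greedy policy:", np.argmax(Q[:GOAL], axis=1))  # should be all 1 (move right)
```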
Dagstuhl News January - December 2011
"Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News give a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic
High-Performance Modelling and Simulation for Big Data Applications
This open access book was prepared as the final publication of the COST Action IC1406 “High-Performance Modelling and Simulation for Big Data Applications (cHiPSet)” project. Long considered important pillars of the scientific method, Modelling and Simulation have evolved from traditional discrete numerical methods to complex data-intensive continuous analytical optimisations. Resolution, scale, and accuracy have become essential for predicting and analysing natural and complex systems in science and engineering. As their level of abstraction rises to afford better discernment of the domain at hand, their representations become increasingly demanding of computational and data resources. On the other hand, High Performance Computing typically entails the effective use of parallel and distributed processing units coupled with efficient storage, communication, and visualisation systems to underpin complex data-intensive applications in distinct scientific and technical domains. Seamless interaction between High Performance Computing and Modelling and Simulation is therefore arguably required in order to store, compute, analyse, and visualise large data sets in science and engineering. Funded by the European Commission, cHiPSet has provided a dynamic trans-European forum for its members and distinguished guests to openly discuss novel perspectives and topics of interest to these two communities. This cHiPSet compendium presents a set of selected case studies related to healthcare, biological data, computational advertising, multimedia, finance, bioinformatics, and telecommunications.