Self-Organising Stochastic Encoders
The processing of mega-dimensional data, such as images, scales linearly with
image size only if fixed-size processing windows are used. It would be very
useful to be able to automate the process of sizing and interconnecting the
processing windows. A stochastic encoder that is an extension of the standard
Linde-Buzo-Gray vector quantiser, called a stochastic vector quantiser (SVQ),
includes this required behaviour amongst its emergent properties, because it
automatically splits the input space into statistically independent subspaces,
which it then separately encodes. Various optimal SVQs have been obtained, both
analytically and numerically. Analytic solutions which demonstrate how the
input space is split into independent subspaces may be obtained when an SVQ is
used to encode data that lives on a 2-torus (e.g. the superposition of a pair
of uncorrelated sinusoids). Many numerical solutions have also been obtained,
using both SVQs and chains of linked SVQs: (1) images of multiple independent
targets (encoders for single targets emerge), (2) images of multiple correlated
targets (various types of encoder for single and multiple targets emerge), (3)
superpositions of various waveforms (encoders for the separate waveforms emerge
- this is a type of independent component analysis (ICA)), (4) maternal and
foetal ECGs (another example of ICA), (5) images of textures (orientation maps
and dominance stripes emerge). Overall, SVQs exhibit a rich variety of
self-organising behaviour, which effectively discovers the internal structure
of the training data. This should have an immediate impact on "intelligent"
computation, because it reduces the need for expert human intervention in the
design of data processing algorithms.
Comment: 23 pages, 23 figures
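As a concrete illustration of the 2-torus example mentioned above, the sketch below generates training vectors that are superpositions of two sinusoids with independently sampled phases. The dimension, frequencies and sample count are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Hedged sketch: training data living on a 2-torus, i.e. a superposition of two
# sinusoids whose phases are sampled independently and uniformly.  All sizes and
# frequencies here are illustrative assumptions.
def torus_samples(n_samples=1000, dim=64, freqs=(3, 7), seed=0):
    rng = np.random.default_rng(seed)
    x = np.arange(dim) / dim                                   # spatial coordinate in [0, 1)
    phases = rng.uniform(0, 2 * np.pi, size=(n_samples, 2))    # two independent phases per sample
    return (np.sin(2 * np.pi * freqs[0] * x + phases[:, :1]) +
            np.sin(2 * np.pi * freqs[1] * x + phases[:, 1:]))  # shape: (n_samples, dim)

X = torus_samples()
print(X.shape)  # (1000, 64)
```

Because the two phases are independent, an encoder that discovers the internal structure of this data should split it into two subspaces, one per sinusoid.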
Stochastic Vector Quantisers
In this paper, a stochastic generalisation of the standard Linde-Buzo-Gray
(LBG) approach to vector quantiser (VQ) design is presented, in which the
encoder is implemented as the sampling of a vector of code indices from a
probability distribution derived from the input vector, and the decoder is
implemented as a superposition of reconstruction vectors. As in the LBG case,
the stochastic VQ is optimised using a minimum mean Euclidean reconstruction
distortion criterion. Numerical simulations are used to demonstrate
how this leads to self-organisation of the stochastic VQ, where different
stochastically sampled code indices become associated with different input
subspaces. This property may be used to automate the process of splitting
high-dimensional input vectors into low-dimensional blocks before encoding
them.
Comment: 22 pages, 12 figures
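The encode/decode cycle described above can be sketched as follows. The softmax-over-squared-distance form of the code-index distribution, and all sizes and parameters, are illustrative assumptions rather than quantities derived in the paper; the sketch only shows how sampled code indices, a superposition decoder and the mean Euclidean distortion criterion fit together.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, codebook, n_draws=4, beta=10.0):
    # Probability distribution over code indices derived from the input vector
    # (softmax over negative squared distances is an illustrative choice).
    d2 = np.sum((codebook - x) ** 2, axis=1)
    p = np.exp(-beta * d2)
    p /= p.sum()
    return rng.choice(len(codebook), size=n_draws, p=p)   # sampled vector of code indices

def decode(indices, recon_vectors):
    # Decoder: superposition (here, the mean) of the selected reconstruction vectors.
    return recon_vectors[indices].mean(axis=0)

def mean_distortion(X, codebook, recon_vectors):
    # Mean Euclidean reconstruction distortion, the training criterion as in LBG.
    recons = np.array([decode(encode(x, codebook), recon_vectors) for x in X])
    return np.mean(np.sum((X - recons) ** 2, axis=1))

X = rng.normal(size=(200, 8))
codebook = rng.normal(size=(16, 8))
print(mean_distortion(X, codebook, codebook))
```

Optimising the codebook and reconstruction vectors to minimise this distortion is what drives different sampled code indices to become associated with different input subspaces.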
Deep learning algorithms and their relevance: A review
Nowadays, one of the most revolutionary areas in computer science is deep learning algorithms and models. This paper discusses deep learning and various supervised, unsupervised, and reinforcement learning models. An overview is provided of the Artificial Neural Network (ANN), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), Self-Organizing Map (SOM), Restricted Boltzmann Machine (RBM), Deep Belief Network (DBN), Generative Adversarial Network (GAN), autoencoders, Gated Recurrent Unit (GRU), and Bidirectional LSTM. Various deep-learning application areas are also discussed. ChatGPT, currently the most prominent of these applications, can understand natural language and respond to needs in various ways; it uses supervised and reinforcement learning techniques. Additionally, the limitations of deep learning are discussed. This paper provides a snapshot of deep learning.
The hardware implementation of an artificial neural network using stochastic pulse rate encoding principles
In this thesis, the development of a hardware artificial neuron device and artificial neural network using stochastic pulse rate encoding principles is considered. After a review of neural network architectures and algorithmic approaches suitable for hardware implementation, a critical review of hardware techniques which have been considered in analogue and digital systems is presented. New results are presented demonstrating the potential of two learning schemes which adapt by the use of a single reinforcement signal. The techniques for computation using stochastic pulse rate encoding are presented and extended with novel circuits relevant to the hardware implementation of an artificial neural network. The generation of random numbers is the key to the encoding of data into the stochastic pulse rate domain. The formation of random numbers and multiple random bit sequences from a single PRBS generator has been investigated. Two techniques, Simulated Annealing and Genetic Algorithms, have been applied successfully to the problem of optimising the configuration of a PRBS random number generator for the formation of multiple random bit sequences and hence random numbers. A complete hardware design for an artificial neuron using stochastic pulse rate encoded signals has been described, designed, simulated, fabricated and tested before configuration of the device into a network to perform simple test problems. The implementation has shown that the processing elements of the artificial neuron are small and simple, but that there can be a significant overhead for the encoding of information into the stochastic pulse rate domain. The stochastic artificial neuron has the capability of on-line weight adaptation. The implementation of reinforcement schemes using the stochastic neuron as a basic element is discussed.
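For readers unfamiliar with the encoding principle, the sketch below illustrates stochastic pulse rate encoding in general terms (it is not a reproduction of the thesis's circuits): a value in [0, 1] is represented by the firing probability of a pulse stream obtained by comparing PRBS output with that value, and multiplication of two independent streams reduces to a bitwise AND. The 16-bit LFSR taps are a standard maximal-length choice, not one of the generator configurations optimised in the thesis.

```python
import numpy as np

def lfsr_states(n, seed=0xACE1):
    # 16-bit Fibonacci LFSR (taps 16, 14, 13, 11), a common maximal-length PRBS generator.
    state = seed & 0xFFFF
    out = np.empty(n, dtype=np.uint32)
    for i in range(n):
        out[i] = state
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
    return out

def encode_stream(value, n=4096, seed=0xACE1):
    # A pulse fires whenever the pseudo-random number falls below the encoded value,
    # so the mean pulse rate of the stream estimates the value itself.
    rnd = lfsr_states(n, seed=seed) / 65536.0
    return (rnd < value).astype(np.uint8)

# Different seeds stand in for independent generators; real hardware would use
# separately configured PRBS sources to keep the streams decorrelated.
a = encode_stream(0.6, seed=0xACE1)
b = encode_stream(0.5, seed=0x1D87)
print(a.mean(), b.mean())   # pulse rates, approximately 0.6 and 0.5
print((a & b).mean())       # AND of independent unipolar streams multiplies them: roughly 0.3
```

The processing element itself (the AND gate) is tiny, which matches the observation above that the dominant cost lies in converting data into and out of the pulse rate domain.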
Cortex Inspired Learning to Recover Damaged Signal Modality with ReD-SOM Model
Recent progress in the fields of AI and cognitive science opens up new
challenges that were previously inaccessible to study. One such task is
recovering the lost data of one modality by using data from another. A
related effect (the McGurk effect) has been found in the functioning of the
human brain: one modality of information interferes with another, changing
its perception. In this paper, we propose a way to simulate such an effect
and use it to reconstruct lost data modalities by combining Variational
Auto-Encoders, Self-Organizing Maps, and Hebb connections in a unified
ReD-SOM (Reentering Deep Self-organizing Map) model. We are inspired by the
human capability to use different zones of the brain for different
modalities when information is lacking in one of them. This new approach not
only improves the analysis of ambiguous data but also restores the intended
signal! The results obtained on a multimodal dataset demonstrate an increase
in the quality of signal reconstruction. The effect is noticeable both
visually and quantitatively, especially in the presence of significant
signal distortion.
Comment: 9 pages, 8 images, unofficial version, currently under review
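A minimal conceptual sketch of cross-modal recovery through Hebbian links between per-modality self-organizing maps is given below. It is not the ReD-SOM architecture (which also involves Variational Auto-Encoders); the map size, learning schedule and toy paired data are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, n_units=25, epochs=20, lr=0.5, sigma=3.0):
    # 1-D self-organizing map: move the winner and its neighbours towards each sample.
    W = rng.normal(size=(n_units, X.shape[1]))
    grid = np.arange(n_units)
    for e in range(epochs):
        decay = 1.0 - e / epochs
        for x in X:
            win = np.argmin(np.sum((W - x) ** 2, axis=1))          # best-matching unit
            h = np.exp(-((grid - win) ** 2) / (2.0 * sigma ** 2))   # neighbourhood function
            W += lr * decay * h[:, None] * (x - W)
    return W

def bmu(W, x):
    return np.argmin(np.sum((W - x) ** 2, axis=1))

# Toy paired modalities: modality B is a fixed linear "view" of modality A plus noise.
A = rng.normal(size=(300, 4))
M = rng.normal(size=(4, 4))
B = A @ M + 0.05 * rng.normal(size=(300, 4))

Wa, Wb = train_som(A), train_som(B)

# Hebbian link: strengthen the connection between units that win together across modalities.
hebb = np.zeros((25, 25))
for a, b in zip(A, B):
    hebb[bmu(Wa, a), bmu(Wb, b)] += 1.0

# Recover the missing modality B from an observation of modality A alone.
a_new = rng.normal(size=4)
b_guess = Wb[np.argmax(hebb[bmu(Wa, a_new)])]   # prototype of the most strongly linked B-unit
print(b_guess)      # coarse estimate of the unseen modality
print(a_new @ M)    # value the missing modality would (approximately) have taken
```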
Evolutionary swarm robotics: a theoretical and methodological itinerary from individual neuro-controllers to collective behaviours
In the last decade, swarm robotics has gathered much attention in the research community. By drawing inspiration from social insects and other self-organizing systems, it focuses on large robot groups featuring distributed control, adaptation, high robustness, and flexibility. Various reasons lie behind this interest in such multi-robot systems. Above all, inspiration comes from the observation of social activities, which are based on concepts like division of labor, cooperation, and communication. If societies are organized in such a way as to be more efficient, then robot groups could also benefit from similar paradigms.
Improving deep learning performance with missing values via deletion and compensation
Proceedings of: International Work-Conference on the Interplay between Natural and Artificial Computation (IWINAC 2015)
Missing values in a dataset are one of the most common difficulties in real applications. Many different techniques based on machine learning have been proposed in the literature to face this problem. In this work, the great representation capability of stacked denoising auto-encoders is used to obtain a new method of imputing missing values based on two ideas: deletion and compensation. This method improves imputation performance by artificially deleting values in the input features and using them as targets in the training process. Nevertheless, although the deletion of samples is demonstrated to be highly effective, it may cause an imbalance between the distributions of the training and test sets. In order to solve this issue, a compensation mechanism is proposed based on a slight modification of the error function to be optimized. Experiments on several datasets show that deletion and compensation yield improvements not only in imputation but also in classification, in comparison with other classical techniques.
The work of A. R. Figueiras-Vidal has been partly supported by Grant Macro-ADOBE (TEC 2015-67719-P, MINECO/FEDER&FSE). The work of J.L. Sancho-Gómez has been partly supported by Grant AES 2017 (PI17/00771, MINECO/FEDER).
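A minimal sketch of the "deletion" idea follows, under clearly stated assumptions: random input entries are artificially zeroed and a small denoising autoencoder is trained to reconstruct them, so that the trained network can later fill genuinely missing entries. The single hidden layer, sizes and learning rate are illustrative; the paper uses stacked denoising auto-encoders and adds a compensation term to the error function, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_imputer(X, hidden=16, epochs=300, lr=0.05, del_rate=0.2):
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, d)); b2 = np.zeros(d)
    for _ in range(epochs):
        keep = rng.random(X.shape) > del_rate       # False marks an artificially deleted entry
        Xc = X * keep                               # corrupted input with deleted values
        H = np.tanh(Xc @ W1 + b1)                   # encoder layer
        Y = H @ W2 + b2                             # reconstruction of the clean input
        G = 2.0 * (Y - X) / n                       # gradient of the squared reconstruction error
        gW2 = H.T @ G; gb2 = G.sum(axis=0)
        GH = (G @ W2.T) * (1.0 - H ** 2)            # backpropagate through tanh
        gW1 = Xc.T @ GH; gb1 = GH.sum(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    def impute(x, missing):
        y = np.tanh((x * ~missing) @ W1 + b1) @ W2 + b2
        return np.where(missing, y, x)              # fill only the genuinely missing entries
    return impute

# Toy correlated data, so a missing feature is predictable from the others.
Z = rng.normal(size=(500, 3))
X = Z @ rng.normal(size=(3, 8))
impute = train_imputer(X)
x, missing = X[0].copy(), np.zeros(8, dtype=bool)
missing[2] = True
print(X[0, 2], impute(x, missing)[2])               # true value vs. imputed estimate
```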