Learning in Convolutional Neural Networks Accelerated by Transfer Entropy
Recently, there has been growing interest in applying Transfer Entropy (TE) to quantify the effective connectivity between artificial neurons. In a feedforward network, TE can be used to quantify the relationships between pairs of neuron outputs located in different layers. Our focus is on how to include TE in the learning mechanisms of a Convolutional Neural Network (CNN) architecture. We introduce a novel training mechanism for CNN architectures that integrates TE feedback connections. Adding the TE feedback parameter accelerates the training process, as fewer epochs are needed; on the flip side, it adds computational overhead to each epoch. According to our experiments on CNN classifiers, to achieve a reasonable trade-off between computational overhead and accuracy, it is efficient to consider only the inter-neural information transfer of neuron pairs between the last two fully connected layers. The TE acts as a smoothing factor, generating stability, and becomes active only periodically, not after processing each input sample. Therefore, the TE can be regarded in our model as a slowly changing meta-parameter.
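For illustration only, a minimal histogram-based (plug-in) estimate of TE(source → target), as one might apply to the activation traces of a neuron pair across training batches, could look like the sketch below. The binning scheme, variable names, and estimator choice are assumptions for this sketch, not the paper's training mechanism.

```python
import numpy as np

def transfer_entropy(source, target, bins=8):
    """Plug-in estimate of TE(source -> target) for two 1-D signals.

    Illustrative sketch: both signals are discretised into `bins` symbols
    and TE is computed from joint histograms. This is a generic estimator,
    not the TE feedback mechanism described in the abstract.
    """
    # Discretise both signals using interior histogram edges.
    s = np.digitize(source, np.histogram_bin_edges(source, bins)[1:-1])
    t = np.digitize(target, np.histogram_bin_edges(target, bins)[1:-1])

    # Joint distribution over (target_{n+1}, target_n, source_n).
    joint = np.zeros((bins, bins, bins))
    for tn1, tn, sn in zip(t[1:], t[:-1], s[:-1]):
        joint[tn1, tn, sn] += 1
    joint /= joint.sum()

    p_tn_sn = joint.sum(axis=0)    # p(target_n, source_n)
    p_tn1_tn = joint.sum(axis=2)   # p(target_{n+1}, target_n)
    p_tn = joint.sum(axis=(0, 2))  # p(target_n)

    te = 0.0
    for i in range(bins):
        for j in range(bins):
            for k in range(bins):
                p = joint[i, j, k]
                if p > 0 and p_tn1_tn[i, j] > 0 and p_tn_sn[j, k] > 0:
                    # p(x_{n+1}, x_n, y_n) * log2[ p(x_{n+1}|x_n, y_n) / p(x_{n+1}|x_n) ]
                    te += p * np.log2(p * p_tn[j] / (p_tn1_tn[i, j] * p_tn_sn[j, k]))
    return te
```

Such an estimate, computed only for neuron pairs in the last two fully connected layers and refreshed periodically rather than per sample, is one way to keep the overhead of a TE-style feedback term manageable.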
Dynamical systems as temporal feature spaces
Parameterized state space models in the form of recurrent networks are often used in machine learning to learn from data streams exhibiting temporal dependencies. To break the black-box nature of such models, it is important to understand the dynamical features of the input-driving time series that are formed in the state space. We propose a framework for rigorous analysis of such state representations in vanishing-memory state space models such as echo state networks (ESN). In particular, we consider the state space as a temporal feature space and the readout mapping from the state space as a kernel machine operating in that feature space. We show that: (1) the usual ESN strategy of randomly generating the input-to-state coupling, as well as the state coupling, leads to shallow-memory time series representations, corresponding to a cross-correlation operator with fast exponentially decaying coefficients; (2) imposing symmetry on the dynamic coupling yields a constrained dynamic kernel matching the input time series with straightforward exponentially decaying motifs or exponentially decaying motifs of the highest frequency; (3) a simple cycle high-dimensional reservoir topology, specified through only two free parameters, can implement deep-memory dynamic kernels with a rich variety of matching motifs. We quantify the richness of the feature representations imposed by dynamic kernels and demonstrate that, for the dynamic kernel associated with the cycle reservoir topology, the kernel richness undergoes a phase transition close to the edge of stability.
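A minimal sketch of the two-parameter cycle reservoir referred to in point (3) is given below. It assumes the common simple-cycle-reservoir construction (a single cycle weight r for the state coupling and a single input weight magnitude v); the sign-assignment scheme for the input weights is an illustrative choice and may differ from the paper's exact setup.

```python
import numpy as np

def simple_cycle_reservoir(n_units, r, v, seed=0):
    """Build state-coupling and input weights of a simple cycle reservoir.

    Only two free parameters are used: the cycle weight `r` (|r| < 1 for a
    stable, vanishing-memory regime) and the input weight magnitude `v`.
    """
    rng = np.random.default_rng(seed)
    # Reservoir coupling: a single directed cycle 0 -> 1 -> ... -> N-1 -> 0,
    # every edge carrying the same weight r.
    W = np.zeros((n_units, n_units))
    for i in range(n_units):
        W[(i + 1) % n_units, i] = r
    # Input weights: identical magnitude v, random signs (illustrative choice).
    w_in = v * rng.choice([-1.0, 1.0], size=n_units)
    return W, w_in

def run_reservoir(W, w_in, u):
    """Drive the reservoir with a scalar input series u and collect states."""
    states = np.zeros((len(u), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states
```

Moving r towards 1 pushes the dynamics towards the edge of stability, which is the regime in which the abstract reports the phase transition in kernel richness.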
Improving recurrent neural network performance using transfer entropy
Reservoir computing approaches have been successfully applied to a variety of tasks. An inherent problem of these approaches, however, is their variation in performance due to the fixed random initialisation of the reservoir. Self-organised approaches like intrinsic plasticity have been applied to improve reservoir quality, but they do not take the task of the system into account. We present an approach to improve the hidden layer of recurrent neural networks, guided by the learning goal of the system. Our reservoir adaptation optimises the information transfer at each individual unit, depending on properties of the information transfer between the input and output of the system. Using synthetic data, we show that this reservoir adaptation improves the performance of offline echo state learning and Recursive Least Squares Online Learning.
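The TE-guided per-unit adaptation itself is specific to the paper and is not reproduced here; the sketch below only shows the standard offline echo state learning step (ridge-regression readout) that such an adapted reservoir would feed into. The regularisation strength and variable names are illustrative assumptions.

```python
import numpy as np

def train_readout(states, targets, ridge=1e-6):
    """Offline echo state learning: fit a linear readout by ridge regression.

    `states` is a (T, N) matrix of collected reservoir states, `targets` is
    (T, K); `ridge` is an illustrative regularisation strength.
    """
    X = np.hstack([states, np.ones((states.shape[0], 1))])  # append bias column
    # Closed-form ridge solution: W_out = (X^T X + lambda * I)^{-1} X^T Y
    A = X.T @ X + ridge * np.eye(X.shape[1])
    W_out = np.linalg.solve(A, X.T @ targets)
    return W_out  # shape (N + 1, K); predict with np.hstack([state, [1.0]]) @ W_out
```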