A Computational Investigation of Neural Dynamics and Network Structure
With the overall goal of illuminating the relationship between neural dynamics and neural network
structure, this thesis presents a) a computer model of a network infrastructure capable of global broadcast
and competition, and b) a study of various convergence properties of spike-timing dependent plasticity
(STDP) in a recurrent neural network.
The first part of the thesis explores the parameter space of a possible Global Neuronal Workspace (GNW)
realised in a novel computational network model using stochastic connectivity. The structure of this
model is analysed in light of the characteristic dynamics of a GNW: broadcast, reverberation, and
competition. It is found even with careful consideration of the balance between excitation and inhibition,
the structural choices do not allow agreement with the GNW dynamics, and the implications of this are
addressed. An additional level of competition – access competition – is added, discussed, and found to be
more conducive to winner-takes-all competition.
The second part of the thesis investigates the formation of synaptic structure due to neural and synaptic
dynamics. From previous theoretical and modelling work, it is predicted that homogeneous stimulation in
a recurrent neural network with STDP will create a self-stabilising equilibrium amongst synaptic weights,
while heterogeneous stimulation will induce structured synaptic changes. A new factor in modulating the
synaptic weight equilibrium is suggested from the experimental evidence presented: anti-correlation due
to inhibitory neurons. It is observed that the synaptic equilibrium creates competition amongst synapses,
and those specifically stimulated during heterogeneous stimulation win out. Further investigation is
carried out in order to assess the effect that more complex STDP rules would have on synaptic dynamics,
varying parameters of a trace STDP model. There is little qualitative effect on synaptic dynamics under
low-frequency (< 25 Hz) conditions, justifying the use of simple STDP until further experimental or
theoretical evidence suggests otherwise.
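The pair-based STDP rule at the core of these experiments can be sketched as follows. The amplitudes and time constants below, and the function name, are illustrative placeholders, not the thesis's actual parameters:

```python
import math

# Illustrative parameters for a pair-based STDP rule (not taken from the thesis).
A_PLUS, A_MINUS = 0.01, 0.012      # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # exponential window time constants (ms)

def stdp_weight_change(pre_spikes, post_spikes):
    """Accumulate pairwise STDP updates from two spike trains (times in ms)."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:    # pre fires before post -> potentiation
                dw += A_PLUS * math.exp(-dt / TAU_PLUS)
            elif dt < 0:  # post fires before pre -> depression
                dw -= A_MINUS * math.exp(dt / TAU_MINUS)
    return dw

# Consistent pre-before-post firing strengthens the synapse:
print(round(stdp_weight_change([10, 30], [12, 32]), 4))  # → 0.0165
```

Because depression slightly outweighs potentiation for uncorrelated input (A_MINUS > A_PLUS here), repeated homogeneous stimulation drives weights toward the kind of self-stabilising equilibrium described above.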
Deep neural network techniques for monaural speech enhancement: state of the art analysis
Deep neural network (DNN) techniques have become pervasive in domains such
as natural language processing and computer vision. They have achieved great
success in these domains in tasks such as machine translation and image
generation. Owing to this success, these data-driven techniques have been
applied in the audio domain. More specifically, DNN models have been applied in
the speech enhancement domain to achieve denoising, dereverberation and
multi-speaker separation in monaural speech enhancement. In this paper, we
review some of the dominant DNN techniques being employed to achieve speech
separation. The review covers the whole speech enhancement pipeline: feature
extraction, how DNN-based tools model both global and local features of speech,
and model training (supervised and unsupervised). We also review the use of
pre-trained speech enhancement models to boost the enhancement process. The
review is geared towards covering the dominant trends in the application of
DNNs to the enhancement of single-speaker (monaural) speech.
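Many of the reviewed DNN models share a mask-based formulation: the network predicts a per-time-frequency-bin mask that is applied to the noisy mixture. A minimal sketch using the ideal ratio mask (IRM), a common training target; the toy magnitude values are illustrative:

```python
# Minimal sketch of the ideal ratio mask (IRM), a common DNN training target
# in mask-based speech enhancement. The magnitudes below are toy values.
def ideal_ratio_mask(speech_mag, noise_mag, eps=1e-12):
    """Per time-frequency bin: |S| / (|S| + |N|) -- the target a DNN learns."""
    return [s / (s + n + eps) for s, n in zip(speech_mag, noise_mag)]

def apply_mask(mixture_mag, mask):
    """Enhancement = element-wise masking of the mixture magnitude."""
    return [m * g for m, g in zip(mixture_mag, mask)]

speech = [1.0, 0.2, 0.0]                      # clean magnitudes (toy)
noise = [0.1, 0.2, 0.5]                       # noise magnitudes (toy)
mixture = [s + n for s, n in zip(speech, noise)]

mask = ideal_ratio_mask(speech, noise)        # oracle target
print([round(x, 3) for x in apply_mask(mixture, mask)])  # → [1.0, 0.2, 0.0]
```

In practice the DNN never sees the clean speech at test time; it is trained to predict this mask from features of the mixture alone.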
Integrating Plug-and-Play Data Priors with Weighted Prediction Error for Speech Dereverberation
Speech dereverberation aims to alleviate the detrimental effects of
late-reverberant components. While the weighted prediction error (WPE) method
has shown superior performance in dereverberation, there is still room for
further improvement in terms of performance and robustness in complex and noisy
environments. Recent research has highlighted the effectiveness of integrating
physics-based and data-driven methods, enhancing the performance of various
signal processing tasks while maintaining interpretability. Motivated by these
advancements, this paper presents a novel dereverberation framework, which
incorporates data-driven methods for capturing speech priors within the WPE
framework. The plug-and-play strategy (PnP), specifically the regularization by
denoising (RED) strategy, is utilized to incorporate speech prior information
learned from data during the iterative solution of the optimization problem.
Experimental results validate the effectiveness of the proposed approach.
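The RED idea of injecting a learned denoiser into an iterative solver can be sketched with a toy 1-D problem. The quadratic data term, the moving-average "denoiser", and all parameter values below are stand-ins for the paper's WPE data term and learned speech prior:

```python
# Toy sketch of a regularization-by-denoising (RED) update. A moving-average
# filter stands in for the learned speech prior; values are illustrative.
def denoise(x):
    """Toy denoiser: 3-tap moving average with edge replication."""
    padded = [x[0]] + list(x) + [x[-1]]
    return [(padded[i] + padded[i + 1] + padded[i + 2]) / 3
            for i in range(len(x))]

def red_step(x, y, lam=0.5, step=0.2):
    """One gradient step on ||x - y||^2 / 2 + lam * RED prior.
    RED's key property: the prior's gradient is lam * (x - denoise(x))."""
    dx = denoise(x)
    return [xi - step * ((xi - yi) + lam * (xi - di))
            for xi, yi, di in zip(x, y, dx)]

y = [0.0, 1.0, 0.0, 1.0, 0.0]   # noisy observation (toy)
x = list(y)
for _ in range(50):              # iterate toward the regularized solution
    x = red_step(x, y)
print([round(v, 2) for v in x])
```

The fixed point balances fidelity to the observation against agreement with the denoiser, which is how the framework above lets a data-driven prior steer the physics-based WPE iterations.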
Investigating Generative Adversarial Networks based Speech Dereverberation for Robust Speech Recognition
We investigate the use of generative adversarial networks (GANs) in speech
dereverberation for robust speech recognition. GANs have recently been studied
for speech enhancement to remove additive noise, but their ability in speech
dereverberation has not yet been examined and the advantages of using GANs
have not been fully established. In this paper, we provide a deep
investigation into the use of a GAN-based dereverberation front-end for ASR. First,
we study the effectiveness of different dereverberation networks (the generator
in the GAN) and find that an LSTM leads to a significant improvement compared
with feed-forward DNN and CNN architectures on our dataset. Second, further adding residual
connections in the deep LSTMs can boost the performance as well. Finally, we
find that, for the success of GAN, it is important to update the generator and
the discriminator using the same mini-batch data during training. Moreover,
using the reverberant spectrogram as a condition to the discriminator, as suggested in
previous studies, may degrade the performance. In summary, our GAN-based
dereverberation front-end achieves 14%-19% relative CER reduction as compared
to the baseline DNN dereverberation network when tested on a strong
multi-condition trained acoustic model.
Comment: Interspeech 201
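The same-mini-batch training detail found to matter above can be sketched structurally. The update functions below are stubs standing in for actual gradient steps; names and parameters are ours, not from the paper:

```python
import random

# Structural sketch of the training-loop detail: the generator and the
# discriminator are updated on the SAME mini-batch each iteration.
# update_discriminator / update_generator are stubs (hypothetical names)
# standing in for the real GAN gradient steps.
log = []

def update_discriminator(batch):
    log.append(("D", tuple(batch)))

def update_generator(batch):
    log.append(("G", tuple(batch)))

def train(data, batch_size=2, steps=3, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        batch = rng.sample(data, batch_size)  # draw ONE mini-batch per step
        update_discriminator(batch)           # D sees the batch...
        update_generator(batch)               # ...and G sees the SAME batch

train(list(range(10)))
# Every step, D and G received identical mini-batches:
print(all(log[i][1] == log[i + 1][1] for i in range(0, len(log), 2)))  # → True
```

The contrast is with the common alternative of sampling fresh batches for each network, which the paper reports hurts dereverberation performance.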
Alpha power increase after transcranial alternating current stimulation at alpha frequency (α-tACS) reflects plastic changes rather than entrainment
Background:
Periodic stimulation of occipital areas using transcranial alternating current stimulation (tACS) at alpha (α) frequency (8–12 Hz) enhances electroencephalographic (EEG) α-oscillations long after tACS offset. Two mechanisms have been suggested to underlie these changes in oscillatory EEG activity: tACS-induced entrainment of brain oscillations and/or tACS-induced changes in oscillatory circuits by spike-timing dependent plasticity.
Objective:
We tested to what extent plasticity can account for tACS-aftereffects when controlling for entrainment “echoes.” To this end, we used a novel, intermittent tACS protocol and investigated the strength of the aftereffect as a function of phase continuity between successive tACS episodes, as well as the match between stimulation frequency and endogenous α-frequency.
Methods:
Twelve healthy participants were stimulated at around their individual α-frequency for 15–20 min in four sessions using intermittent tACS or sham. Successive tACS events were either phase-continuous or phase-discontinuous, and either 3 or 8 s long. EEG α-phase and power changes were compared after and between episodes of α-tACS across conditions and against sham.
Results:
α-aftereffects were successfully replicated after intermittent stimulation using 8-s but not 3-s trains. These aftereffects did not reveal any of the characteristics of entrainment echoes in that they were independent of tACS phase-continuity and showed neither prolonged phase alignment nor frequency synchronization to the exact stimulation frequency.
Conclusion:
Our results indicate that plasticity mechanisms are sufficient to explain α-aftereffects in response to α-tACS, and inform models of tACS-induced plasticity in oscillatory circuits. Modifying brain oscillations with tACS holds promise for clinical applications in disorders involving abnormal neural synchrony.
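The "prolonged phase alignment" tested for above is typically quantified with a phase-locking value (PLV) between the EEG α-phase and the stimulation phase. A minimal sketch, with illustrative phase data:

```python
import cmath, math

# Sketch of the phase-locking value (PLV): if tACS had entrained the EEG,
# the post-stimulation phase difference to the stimulation rhythm would stay
# nearly constant (PLV near 1); plasticity predicts no such locking.
def plv(phase_diffs):
    """PLV = |mean of unit vectors e^{i*dphi}|; 1 = locked, ~0 = random."""
    n = len(phase_diffs)
    return abs(sum(cmath.exp(1j * d) for d in phase_diffs)) / n

locked = [0.1, -0.05, 0.08, 0.0]                       # nearly constant diff
scattered = [0.0, math.pi / 2, math.pi, -math.pi / 2]  # uniformly spread
print(round(plv(locked), 2), round(plv(scattered), 2))  # → 1.0 0.0
```

A high PLV after tACS offset would have supported entrainment echoes; the study's finding of no prolonged alignment is what points to plasticity instead.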