Intrinsic adaptation in autonomous recurrent neural networks
A massively recurrent neural network responds to input stimuli on one side
and is, on the other side, autonomously active in the absence of sensory
inputs. Stimulus and information processing depend crucially on the qualia of
the autonomous-state dynamics of the ongoing neural activity. This default
neural activity may be dynamically structured in time and space, showing
regular, synchronized, bursting or chaotic activity patterns.
We study the influence of non-synaptic plasticity on the default dynamical
state of recurrent neural networks. The non-synaptic adaptation considered acts
on intrinsic neural parameters, such as the threshold and the gain, and is
driven by the optimization of the information entropy. We observe, in the
presence of the intrinsic adaptation processes, three distinct and globally
attracting dynamical regimes: a regular synchronized, an overall chaotic and an
intermittent bursting regime. The intermittent bursting regime is characterized
by intervals of regular flow, which are quite insensitive to external stimuli,
interspersed with chaotic bursts that respond sensitively to input signals. We
discuss these findings in the context of self-organized information processing
and critical brain dynamics.
Comment: 24 pages, 8 figures
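The kind of entropy-driven adaptation of intrinsic parameters described above can be illustrated with the classic intrinsic-plasticity rule of Triesch, which adjusts the gain and bias of a sigmoidal neuron so that its output distribution approaches an exponential distribution with target mean rate mu, the maximum-entropy distribution for a fixed mean. This is a generic sketch of the class of rule the abstract refers to, not the paper's exact update; the parameter names and the target rate `mu` are illustrative.

```python
import numpy as np

def intrinsic_update(x, a, b, eta=0.01, mu=0.3):
    """One step of a Triesch-style intrinsic-plasticity rule (illustrative;
    the paper's exact update may differ).
    x: total input, a: gain, b: bias/threshold, mu: target mean firing rate."""
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))       # sigmoidal transfer function
    # Gradient term from the KL divergence between the output distribution
    # and an exponential with mean mu (entropy-maximizing for a fixed mean).
    common = 1.0 - (2.0 + 1.0 / mu) * y + (y * y) / mu
    a_new = a + eta * (1.0 / a + x * common)      # gain update
    b_new = b + eta * common                      # bias (threshold) update
    return a_new, b_new, y
```

For a neuron saturating at y close to 1, `common` is close to -1, so both gain and bias are lowered, pushing the firing rate back toward `mu`.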
Looking Beyond Label Noise: Shifted Label Distribution Matters in Distantly Supervised Relation Extraction
In recent years there has been a surge of interest in applying distant
supervision (DS) to automatically generate training data for relation
extraction (RE). In this paper, we study what limits the performance of
DS-trained neural models, conduct thorough analyses, and identify a factor
that can greatly influence performance: shifted label distribution.
Specifically, we find that this problem commonly exists in real-world DS
datasets and that, without special handling, typical DS-RE models cannot
automatically adapt to this shift, resulting in deteriorated performance. To
further validate our intuition, we
develop a simple yet effective adaptation method for DS-trained models, bias
adjustment, which updates models learned over the source domain (i.e., DS
training set) with a label distribution estimated on the target domain (i.e.,
test set). Experiments demonstrate that bias adjustment achieves consistent
performance gains on DS-trained models, especially on neural models, with up
to a 23% relative F1 improvement, which verifies our assumptions. Our code and
data can be found at
\url{https://github.com/INK-USC/shifted-label-distribution}.
Comment: 13 pages: 10 pages paper, 3 pages appendix. Appears at EMNLP 201
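The bias-adjustment idea (shifting a trained model's output scores away from the source label prior and toward a prior estimated on the target domain) can be sketched as a log-prior correction on the classifier logits. This is an illustrative sketch: the function and variable names are ours, and the paper's exact procedure for estimating the target distribution may differ.

```python
import numpy as np

def bias_adjust(logits, p_source, p_target, eps=1e-12):
    """Shift classifier logits from the source (DS training set) label
    prior toward the estimated target (test set) prior.
    Illustrative sketch; the paper's exact procedure may differ."""
    shift = np.log(np.asarray(p_target) + eps) - np.log(np.asarray(p_source) + eps)
    return np.asarray(logits, dtype=float) + shift

# A prediction dominated by the over-represented source label can flip
# once the estimated target prior is taken into account:
adjusted = bias_adjust([2.0, 1.9], p_source=[0.9, 0.1], p_target=[0.3, 0.7])
```

Here the second label wins after adjustment, because its target-domain prior is much larger than its source-domain prior.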
Acoustic Echo and Noise Cancellation System for Hands-Free Telecommunication using Variable Step Size Algorithms
In this paper, an acoustic echo cancellation system with double-talk detection is implemented for a hands-free telecommunication system using Matlab. An adaptive noise canceller with blind source separation (ANC-BSS) system is proposed to remove both background noise and the far-end speaker echo signal in the presence of double-talk. In the absence of double-talk, the far-end speaker echo signal is cancelled by an adaptive echo canceller. Both the adaptive noise canceller and the adaptive echo canceller are implemented using the LMS, NLMS, VSLMS and VSNLMS algorithms. The normalized cross-correlation method is used for double-talk detection. VSNLMS outperforms all other algorithms both during double-talk and in its absence: in the absence of double-talk it yields a higher ERLE and lower misalignment, and in the presence of double-talk it improves the SNR of the near-end speaker signal.
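A minimal NLMS echo canceller, one of the algorithms compared above, can be sketched as follows, assuming the far-end reference signal `x` and the microphone signal `d` are available; the signal names, filter length and step size are illustrative.

```python
import numpy as np

def nlms_echo_cancel(x, d, M=8, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive echo canceller (illustrative sketch).
    x: far-end reference signal, d: microphone signal containing the echo,
    M: adaptive filter length, mu: step size. Returns the error signal
    e = d - y (the echo-cancelled output) and the final filter weights."""
    w = np.zeros(M)
    e = np.zeros(len(x))
    for n in range(M - 1, len(x)):
        u = x[n - M + 1:n + 1][::-1]        # most recent M reference samples
        y = w @ u                            # echo estimate
        e[n] = d[n] - y
        w += mu * e[n] * u / (u @ u + eps)   # normalized weight update
    return e, w

# Simulated echo path: d is x filtered by a short unknown FIR response.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
d = np.convolve(x, [0.5, -0.3, 0.2])[:len(x)]
e, w = nlms_echo_cancel(x, d)
```

After convergence the residual energy in `e` falls far below the echo energy in `d`. The variable step-size variants (VSLMS, VSNLMS) replace the fixed `mu` with a time-varying step size to trade off convergence speed against steady-state misalignment.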