
    Adversarial Speaker Adaptation

    We propose a novel adversarial speaker adaptation (ASA) scheme, in which adversarial learning is applied to regularize the distribution of deep hidden features in a speaker-dependent (SD) deep neural network (DNN) acoustic model so that it stays close to that of a fixed speaker-independent (SI) DNN acoustic model during adaptation. An additional discriminator network is introduced to distinguish the deep features generated by the SD model from those produced by the SI model. In ASA, with the fixed SI model as the reference, the SD model is jointly optimized with the discriminator network to minimize the senone classification loss and simultaneously to mini-maximize the SI/SD discrimination loss on the adaptation data. With ASA, the SD model learns senone-discriminative deep features whose distribution stays similar to that of the SI model. With such regularized and adapted deep features, the SD model achieves improved automatic speech recognition on the target speaker's speech. Evaluated on the Microsoft short message dictation dataset, ASA achieves 14.4% and 7.9% relative word error rate improvements for supervised and unsupervised adaptation, respectively, over an SI model trained on 2,600 hours of data, with 200 adaptation utterances per speaker. Comment: 5 pages, 2 figures, ICASSP 2019
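
    The minimax objective described in this abstract lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of one ASA update, not the authors' implementation: the network sizes, optimizer settings, the helper names (FeatureNet, asa_step), and the adversarial weight lam are illustrative assumptions.

        # Hypothetical sketch of the ASA minimax update (illustrative, not the paper's code).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        feat_dim, hid_dim, n_senones = 80, 512, 9000    # assumed sizes

        class FeatureNet(nn.Module):
            """Lower layers producing the deep hidden features."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(nn.Linear(feat_dim, hid_dim), nn.ReLU(),
                                         nn.Linear(hid_dim, hid_dim), nn.ReLU())
            def forward(self, x):
                return self.net(x)

        si_feat = FeatureNet()                          # fixed SI reference
        sd_feat = FeatureNet()                          # SD model, initialized from SI
        sd_feat.load_state_dict(si_feat.state_dict())
        for p in si_feat.parameters():
            p.requires_grad_(False)

        senone_clf = nn.Linear(hid_dim, n_senones)      # upper layers of the SD model
        disc = nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU(),
                             nn.Linear(hid_dim, 1))     # SI/SD discriminator
        lam = 0.5                                       # adversarial weight (assumed)

        opt_sd = torch.optim.Adam(list(sd_feat.parameters()) +
                                  list(senone_clf.parameters()), lr=1e-4)
        opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

        def asa_step(x, senone_labels):
            """One adaptation step on a batch of frames x with senone targets."""
            f_sd = sd_feat(x)
            with torch.no_grad():
                f_si = si_feat(x)
            ones = torch.ones(x.size(0), 1)
            zeros = torch.zeros(x.size(0), 1)

            # 1) Discriminator: tell SD features (label 1) from SI features (label 0).
            d_loss = (F.binary_cross_entropy_with_logits(disc(f_sd.detach()), ones) +
                      F.binary_cross_entropy_with_logits(disc(f_si), zeros))
            opt_d.zero_grad(); d_loss.backward(); opt_d.step()

            # 2) SD model: minimize the senone loss while fooling the discriminator,
            #    pushing the SD feature distribution toward the SI reference.
            ce = F.cross_entropy(senone_clf(f_sd), senone_labels)
            adv = F.binary_cross_entropy_with_logits(disc(f_sd), zeros)
            opt_sd.zero_grad(); (ce + lam * adv).backward(); opt_sd.step()
            return ce.item(), d_loss.item()

        x = torch.randn(32, feat_dim)                   # 32 frames of assumed features
        y = torch.randint(0, n_senones, (32,))
        ce_loss, d_loss = asa_step(x, y)

    Alternating the two updates plays the role that a gradient reversal layer often serves in such minimax objectives: the discriminator sharpens its SI/SD decision boundary while the SD model learns to cross it.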

    Attentive Adversarial Learning for Domain-Invariant Training

    Adversarial domain-invariant training (ADIT) has proved effective in suppressing the effects of domain variability in acoustic modeling and has led to improved performance in automatic speech recognition (ASR). In ADIT, an auxiliary domain classifier takes in equally weighted deep features from a deep neural network (DNN) acoustic model and is trained to improve their domain invariance by optimizing an adversarial loss function. In this work, we propose attentive ADIT (AADIT), in which we augment the domain classifier with an attention mechanism that automatically weights the input deep features according to their importance for domain classification. With this attentive re-weighting, AADIT can focus domain normalization on the phonetic components that are most susceptible to domain variability, and it generates deep features with improved domain invariance and senone discriminativity over ADIT. Most importantly, the attention block serves only as an external component to the DNN acoustic model and is not involved in ASR inference, so AADIT can improve acoustic modeling with any DNN architecture. More generally, the same methodology can improve any adversarial learning system with an auxiliary discriminator. Evaluated on the CHiME-3 dataset, AADIT achieves 13.6% and 9.3% relative WER improvements, respectively, over a multi-conditional model and a strong ADIT baseline. Comment: 5 pages, 1 figure, ICASSP 2019
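
    A minimal PyTorch sketch of an attention-weighted domain classifier of the kind described above; the class name, pooling scheme, and all dimensions are assumptions for illustration, not the paper's architecture.

        # Hypothetical sketch of an attention-weighted domain classifier (AADIT-style).
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class AttentiveDomainClassifier(nn.Module):
            """Scores each frame's deep feature, then classifies the domain from the
            attention-weighted sum instead of an equally weighted average."""
            def __init__(self, hid_dim=512, n_domains=2):
                super().__init__()
                self.score = nn.Linear(hid_dim, 1)       # per-frame relevance score
                self.clf = nn.Linear(hid_dim, n_domains)

            def forward(self, feats):                    # feats: (batch, T, hid_dim)
                alpha = torch.softmax(self.score(feats), dim=1)  # weights over time
                pooled = (alpha * feats).sum(dim=1)              # (batch, hid_dim)
                return self.clf(pooled)

        # The adversarial loss is then computed on these logits as in plain ADIT; the
        # attention block sits outside the acoustic model and is unused at test time.
        feats = torch.randn(4, 100, 512)                 # 4 utterances, 100 frames each
        logits = AttentiveDomainClassifier()(feats)
        domain_loss = F.cross_entropy(logits, torch.tensor([0, 1, 0, 1]))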

    Understanding multidrug resistance in Gram-negative bacteria -- A study of a drug efflux pump AcrB and a periplasmic chaperone SurA

    Multidrug resistance (MDR) is a severe problem in the treatment of and recovery from infections. Gram-negative bacteria intrinsically exhibit higher drug tolerance than Gram-positive microbes. In this thesis, two proteins involved in Gram-negative bacterial MDR were studied: AcrB and SurA. The resistance-nodulation-cell division pump AcrAB-TolC is the major MDR efflux system in Gram-negative bacteria and efficiently extrudes a broad range of substances from the cell. To study subtle conformational changes of AcrB in vivo, a reporter platform was designed: cysteine pairs were introduced into different regions of the periplasmic domain of the protein, and the extent of disulfide bond formation was examined. Using this platform, an inactive mutant, AcrB∆loop, was created that existed as a well-folded monomer in vivo. Next, random mutagenesis was performed on a functionally compromised mutant, AcrBP223G, to identify residues that restored the lost function, and the mechanism of this restoration was examined. SurA is a periplasmic molecular chaperone involved in outer membrane biogenesis. Deletion of SurA decreased outer membrane density and bacterial drug resistance. The dependence of SurA function on its structural flexibility and stability was examined. In addition, the effect of molecular crowding on the interaction of SurA with its outer membrane protein substrates was examined.

    Conditional Teacher-Student Learning

    Teacher-student (T/S) learning has been shown to be effective for a variety of problems such as domain adaptation and model compression. One shortcoming of T/S learning is that the teacher model, which is not always perfect, sporadically produces wrong guidance in the form of posterior probabilities that mislead the student model toward suboptimal performance. To overcome this problem, we propose a conditional T/S learning scheme, in which a "smart" student model selectively chooses to learn from either the teacher model or the ground truth labels, conditioned on whether the teacher can correctly predict the ground truth. Unlike a naive linear combination of the two knowledge sources, conditional learning engages exclusively with the teacher model when the teacher's prediction is correct, and otherwise backs off to the ground truth. The student model is thus able to learn effectively from the teacher and even potentially surpass it. We examine the proposed learning scheme on two tasks: domain adaptation on the CHiME-3 dataset and speaker adaptation on the Microsoft short message dictation dataset. The proposed method achieves 9.8% and 12.8% relative word error rate reductions, respectively, over T/S learning for environment adaptation and over a speaker-independent model for speaker adaptation. Comment: 5 pages, 1 figure, ICASSP 2019
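
    The selection rule described above reduces to a per-frame switch between a distillation loss and a cross-entropy loss. A minimal PyTorch sketch follows; the function name and shapes are illustrative assumptions, not the authors' code.

        # Hypothetical sketch of the per-frame conditional T/S switch.
        import torch
        import torch.nn.functional as F

        def conditional_ts_loss(student_logits, teacher_logits, labels):
            """Distill from the teacher's posteriors only on frames where the teacher
            predicts the ground truth; back off to the hard labels elsewhere."""
            teacher_correct = teacher_logits.argmax(dim=-1).eq(labels)      # (N,) bool
            kd = F.kl_div(F.log_softmax(student_logits, dim=-1),
                          F.softmax(teacher_logits, dim=-1),
                          reduction='none').sum(dim=-1)                     # per-frame KL
            ce = F.cross_entropy(student_logits, labels, reduction='none')  # per-frame CE
            return torch.where(teacher_correct, kd, ce).mean()

        # Example: 8 frames, 10 senone classes.
        s = torch.randn(8, 10, requires_grad=True)       # student logits
        t = torch.randn(8, 10)                           # teacher logits
        y = torch.randint(0, 10, (8,))                   # ground truth labels
        conditional_ts_loss(s, t, y).backward()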

    Physics of neutrino flavor transformation through matter-neutrino resonances

    In astrophysical environments such as core-collapse supernovae and neutron star-neutron star or neutron star-black hole mergers, where dense neutrino media are present, matter-neutrino resonances (MNRs) can occur when the neutrino propagation potentials due to neutrino-electron and neutrino-neutrino forward scattering nearly cancel each other. We show that neutrino flavor transformation through MNRs can be explained by multiple adiabatic solutions similar to the Mikheyev-Smirnov-Wolfenstein mechanism. We find that for the normal neutrino mass hierarchy, neutrino flavor evolution through MNRs can be sensitive to the shape of the neutrino spectra and the adiabaticity of the system, but such sensitivity is absent for the inverted hierarchy. Comment: 7 pages, 4 figures
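
    Schematically, and in notation assumed here rather than taken from the paper, the near-cancellation the abstract describes balances the matter potential against a neutrino-neutrino potential that goes negative when antineutrinos dominate the flux, as they can in merger outflows:

        % Illustrative MNR condition; notation assumed, not taken from the paper.
        \begin{aligned}
          \lambda &= \sqrt{2}\, G_F\, n_e
            && \text{(neutrino--electron forward scattering)} \\
          \Phi &\propto \sqrt{2}\, G_F \left( n_{\nu_e} - n_{\bar{\nu}_e} \right)
            && \text{(neutrino--neutrino forward scattering)} \\
          \lambda + \Phi &\approx 0
            && \text{(matter--neutrino resonance)}
        \end{aligned}

    The cancellation requires the self-interaction potential to be negative, i.e. an antineutrino-dominated neutrino field with n_{\bar{\nu}_e} > n_{\nu_e}.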