DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation
In previous work, only the parameter weights of ASR models were optimized, under a fixed-topology architecture. However, the design of a successful model architecture has always relied on human experience and intuition, and many hyperparameters related to the model architecture must be tuned manually. In this paper, we therefore propose an ASR approach with efficient gradient-based architecture search, DARTS-ASR. To examine the generalizability of DARTS-ASR, we apply our approach not only to monolingual ASR in many languages, but also to a multilingual ASR setting. Following previous work, we conducted experiments on a multilingual dataset, IARPA BABEL. The experimental results show that our approach outperformed the baseline fixed-topology architecture, with 10.2% and 10.0% relative reductions in character error rate under the monolingual and multilingual ASR settings, respectively. Furthermore, we analyze the architectures searched by DARTS-ASR.
Comment: Accepted at INTERSPEECH 202
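To make the gradient-based search concrete, below is a minimal sketch of the DARTS-style mixed operation that this kind of differentiable architecture search builds on; the candidate operations and tensor shapes are illustrative assumptions, not the exact DARTS-ASR search space.

```python
# Minimal sketch of a DARTS-style mixed operation (continuous relaxation of
# the choice among candidate operations). The operation set below is an
# assumption for illustration, not the DARTS-ASR search space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted mixture of candidate operations on one edge."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 conv
            nn.Conv2d(channels, channels, 5, padding=2),  # 5x5 conv
            nn.MaxPool2d(3, stride=1, padding=1),         # max pooling
            nn.Identity(),                                # skip connection
        ])
        # One learnable architecture weight (alpha) per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # Continuous relaxation: the output is a weighted mixture, so alpha
        # can be optimized by gradient descent alongside the model weights.
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

After the search converges, the operation with the largest alpha on each edge is kept and the mixture is collapsed into a fixed discrete architecture, which replaces the hand-designed fixed topology.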
Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition
Low-resource automatic speech recognition (ASR) is challenging because the limited target-language data cannot train an ASR model well. To address this issue, meta-learning formulates the ASR problem for each source language as many small ASR tasks and meta-learns a model initialization over all tasks from the different source languages, enabling fast adaptation to unseen target languages. However, task quantity and difficulty vary greatly across source languages because of their different data scales and diverse phonological systems, which leads to task-quantity and task-difficulty imbalance and thus the failure of multilingual meta-learning ASR (MML-ASR). In this work, we solve this problem by developing a novel adversarial meta sampling (AMS) approach to improve MML-ASR. When sampling tasks in MML-ASR, AMS adaptively determines the task sampling probability for each source language. Specifically, a large query loss for a source language indicates that its tasks have not been sampled well enough, in terms of quantity and difficulty, to train the ASR model, and that the language should therefore be sampled more frequently for further learning. Motivated by this observation, we feed the historical task query losses of all source-language domains into a network that learns a task sampling policy by adversarially increasing the current query loss of MML-ASR. The learned policy thereby tracks the learning situation of each language and predicts good task sampling probabilities for more effective learning. Finally, experimental results on two multilingual datasets show significant performance improvements when applying AMS to MML-ASR, and also demonstrate the applicability of AMS to other low-resource speech tasks and transfer-learning ASR approaches.
Comment: Accepted at AAAI202
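As a rough illustration of the sampling idea, the following hypothetical sketch shows a small policy network that maps per-language query-loss histories to a task-sampling distribution; the class name, input layout, and network size are assumptions, and the paper's actual policy architecture and adversarial training loop differ.

```python
# Hypothetical sketch of adversarial meta sampling: a policy network maps
# each source language's recent query losses to a sampling probability.
# TaskSampler, n_languages, and history_len are illustrative names, not
# from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskSampler(nn.Module):
    """Maps per-language query-loss histories to sampling probabilities."""
    def __init__(self, history_len, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(history_len, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, loss_history):
        # loss_history: (n_languages, history_len) of past query losses.
        scores = self.net(loss_history).squeeze(-1)   # (n_languages,)
        return F.softmax(scores, dim=0)               # sampling distribution

sampler = TaskSampler(history_len=5)
probs = sampler(torch.rand(10, 5))  # 10 source languages, 5 past losses each
# Sample source languages for the next meta-batch in proportion to probs.
lang_ids = torch.multinomial(probs, num_samples=4, replacement=True)
# The sampler itself is then updated adversarially, i.e. toward sampling
# choices that increase the meta-learner's current query loss, so poorly
# learned languages get sampled more often.
```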