Black-box Generalization of Machine Teaching
Hypothesis pruning maximizes the hypothesis updates in active learning in order to
find the desired unlabeled data. An inherent assumption is that this learning
manner can drive those updates toward the optimal hypothesis. However,
convergence may not be well guaranteed if the incremental updates are
negative and disordered. In this paper, we introduce a black-box teaching
hypothesis employing a tighter slack term to replace the typical slack term
used for pruning. Theoretically, we prove that, under the
guidance of this teaching hypothesis, the learner converges to tighter
generalization error and label complexity bounds than a non-educated
learner who receives no guidance from a teacher: 1) the generalization
error upper bound is reduced, and 2) the label complexity upper bound is
decreased. To be strict with our assumption, self-improvement of teaching is
first proposed for the case where the teaching hypothesis only loosely
approximates the optimal hypothesis. We further
consider two teaching scenarios: teaching a white-box learner and teaching a
black-box learner.
Experiments verify this idea and show better generalization performance than
fundamental active learning strategies such as IWAL and IWAL-D.
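The IWAL baseline mentioned above queries labels with a probability driven by how much the surviving hypotheses disagree on a point, then importance-weights the queried labels to keep risk estimates unbiased. As a minimal sketch (not the paper's method; `iwal_query_prob`, `oracle`, and the disagreement measure are illustrative assumptions):

```python
import random

def iwal_query_prob(hypotheses, x, p_min=0.1):
    # Disagreement: spread of predictions across the current hypothesis set.
    # A floor p_min keeps every point queryable, bounding importance weights.
    preds = [h(x) for h in hypotheses]
    disagreement = max(preds) - min(preds)
    return max(p_min, min(1.0, disagreement))

def iwal_step(hypotheses, x, oracle, p_min=0.1):
    # Query the label with probability p; if queried, weight it by 1/p so
    # the weighted loss remains an unbiased estimate of the true loss.
    p = iwal_query_prob(hypotheses, x, p_min)
    if random.random() < p:
        y = oracle(x)
        return (x, y, 1.0 / p)
    return None  # label not requested for this point
```

The teaching idea in the abstract would, roughly, tighten the slack used when pruning `hypotheses` between such steps, so fewer disordered updates survive.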
Adaptation Algorithms for Neural Network-Based Speech Recognition: An Overview
We present a structured overview of adaptation algorithms for neural
network-based speech recognition, considering both hybrid hidden Markov model /
neural network systems and end-to-end neural network systems, with a focus on
speaker adaptation, domain adaptation, and accent adaptation. The overview
characterizes adaptation algorithms as based on embeddings, model parameter
adaptation, or data augmentation. We present a meta-analysis of the performance
of speech recognition adaptation algorithms, based on relative error rate
reductions as reported in the literature.
Comment: Submitted to IEEE Open Journal of Signal Processing. 30 pages, 27
figures.
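Of the three adaptation families the overview names, the embedding-based one is the simplest to illustrate: an utterance-level speaker embedding (e.g. an x-vector) is appended to every frame of the acoustic features before they enter the network. A minimal sketch, assuming NumPy arrays and a hypothetical `append_speaker_embedding` helper:

```python
import numpy as np

def append_speaker_embedding(features, embedding):
    # features:  (T, F) frame-level acoustic features for one utterance
    # embedding: (E,)   utterance-level speaker embedding, e.g. an x-vector
    T = features.shape[0]
    tiled = np.tile(embedding, (T, 1))          # repeat the embedding per frame
    return np.concatenate([features, tiled], 1)  # (T, F + E) network input
```

Model-parameter adaptation and data augmentation, by contrast, change the network weights or the training data themselves rather than the input representation.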