Integrated Parameter-Efficient Tuning for General-Purpose Audio Models
The advent of hyper-scale and general-purpose pre-trained models is shifting
the paradigm of building task-specific models for target tasks. In the field of
audio research, task-agnostic pre-trained models with high transferability and
adaptability have achieved state-of-the-art performances through fine-tuning
for downstream tasks. Nevertheless, re-training all the parameters of these
massive models entails an enormous amount of time and cost, along with a huge
carbon footprint. To overcome these limitations, the present study explores and
applies efficient transfer learning methods in the audio domain. We also
propose an integrated parameter-efficient tuning (IPET) framework by
aggregating the embedding prompt (a prompt-based learning approach), and the
adapter (an effective transfer learning method). We demonstrate the efficacy of
the proposed framework using two backbone pre-trained audio models with
different characteristics: the audio spectrogram transformer and wav2vec 2.0.
The proposed IPET framework exhibits remarkable performance compared to the
fine-tuning method, with fewer trainable parameters, in four downstream tasks:
sound event classification, music genre classification, keyword spotting, and
speaker verification. Furthermore, we identify and analyze the
shortcomings of the IPET framework, providing lessons and research directions
for parameter-efficient tuning in the audio domain.
Comment: 5 pages, 3 figures, submitted to ICASSP202
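The abstract names two ingredients, embedding prompts and adapters, without implementation detail. As a minimal sketch of the adapter idea alone (a small residual bottleneck inserted into an otherwise frozen backbone), assuming numpy and illustrative sizes not taken from the paper:

```python
import numpy as np

# Hedged sketch: a bottleneck adapter as used in parameter-efficient
# transfer learning. All names and dimensions are illustrative, not
# taken from the IPET paper.
def adapter(x, w_down, w_up):
    """Residual bottleneck: down-project, ReLU, up-project, add back."""
    h = np.maximum(0.0, x @ w_down)   # (batch, r)
    return x + h @ w_up               # (batch, d), residual connection

rng = np.random.default_rng(0)
d, r = 768, 16                        # hidden size, bottleneck size (assumed)
x = rng.standard_normal((4, d))       # activations from the frozen backbone
w_down = rng.standard_normal((d, r)) * 0.01
w_up = np.zeros((r, d))               # zero-init: adapter starts as identity

y = adapter(x, w_down, w_up)
# Only the 2*d*r adapter weights are trained, versus d*d for a full layer.
```

With the up-projection initialised to zero, the adapter initially passes activations through unchanged, so training starts from the pretrained model's behavior.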
Voice Biometrics under Mismatched Noise Conditions
This thesis describes research into effective voice biometrics (speaker recognition) under mismatched noise conditions. Over the last two decades, this class of biometrics has been the subject of considerable research due to its various applications in such areas as telephone banking, remote access control and surveillance. One of the main challenges associated with the deployment of voice biometrics in practice is that of undesired variations in speech characteristics caused by environmental noise. Such variations can in turn lead to a mismatch between the corresponding test and reference material from the same speaker. This is found to adversely affect the performance of speaker recognition in terms of accuracy.
To address the above problem, a novel approach is introduced and investigated. The proposed method is based on minimising the noise mismatch between reference speaker models and the given test utterance, and involves a new form of Test-Normalisation (T-Norm) for further enhancing matching scores under the aforementioned adverse operating conditions. Through experimental investigations based on the two main classes of speaker recognition (i.e. verification and open-set identification), it is shown that the proposed approach can significantly improve recognition accuracy under mismatched noise conditions.
In order to further improve the recognition accuracy under severe mismatch conditions, an enhancement of the above method is proposed. This enhancement, which adjusts the reference speaker models more closely to the noise condition in the test utterance, is shown to considerably increase accuracy in extreme cases of noisy test data. Moreover, to tackle the computational burden associated with using the enhanced approach in open-set identification, an efficient algorithm for its realisation in this context is introduced and evaluated.
The thesis presents a detailed description of the research undertaken, describes the experimental investigations and provides a thorough analysis of the outcomes.
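The thesis does not spell out the T-Norm computation, but its standard form standardises a raw verification score using the mean and standard deviation of the scores the same test utterance obtains against a cohort of impostor models. A minimal sketch under that assumption, with illustrative numbers only:

```python
import numpy as np

# Hedged sketch of Test-Normalisation (T-Norm) for speaker verification.
# The cohort scores are what the test utterance scores against a set of
# impostor (non-target) speaker models.
def t_norm(raw_score, cohort_scores):
    mu = cohort_scores.mean()
    sigma = cohort_scores.std()
    return (raw_score - mu) / sigma

# Illustrative values, not from the thesis.
cohort = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
normalised = t_norm(2.0, cohort)
```

Because the normalisation statistics come from the test utterance itself, they track the acoustic condition of the test side, which is why T-Norm helps under the mismatched noise conditions the thesis targets.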
Transfer Learning for Speech and Language Processing
Transfer learning is a vital technique that generalizes models trained for
one setting or task to other settings or tasks. For example, in speech
recognition, an acoustic model trained for one language can be used to
recognize speech in another language, with little or no re-training data.
Transfer learning is closely related to multi-task learning (cross-lingual vs.
multilingual), and has traditionally been studied under the name `model adaptation'.
Recent advances in deep learning show that transfer learning becomes much
easier and more effective with high-level abstract features learned by deep
models, and that the `transfer' can be conducted not only between data distributions
and data types, but also between model structures (e.g., shallow nets and deep
nets) or even model types (e.g., Bayesian models and neural models). This
review paper summarizes some recent prominent research in this direction,
particularly for speech and language processing. We also report some results
from our group and highlight the potential of this very interesting research
field.
Comment: 13 pages, APSIPA 201
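The feature-level transfer the abstract describes is commonly realised by reusing a pretrained layer as a frozen feature extractor and training only a small task-specific head. A minimal sketch of that pattern, with all weights and shapes illustrative rather than drawn from the paper:

```python
import numpy as np

# Hedged sketch of feature-level transfer learning: the pretrained layer
# is kept fixed and only the new head is trained on the target task.
def features(x, w_frozen):
    return np.maximum(0.0, x @ w_frozen)   # pretrained features, kept fixed

def predict(x, w_frozen, w_head):
    return features(x, w_frozen) @ w_head  # only w_head is (re-)trained

rng = np.random.default_rng(1)
w_frozen = rng.standard_normal((40, 64))   # e.g. acoustic frames -> embedding
w_head = rng.standard_normal((64, 3))      # new task with 3 classes (assumed)
logits = predict(rng.standard_normal((2, 40)), w_frozen, w_head)
```

This is the simplest point on the spectrum the paper surveys; full model adaptation would also update the pretrained weights.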
Histogram equalization for robust text-independent speaker verification in telephone environments