MCE 2018: The 1st Multi-target Speaker Detection and Identification Challenge Evaluation
The Multi-target Challenge aims to assess how well current speech technology
is able to determine whether or not a recorded utterance was spoken by one of a
large number of blacklisted speakers. It is a form of multi-target speaker
detection based on real-world telephone conversations. Data recordings are
generated from call center customer-agent conversations. The task is to measure
how accurately one can detect 1) whether a test recording is spoken by a
blacklisted speaker, and 2) which specific blacklisted speaker was talking.
This paper outlines the challenge and provides its baselines, results, and
discussions.
Comment: http://mce.csail.mit.edu . arXiv admin note: text overlap with arXiv:1807.0666
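The task implies a two-stage decision: score a test utterance against every blacklisted speaker, then threshold the best score for detection and take the arg-max for identification. A minimal sketch in Python, assuming fixed-size speaker embeddings (e.g., i-vectors) have already been extracted; the cosine backend, function name, and threshold handling are illustrative assumptions, not the challenge baseline:

    # Hypothetical sketch of blacklist (multi-target) scoring from fixed-size
    # speaker embeddings; the cosine backend is an assumption, not the baseline.
    import numpy as np

    def score_against_blacklist(test_emb, blacklist_embs):
        """Return the best-matching blacklist index and its detection score."""
        test = test_emb / np.linalg.norm(test_emb)
        black = blacklist_embs / np.linalg.norm(blacklist_embs, axis=1, keepdims=True)
        scores = black @ test               # cosine similarity to each blacklisted speaker
        best = int(np.argmax(scores))       # task 2: which blacklisted speaker
        return best, float(scores[best])    # task 1: compare this score to a threshold

    # Usage: flag the recording as blacklisted if the returned score exceeds a
    # threshold tuned on development data.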
Time-Contrastive Learning Based Deep Bottleneck Features for Text-Dependent Speaker Verification
There are a number of studies on extracting bottleneck (BN) features from
deep neural networks (DNNs) trained to discriminate speakers, pass-phrases,
and triphone states in order to improve the performance of text-dependent
speaker verification (TD-SV). However, only moderate success has been achieved.
A recent study [1] presented a time-contrastive learning (TCL) concept to explore the
non-stationarity of brain signals for classification of brain states. Speech
signals have a similar non-stationarity property, and TCL has the further
advantage of requiring no labeled data. We therefore present a TCL-based
BN feature extraction method. The method uniformly partitions each speech
utterance in a training dataset into a predefined number of multi-frame
segments. Each segment in an utterance corresponds to one class, and class
labels are shared across utterances. DNNs are then trained to discriminate all
speech frames among the classes to exploit the temporal structure of speech. In
addition, we propose a segment-based unsupervised clustering algorithm to
re-assign class labels to the segments. TD-SV experiments were conducted on the
RedDots challenge database. The TCL-DNNs were trained using speech data of
fixed pass-phrases that were excluded from the TD-SV evaluation set, so the
learned features can be considered phrase-independent. We compare the
performance of the proposed TCL BN features with that of short-time cepstral
features and of BN features extracted from DNNs discriminating speakers,
pass-phrases, speaker+pass-phrase combinations, as well as monophones whose labels
and boundaries are generated by three different automatic speech recognition
(ASR) systems. Experimental results show that the proposed TCL-BN outperforms
cepstral features and speaker+pass-phrase discriminant BN features, and its
performance is on par with those of ASR-derived BN features. Moreover, ...
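The TCL labeling described above is simple enough to sketch: each training utterance is split uniformly into a fixed number of multi-frame segments, every frame takes its segment index as a class label, and the same label set is reused across utterances. A minimal Python sketch, where the segment count and the function name are illustrative assumptions:

    # Hypothetical sketch of time-contrastive learning (TCL) label assignment:
    # frames inherit the index of the uniform segment they fall into, and the
    # same n_segments class labels are shared by all utterances.
    import numpy as np

    def tcl_frame_labels(num_frames, n_segments=10):
        """Return a per-frame class label in [0, n_segments) for one utterance."""
        seg_len = max(1, num_frames // n_segments)
        return np.minimum(np.arange(num_frames) // seg_len, n_segments - 1)

    # A DNN is then trained to classify frames into these segment classes, and
    # BN features are taken from a bottleneck layer of that network.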
Context-aware Fine-tuning of Self-supervised Speech Models
Self-supervised pre-trained transformers have improved the state of the art
on a variety of speech tasks. Due to the quadratic time and space complexity of
self-attention, they usually operate at the level of relatively short (e.g.,
utterance) segments. In this paper, we study the use of context, i.e.,
surrounding segments, during fine-tuning and propose a new approach called
context-aware fine-tuning. We attach a context module on top of the last layer
of a pre-trained model to encode the whole segment into a context embedding
vector, which is then used as an additional feature for the final prediction.
During the fine-tuning stage, we introduce an auxiliary loss that encourages
this context embedding vector to be similar to context vectors of surrounding
segments. This allows the model to make predictions without access to these
surrounding segments at inference time and requires only a tiny overhead
compared to standard fine-tuned models. We evaluate the proposed approach using
the SLUE and Libri-light benchmarks for several downstream tasks: automatic
speech recognition (ASR), named entity recognition (NER), and sentiment
analysis (SA). The results show that context-aware fine-tuning not only
outperforms a standard fine-tuning baseline but also rivals a strong
context-injection baseline that uses neighboring speech segments during inference.
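A minimal sketch of this setup, assuming a PyTorch-style encoder; the pooling choice, module shapes, loss weight, and all names here are illustrative assumptions rather than the authors' exact configuration:

    # Hypothetical sketch of context-aware fine-tuning: a context module pools
    # the last-layer states into a context embedding, which is concatenated as
    # an extra feature, and an auxiliary loss pulls it toward the (detached)
    # context embeddings of neighboring segments seen during training.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContextAwareHead(nn.Module):
        def __init__(self, hidden_dim, num_classes):
            super().__init__()
            self.context_proj = nn.Linear(hidden_dim, hidden_dim)   # context module (simplified)
            self.classifier = nn.Linear(2 * hidden_dim, num_classes)

        def forward(self, hidden_states):                           # (batch, time, hidden_dim)
            ctx = self.context_proj(hidden_states.mean(dim=1))      # context embedding vector
            ctx_expanded = ctx.unsqueeze(1).expand_as(hidden_states)
            logits = self.classifier(torch.cat([hidden_states, ctx_expanded], dim=-1))
            return logits, ctx

    def auxiliary_context_loss(ctx, neighbor_ctx):
        # Make the segment's context embedding predictable without the neighbors,
        # so no surrounding audio is needed at inference time.
        return 1.0 - F.cosine_similarity(ctx, neighbor_ctx.detach(), dim=-1).mean()

    # total_loss = task_loss + aux_weight * auxiliary_context_loss(ctx, neighbor_ctx)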