Learning acoustic word embeddings with phonetically associated triplet network
Previous research on acoustic word embeddings used in query-by-example
spoken term detection has shown remarkable performance improvements when using
a triplet network. However, the triplet network is trained using only limited
information about acoustic similarity between words. In this paper, we propose
a novel architecture, phonetically associated triplet network (PATN), which
aims at increasing the discriminative power of acoustic word embeddings by
utilizing phonetic information as well as word identity. The proposed model is
trained to minimize a combined loss function, formed by introducing a
cross-entropy loss at a lower layer of an LSTM-based triplet network. We
observed that the proposed method performs significantly better than the
baseline triplet network on a word discrimination task with the WSJ dataset,
yielding over a 20% relative improvement in recall rate at 1.0 false alarms
per hour. Finally, we examined the generalization ability by conducting the
out-of-domain test on the RM dataset.

Comment: 5 pages, 4 figures, submitted to ICASSP 201
IEEE SLT 2021 Alpha-mini Speech Challenge: Open Datasets, Tracks, Rules and Baselines
The IEEE Spoken Language Technology Workshop (SLT) 2021 Alpha-mini Speech
Challenge (ASC) is intended to improve research on keyword spotting (KWS) and
sound source location (SSL) on humanoid robots. Many publications report
significant improvements in deep-learning-based KWS and SSL on open-source
datasets in recent years. For deep learning model training, it is necessary to
expand the data coverage to improve the robustness of the model. Thus, simulating
multi-channel noisy and reverberant data from single-channel speech, noise,
echo and room impulse responses (RIRs) is widely adopted. However, this
approach may introduce a mismatch between simulated data and recorded data in real
application scenarios, especially echo data. In this challenge, we open source
a sizable speech, keyword, echo and noise corpus for promoting data-driven
methods, particularly deep-learning approaches on KWS and SSL. We also choose
Alpha-mini, a humanoid robot produced by UBTECH and equipped with a built-in
four-microphone array on its head, to record development and evaluation sets
under the actual Alpha-mini robot application scenario, including ambient
noise as well as echo and mechanical noise generated by the robot itself, for
model evaluation. Furthermore, we illustrate the rules, evaluation methods and
baselines for researchers to quickly assess their achievements and optimize
their models.

Comment: Accepted at IEEE SLT 202
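The simulation pipeline mentioned above, convolving single-channel speech with per-microphone room impulse responses and mixing in noise at a target SNR, can be sketched roughly as follows. The function name, array shapes, and SNR convention are illustrative assumptions, not part of the challenge baseline.

```python
import numpy as np

def simulate_multichannel(speech, rirs, noise, snr_db=10.0):
    """Simulate a multi-channel noisy, reverberant recording.

    speech : (T,)   clean single-channel waveform
    rirs   : (C, L) one room impulse response per microphone channel
    noise  : (C, N) multi-channel noise recording, N >= T + L - 1
    Returns a (C, T + L - 1) array of reverberant speech plus
    noise scaled to the requested signal-to-noise ratio.
    """
    n_ch, rir_len = rirs.shape
    out_len = len(speech) + rir_len - 1
    # convolve the same dry speech with each channel's RIR
    reverberant = np.stack(
        [np.convolve(speech, rirs[c]) for c in range(n_ch)]
    )
    noise = noise[:, :out_len]
    # scale the noise so the mixture has the target SNR (in dB)
    sig_pow = np.mean(reverberant ** 2)
    noise_pow = np.mean(noise ** 2) + 1e-12  # avoid divide-by-zero
    scale = np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
    return reverberant + scale * noise
```

The mismatch the abstract warns about arises because such simulated echo and noise rarely match the robot's own loudspeaker echo and motor noise, which is why the challenge additionally records real development and evaluation data on the Alpha-mini itself.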