Visual Speech Enhancement
When video is shot in a noisy environment, the voice of a speaker seen in the
video can be enhanced using the visible mouth movements, reducing background
noise. While most existing methods use audio-only inputs, improved performance
is obtained with our visual speech enhancement, based on an audio-visual neural
network. We include in the training data videos to which we added the voice of
the target speaker as background noise. Since the audio input is not sufficient
to separate the voice of a speaker from his own voice, the trained model better
exploits the visual input and generalizes well to different noise types. The
proposed model outperforms prior audio-visual methods on two public lipreading
datasets. It is also the first to be demonstrated on a dataset not designed for
lipreading, such as the weekly addresses of Barack Obama.
Comment: Accepted to Interspeech 2018. Supplementary video:
https://www.youtube.com/watch?v=nyYarDGpcY
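As a concrete illustration of the data augmentation described above (adding the target speaker's own voice as background noise), here is a minimal sketch, assuming 1-D float waveforms at a common sample rate; the function name and SNR handling are illustrative, not the authors' code.

    # Hedged sketch: overlay a second utterance of the same speaker as
    # background noise at a chosen SNR, as described in the abstract above.
    import numpy as np

    def mix_with_own_voice(target_wav, distractor_wav, snr_db=0.0):
        """Mix a same-speaker distractor into the target at snr_db (power SNR)."""
        n = min(len(target_wav), len(distractor_wav))
        target, noise = target_wav[:n], distractor_wav[:n]
        target_power = np.mean(target ** 2) + 1e-8
        noise_power = np.mean(noise ** 2) + 1e-8
        # Scale the distractor so the target-to-distractor power ratio equals snr_db.
        scale = np.sqrt(target_power / (noise_power * 10 ** (snr_db / 10)))
        return target + scale * noise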
Tune-In: Training Under Negative Environments with Interference for Attention Networks Simulating Cocktail Party Effect
We study the cocktail party problem and propose a novel attention network
called Tune-In, short for training under negative environments with
interference. It first learns two separate spaces of speaker-knowledge and
speech-stimuli based on a shared feature space, where a new block structure is
designed as the building block for all spaces, and then cooperatively solves
different tasks. Between the two spaces, information is exchanged via a novel
cross- and dual-attention mechanism, mimicking the bottom-up and top-down
processes of the human cocktail party effect. It turns out that
substantially discriminative and generalizable speaker representations can be
learnt in severely interfered conditions via our self-supervised training. The
experimental results confirm this seemingly paradoxical outcome. The learnt
speaker embedding has greater discriminative power than a standard speaker
verification method; meanwhile, Tune-In achieves consistently better speech
separation performance in terms of SI-SNRi and SDRi across all test modes, at
lower memory and computational cost, than state-of-the-art benchmark systems.
Comment: Accepted in AAAI 202
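For reference, the SI-SNRi figure quoted above is the improvement in scale-invariant SNR of the separated signal over the unprocessed mixture. A minimal sketch of the standard metric follows; the array handling is illustrative and unrelated to the Tune-In implementation.

    # Sketch of scale-invariant SNR and its improvement (SI-SNRi).
    import numpy as np

    def si_snr(estimate, reference, eps=1e-8):
        estimate = estimate - np.mean(estimate)
        reference = reference - np.mean(reference)
        s_target = np.dot(estimate, reference) / (np.dot(reference, reference) + eps) * reference
        e_noise = estimate - s_target
        return 10 * np.log10((np.sum(s_target ** 2) + eps) / (np.sum(e_noise ** 2) + eps))

    def si_snr_improvement(separated, mixture, reference):
        # Improvement of the separated signal over the raw mixture.
        return si_snr(separated, reference) - si_snr(mixture, reference)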
NeuroHeed: Neuro-Steered Speaker Extraction using EEG Signals
Humans possess the remarkable ability to selectively attend to a single
speaker amidst competing voices and background noise, known as selective
auditory attention. Recent studies in auditory neuroscience indicate a strong
correlation between the attended speech signal and the brain's elicited
neuronal activities, the latter of which can be measured using affordable and
non-intrusive electroencephalography (EEG) devices. In this study, we
present NeuroHeed, a speaker extraction model that leverages EEG signals to
establish a neuronal attractor which is temporally associated with the speech
stimulus, facilitating the extraction of the attended speech signal in a
cocktail party scenario. We propose both an offline and an online NeuroHeed,
with the latter designed for real-time inference. In the online NeuroHeed, we
additionally propose an autoregressive speaker encoder, which accumulates past
extracted speech signals for self-enrollment of the attended speaker
information into an auditory attractor that retains the attentional momentum
over time. Online NeuroHeed extracts the current window of the speech signals
with guidance from both attractors. Experimental results demonstrate that
NeuroHeed effectively extracts brain-attended speech signals, achieving high
signal quality, excellent perceptual quality, and intelligibility in a
two-speaker scenario.
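The online self-enrollment mechanism described above can be pictured as the following loop; this is an interpretation of the abstract, not the released model, and speaker_encoder, eeg_encoder and extractor are hypothetical callables.

    # Hedged sketch of online inference: past extracted windows self-enrol the
    # attended speaker into an auditory attractor, and the current window is
    # extracted under guidance from both the EEG and speaker attractors.
    import torch

    def online_extract(mixture_windows, eeg_windows, speaker_encoder, eeg_encoder, extractor):
        past_speech = []                      # previously extracted speech windows
        outputs = []
        for mix_win, eeg_win in zip(mixture_windows, eeg_windows):
            neural_attractor = eeg_encoder(eeg_win)
            if past_speech:                   # auditory attractor from self-enrolment
                speaker_attractor = speaker_encoder(torch.cat(past_speech, dim=-1))
            else:
                speaker_attractor = torch.zeros_like(neural_attractor)
            est_win = extractor(mix_win, neural_attractor, speaker_attractor)
            past_speech.append(est_win.detach())
            outputs.append(est_win)
        return torch.cat(outputs, dim=-1)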
Audio-visual End-to-end Multi-channel Speech Separation, Dereverberation and Recognition
Accurate recognition of cocktail party speech containing overlapping
speakers, noise and reverberation remains a highly challenging task to date.
Motivated by the invariance of visual modality to acoustic signal corruption,
an audio-visual multi-channel speech separation, dereverberation and
recognition approach featuring a full incorporation of visual information into
all system components is proposed in this paper. The efficacy of the video
input is consistently demonstrated in mask-based MVDR speech separation,
DNN-WPE or spectral mapping (SpecM) based speech dereverberation front-end and
Conformer ASR back-end. Audio-visual integrated front-end architectures
performing speech separation and dereverberation in a pipelined or joint
fashion via mask-based WPD are investigated. The error cost mismatch between
the speech enhancement front-end and ASR back-end components is minimized by
end-to-end jointly fine-tuning using either the ASR cost function alone, or its
interpolation with the speech enhancement loss. Experiments were conducted on
overlapped and reverberant mixture speech data constructed using simulation or
replay of the Oxford LRS2 dataset. The proposed audio-visual multi-channel
speech separation, dereverberation and recognition systems consistently
outperformed the comparable audio-only baseline by 9.1% and 6.2% absolute
(41.7% and 36.0% relative) word error rate (WER) reductions. Consistent speech
enhancement improvements were also obtained on PESQ, STOI and SRMR scores.
Comment: IEEE/ACM Transactions on Audio, Speech, and Language Processing
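The fine-tuning objective described above (ASR cost alone, or its interpolation with the speech enhancement loss) can be written as a simple weighted sum; the weight alpha and the two loss callables below are illustrative placeholders, not the paper's exact formulation.

    # Sketch of the interpolated end-to-end fine-tuning cost.
    def joint_loss(asr_loss, enhancement_loss, alpha=0.0):
        """alpha = 0 uses the ASR cost alone; alpha > 0 interpolates the two costs."""
        return (1.0 - alpha) * asr_loss + alpha * enhancement_loss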
Supervised Speaker Embedding De-Mixing in Two-Speaker Environment
Separating different speaker properties from a multi-speaker environment is
challenging. Instead of separating a two-speaker signal in signal space like
speech source separation, a speaker embedding de-mixing approach is proposed.
The proposed approach separates different speaker properties from a two-speaker
signal in embedding space. The proposed approach contains two steps. In step
one, the clean speaker embeddings are learned and collected by a residual
TDNN-based network. In step two, the two-speaker signal and the embedding of one of
the speakers are both input to a speaker embedding de-mixing network. The
de-mixing network is trained to generate the embedding of the other speaker
with a reconstruction loss. Speaker identification accuracy and the cosine similarity
score between the clean embeddings and the de-mixed embeddings are used to
evaluate the quality of the obtained embeddings. Experiments are carried out
on two kinds of data: artificially augmented two-speaker data (TIMIT) and
real-world recordings of two-speaker data (MC-WSJ). Six different speaker
embedding de-mixing architectures are investigated. Compared with the
performance on the clean speaker embeddings, the results show that one of the
proposed architectures achieves close performance, reaching 96.9%
identification accuracy and 0.89 cosine similarity.
Comment: Published at SLT202
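Step two of the approach described above amounts to a network that takes the two-speaker embedding plus one known speaker embedding and reconstructs the other speaker's embedding. A minimal PyTorch sketch follows; the layer sizes, embedding dimension and MSE criterion are assumptions, not the six architectures studied in the paper.

    # Hedged sketch of a speaker embedding de-mixing network and one training step.
    import torch
    import torch.nn as nn

    class EmbeddingDemixer(nn.Module):
        def __init__(self, emb_dim=512, hidden=1024):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, emb_dim),
            )

        def forward(self, mixture_emb, known_speaker_emb):
            # Concatenate the two-speaker embedding with the known speaker's embedding.
            return self.net(torch.cat([mixture_emb, known_speaker_emb], dim=-1))

    model = EmbeddingDemixer()
    criterion = nn.MSELoss()
    mixture_emb = torch.randn(8, 512)   # embedding of the two-speaker signal
    known_emb = torch.randn(8, 512)     # clean embedding of speaker A
    target_emb = torch.randn(8, 512)    # clean embedding of speaker B (to reconstruct)
    loss = criterion(model(mixture_emb, known_emb), target_emb)
    loss.backward()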