FaceFilter: Audio-visual speech separation using still images
The objective of this paper is to separate a target speaker's speech from a
mixture of two speakers using a deep audio-visual speech separation network.
Unlike previous works that used lip movement on video clips or pre-enrolled
speaker information as an auxiliary conditional feature, we use a single face
image of the target speaker. The conditional feature is obtained from the
facial appearance via a cross-modal biometric task in which audio and visual
identity representations share a latent space. Identities learnt from face
images force the network to isolate the matched speaker and extract that
speaker's voice from the mixed speech. This resolves the permutation problem
caused by swapped channel outputs, which frequently occurs in speech
separation tasks. The proposed
method is far more practical than video-based speech separation since user
profile images are readily available on many platforms. Also, unlike
speaker-aware separation methods, it is applicable on separation with unseen
speakers who have never been enrolled before. We show strong qualitative and
quantitative results on challenging real-world examples.
Comment: Under submission as a conference paper. Video examples:
https://youtu.be/ku9xoLh62
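The conditioning idea above can be sketched as follows. This is a toy illustration, not the authors' network: `face_embedding` stands in for a learned face encoder, and `conditioned_mask` shows one simple way (a sigmoid mask biased by the identity vector) that a speaker identity could condition a separation mask over a mixture spectrogram. All function names, shapes, and the random projections are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def face_embedding(face_image, dim=64):
    """Hypothetical stand-in for a face encoder: map an image to an
    identity vector in a space shared with audio identity features."""
    flat = face_image.reshape(-1)
    # Fixed random projection as a placeholder for learned weights.
    proj = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    v = proj @ flat
    return v / np.linalg.norm(v)

def conditioned_mask(mixture_spec, identity, dim=64):
    """Sketch of identity-conditioned masking: bias every time frame of
    the mixture spectrogram by the target identity, then squash to a
    soft mask in [0, 1]."""
    freq, _time = mixture_spec.shape
    proj = rng.standard_normal((freq, dim)) / np.sqrt(dim)
    bias = proj @ identity                    # (freq,)
    logits = mixture_spec + bias[:, None]     # broadcast over time frames
    return 1.0 / (1.0 + np.exp(-logits))      # sigmoid -> soft mask

face = rng.standard_normal((32, 32))          # toy "profile image"
mix = rng.standard_normal((129, 50))          # toy magnitude spectrogram
mask = conditioned_mask(mix, face_embedding(face))
separated = mask * mix                        # masked target estimate
print(mask.shape)
```

Because the mask depends on the target identity rather than on output channel order, each output is tied to a specific speaker, which is how this style of conditioning sidesteps the permutation problem.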
TMac: Temporal Multi-Modal Graph Learning for Acoustic Event Classification
Audiovisual data is everywhere in this digital age, which places higher
demands on the deep learning models developed for it. Handling the
information in multi-modal data well is the key to a better audiovisual
model. We observe that audiovisual data naturally carries temporal
attributes, such as the time information for each frame of a video. More
concretely, such data is inherently multi-modal, with audio and visual cues
that proceed in a strict chronological order. This indicates that temporal
information is important in multi-modal acoustic event modeling, both
within and across modalities. However, existing methods treat each modal
feature independently and simply fuse the results, neglecting temporal
relations and thus yielding sub-optimal performance. With this
motivation, we propose a Temporal Multi-modal graph learning method for
Acoustic event Classification, called TMac, by modeling such temporal
information via graph learning techniques. In particular, we construct a
temporal graph for each acoustic event, dividing its audio data and video data
into multiple segments. Each segment can be treated as a node, and the
temporal relationships between nodes as timestamps on their edges. In this
way, the model smoothly captures dynamic information both within and across
modalities. Several experiments demonstrate that TMac outperforms other
SOTA models. Our code is available at
https://github.com/MGitHubL/TMac.
Comment: This work has been accepted by ACM MM 2023 for publication.
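The graph construction described above can be sketched as follows. This is an assumed illustration of the abstract's description, not the TMac code: segments of each modality become nodes, consecutive segments within a modality are linked by timestamped intra-modal edges, and co-occurring audio/video segments are linked by inter-modal edges. The function name, node-dictionary layout, and edge format are all hypothetical.

```python
import numpy as np

def build_temporal_graph(audio_feats, video_feats):
    """Build a toy temporal graph for one acoustic event: each
    audio/video segment is a node; temporal relations between nodes
    are recorded as timestamps on the edges."""
    nodes, edges = [], []
    for t, f in enumerate(audio_feats):
        nodes.append({"id": f"a{t}", "modality": "audio", "feat": f})
    for t, f in enumerate(video_feats):
        nodes.append({"id": f"v{t}", "modality": "video", "feat": f})
    # Intra-modal edges: consecutive segments within a modality,
    # with the segment index serving as the edge timestamp.
    for prefix, feats in (("a", audio_feats), ("v", video_feats)):
        for t in range(len(feats) - 1):
            edges.append((f"{prefix}{t}", f"{prefix}{t + 1}", t))
    # Inter-modal edges: align co-occurring audio and video segments.
    for t in range(min(len(audio_feats), len(video_feats))):
        edges.append((f"a{t}", f"v{t}", t))
    return nodes, edges

audio = [np.zeros(8) for _ in range(4)]   # 4 toy audio segments
video = [np.zeros(8) for _ in range(4)]   # 4 toy video segments
nodes, edges = build_temporal_graph(audio, video)
print(len(nodes), len(edges))  # 8 nodes, 3 + 3 + 4 = 10 edges
```

A graph learning model can then propagate features along these timestamped edges, which is what lets this style of method capture dynamics both within and across modalities instead of fusing modality features independently.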
Audio self-supervised learning: a survey
Inspired by humans' cognitive ability to generalise knowledge and skills,
Self-Supervised Learning (SSL) aims to discover general representations
from large-scale data without requiring human annotations, which are
expensive and time-consuming to obtain. Its success in the fields of
computer vision and natural language processing has prompted its recent
adoption in audio and speech processing. Comprehensive reviews summarising
the knowledge in audio SSL are currently missing. To fill this gap, we
provide an overview of the SSL methods used for audio and speech processing
applications. We also summarise the empirical works that exploit the audio
modality in multi-modal SSL frameworks, as well as the existing benchmarks
suitable for evaluating the power of SSL in the computer audition domain.
Finally, we discuss some open problems and point out future directions for
the development of audio SSL.