A Review of Audio Features and Statistical Models Exploited for Voice Pattern Design
Audio fingerprinting, also known as audio hashing, is well established as a
powerful technique for audio identification and synchronization. It
involves two major steps: fingerprint (voice pattern) design and
matching search. While the first step concerns the derivation of a robust and
compact audio signature, the second usually requires knowledge of the
database and fast-search algorithms. Although this technique offers a wide range
of real-world applications, to the best of the authors' knowledge, the most
recent comprehensive survey of existing algorithms appeared more than eight
years ago. Thus, in this paper, we present a more up-to-date review and, to
emphasize the audio signal processing aspect, we focus our state-of-the-art
survey on the fingerprint design step, for which various audio features and
their tractable statistical models are discussed.
Comment: http://www.iaria.org/conferences2015/PATTERNS15.html ; Seventh
International Conferences on Pervasive Patterns and Applications (PATTERNS
2015), Mar 2015, Nice, France
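To make the fingerprint design step more concrete, below is a minimal sketch of one well-known design of the kind such reviews discuss: a Haitsma-Kalker-style binary signature built from the signs of sub-band energy differences. The sample rate, band edges, frame size, and hop size are illustrative assumptions, not parameters taken from this paper.

```python
# Minimal sketch of a Haitsma-Kalker-style binary audio fingerprint.
# Parameters below (sample rate, 300-2000 Hz band range, frame/hop sizes)
# are illustrative assumptions only.
import numpy as np
from scipy.signal import stft

def binary_fingerprint(audio, sr=5000, n_bands=33, frame=2048, hop=64):
    """Return an array of shape (n_frames - 1, n_bands - 1) fingerprint bits."""
    # Short-time Fourier transform -> power spectrogram
    _, _, Z = stft(audio, fs=sr, nperseg=frame, noverlap=frame - hop)
    power = np.abs(Z) ** 2                          # (freq_bins, n_frames)

    # Group FFT bins into logarithmically spaced bands (assumed 300-2000 Hz)
    freqs = np.linspace(0, sr / 2, power.shape[0])
    edges = np.geomspace(300, 2000, n_bands + 1)
    band_energy = np.stack([
        power[(freqs >= lo) & (freqs < hi)].sum(axis=0)
        for lo, hi in zip(edges[:-1], edges[1:])
    ])                                              # (n_bands, n_frames)

    # Each bit is the sign of the energy difference across time and band,
    # which keeps the signature compact and fairly robust to distortions.
    d = np.diff(band_energy, axis=1)                # difference over time
    bits = (np.diff(d, axis=0) > 0).astype(np.uint8)
    return bits.T                                   # (n_frames-1, n_bands-1)
```

Matching search would then compare such bit blocks against a database, e.g. by Hamming distance, which is the second step the abstract mentions.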
Streaming Audio-Visual Speech Recognition with Alignment Regularization
Recognizing a word shortly after it is spoken is an important requirement for
automatic speech recognition (ASR) systems in real-world scenarios. As a
result, a large body of work on streaming audio-only ASR models has been
presented in the literature. However, streaming audio-visual automatic speech
recognition (AV-ASR) has received little attention in earlier works. In this
work, we propose a streaming AV-ASR system based on a hybrid connectionist
temporal classification (CTC)/attention neural network architecture. The audio
and the visual encoder neural networks are both based on the conformer
architecture, which is made streamable using chunk-wise self-attention (CSA)
and causal convolution. Streaming recognition with a decoder neural network is
realized by using the triggered attention technique, which performs
time-synchronous decoding with joint CTC/attention scoring. For frame-level ASR
criteria, such as CTC, a synchronized response from the audio and visual
encoders is critical for a joint AV decision-making process. In this work, we
propose a novel alignment regularization technique that promotes
synchronization of the audio and visual encoders, which in turn results in
better word error rates (WERs) at all SNR levels for streaming and offline
AV-ASR models. The proposed AV-ASR model achieves WERs of 2.0% and 2.6% on the
Lip Reading Sentences 3 (LRS3) dataset in an offline and online setup,
respectively, both of which represent state-of-the-art results when no external
training data are used.
Comment: Submitted to ICASSP202
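For readers unfamiliar with the streaming mechanism named in this abstract, the sketch below shows how a chunk-wise self-attention (CSA) mask restricts each frame to its own chunk plus a limited left context, which is what makes the conformer encoders streamable. The chunk size and left-context budget are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a chunk-wise self-attention (CSA) mask for streaming
# encoders. chunk_size and num_left_chunks are assumed values.
import torch

def chunkwise_attention_mask(n_frames: int, chunk_size: int = 16,
                             num_left_chunks: int = -1) -> torch.Tensor:
    """Boolean mask of shape (n_frames, n_frames); True = attention allowed."""
    frame_idx = torch.arange(n_frames)
    chunk_idx = frame_idx // chunk_size        # chunk each frame belongs to
    # A query frame in chunk q may attend to key frames in chunks [q - L, q].
    q = chunk_idx.unsqueeze(1)                 # (n_frames, 1)
    k = chunk_idx.unsqueeze(0)                 # (1, n_frames)
    mask = k <= q
    if num_left_chunks >= 0:
        mask &= k >= (q - num_left_chunks)
    return mask

# Usage: block disallowed positions before the softmax, e.g.
# attn_scores.masked_fill(~chunkwise_attention_mask(T), float('-inf'))
```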
Automatic Synchronization of Multi-User Photo Galleries
In this paper we address the problem of photo gallery synchronization, where
pictures related to the same event are collected by different users. Existing
solutions are usually based on unrealistic assumptions, such as time consistency
across photo galleries, and often rely heavily on heuristics, therefore limiting
their applicability to real-world scenarios. We
propose a solution that achieves better generalization performance for the
synchronization task compared to the available literature. The method is
characterized by three stages: at first, deep convolutional neural network
features are used to assess the visual similarity among the photos; then, pairs
of similar photos are detected across different galleries and used to construct
a graph; finally, a probabilistic graphical model is used to estimate the
temporal offset of each pair of galleries, by traversing the minimum spanning
tree extracted from this graph. The experimental evaluation is conducted on
four publicly available datasets covering different types of events,
demonstrating the strength of our proposed method. A thorough discussion of the
obtained results is provided for a critical assessment of the synchronization
quality.
Comment: Accepted to IEEE Transactions on Multimedia
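The last stage of the pipeline described above (propagating pairwise temporal offsets over a minimum spanning tree) can be illustrated with a short sketch. The pairwise offsets and confidences below are placeholder inputs, and the confidence-based edge weighting is an assumption, not the paper's exact formulation.

```python
# Sketch: propagate pairwise gallery offsets along a minimum spanning tree.
# Inputs are hypothetical; offset means clock_b ~= clock_a + offset (seconds).
import networkx as nx

def propagate_offsets(pairwise, reference):
    """pairwise: dict {(a, b): (offset_seconds, confidence)}.
    Returns {gallery: offset relative to the reference gallery}."""
    g = nx.Graph()
    for (a, b), (offset, conf) in pairwise.items():
        # Lower weight = more reliable edge, so the MST keeps confident pairs.
        g.add_edge(a, b, offset=offset, weight=1.0 / (conf + 1e-9))
    tree = nx.minimum_spanning_tree(g, weight="weight")

    offsets = {reference: 0.0}
    for parent, child in nx.bfs_edges(tree, source=reference):
        edge = tree[parent][child]
        # Flip the sign if the stored pair direction is (child, parent).
        sign = 1.0 if (parent, child) in pairwise else -1.0
        offsets[child] = offsets[parent] + sign * edge["offset"]
    return offsets

# Example: gallery B is 120 s ahead of A, and C is 30 s behind B.
print(propagate_offsets({("A", "B"): (120.0, 0.9),
                         ("B", "C"): (-30.0, 0.8)}, reference="A"))
# {'A': 0.0, 'B': 120.0, 'C': 90.0}
```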
Visual to Sound: Generating Natural Sound for Videos in the Wild
As two of the five traditional human senses (sight, hearing, taste, smell,
and touch), vision and sound are basic sources through which humans understand
the world. Often correlated during natural events, these two modalities combine
to jointly affect human perception. In this paper, we pose the task of
generating sound given visual input. Such capabilities could help enable
applications in virtual reality (generating sound for virtual scenes
automatically) or provide additional accessibility to images or videos for
people with visual impairments. As a first step in this direction, we apply
learning-based methods to generate raw waveform samples given input video
frames. We evaluate our models on a dataset of videos containing a variety of
sounds (such as ambient sounds and sounds from people/animals). Our experiments
show that the generated sounds are fairly realistic and have good temporal
synchronization with the visual inputs.
Comment: Project page:
http://bvision11.cs.unc.edu/bigpen/yipin/visual2sound_webpage/visual2sound.htm
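As a rough illustration of the general recipe the abstract describes (per-frame visual features conditioning a raw-waveform decoder), here is a toy sketch. The GRU plus transposed-convolution decoder, layer sizes, and upsampling factors are all assumptions for illustration only and are not the authors' model.

```python
# Toy sketch: map per-frame visual features to a raw waveform.
# Architecture and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class Frames2Waveform(nn.Module):
    def __init__(self, frame_feat_dim=512, hidden=256, upsample_steps=4):
        super().__init__()
        # Summarize per-frame visual features over time.
        self.temporal = nn.GRU(frame_feat_dim, hidden, batch_first=True)
        # Repeated transposed convolutions upsample the frame rate (tens of Hz)
        # toward an audio sample rate; each step multiplies the length by 4.
        ups = []
        for _ in range(upsample_steps):
            ups += [nn.ConvTranspose1d(hidden, hidden, kernel_size=8,
                                       stride=4, padding=2), nn.ReLU()]
        self.upsample = nn.Sequential(*ups)
        self.to_sample = nn.Conv1d(hidden, 1, kernel_size=1)

    def forward(self, frame_feats):           # (batch, n_frames, feat_dim)
        h, _ = self.temporal(frame_feats)     # (batch, n_frames, hidden)
        h = h.transpose(1, 2)                 # (batch, hidden, n_frames)
        h = self.upsample(h)                  # (batch, hidden, n_frames * 4^k)
        return torch.tanh(self.to_sample(h)).squeeze(1)  # waveform in [-1, 1]

# Usage with random features standing in for CNN outputs on 75 video frames:
wav = Frames2Waveform()(torch.randn(2, 75, 512))
print(wav.shape)   # torch.Size([2, 19200]) after 256x upsampling
```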