Theory and Evaluation of a Bayesian Music Structure Extractor
We introduce a new model for extracting the end points of music structure segments, such as intro, verse, chorus, break and so forth, from recorded music. Our methods are applied to the problem of grouping audio features into continuous structural segments whose start and end times correspond as closely as possible to a ground truth of independent human structure judgements. Our work extends previous work on automatic summarization and structure extraction by providing a model for segment end-points posed in a Bayesian framework. Methods to infer the model's parameters using Expectation Maximization and Maximum Likelihood are discussed. The model identifies all the segments in a song, not just the chorus or longest segment. We discuss the theory and implementation of the model and evaluate it in an automatic structure segmentation experiment against a ground truth of human judgements. Our results show a segment boundary intersection rate break-even point of approximately 80%.
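The EM-based parameter inference the abstract mentions can be illustrated with a toy example. The sketch below is not the authors' boundary model; it fits a two-component 1-D Gaussian mixture by Expectation Maximization, of the kind one might use to separate low from high frame-novelty scores, and all data and names are invented:

```python
import math

def em_two_gaussians(xs, iters=50):
    # Fit a two-component 1-D Gaussian mixture with EM.
    mu = [min(xs), max(xs)]      # crude but deterministic initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, xs)) / nk + 1e-6
    return pi, mu, var

# Toy "novelty scores": one low cluster, one high cluster.
pi, mu, var = em_two_gaussians([0.1, -0.2, 0.0, 0.2, 4.9, 5.1, 5.0, 4.8])
```

The fitted means land near the two cluster centers (about 0.025 and 4.95); thresholding between them would give a crude boundary/non-boundary decision.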
UPM-UC3M system for music and speech segmentation
This paper describes the UPM-UC3M system for the Albayzín 2010 evaluation on Audio Segmentation. This evaluation task consists of segmenting a broadcast news audio document into clean speech, music, speech with noise in the background and speech with music in the background. The UPM-UC3M system is based on Hidden Markov Models (HMMs), including a 3-state HMM for every acoustic class. The number of states and the number of Gaussians per state have been tuned for this evaluation. The main analysis during system development focused on feature selection. Two different architectures have also been tested: the first is a one-step system, whereas the second is a hierarchical system in which different features are used for segmenting the different audio classes. For both systems, we considered long-term statistics of MFCCs (Mel Frequency Cepstral Coefficients), spectral entropy and CHROMA coefficients. For the best configuration of the one-step system, we obtained a 25.3% average error rate and 18.7% diarization error (using the NIST tool), and a 23.9% average error rate and 17.9% diarization error for the hierarchical one.
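HMM-based segmentation of this kind ultimately reduces to decoding the most likely class sequence over audio frames. A minimal Viterbi sketch, assuming toy frame log-likelihoods for two acoustic classes and hand-set sticky transitions (none of this reflects the actual UPM-UC3M configuration):

```python
import math

def viterbi(loglik, log_trans, log_init):
    """Most likely state path; loglik[t][s] is the log-likelihood of frame t
    under state s, log_trans[r][s] the log transition probability r -> s."""
    T, S = len(loglik), len(loglik[0])
    delta = [log_init[s] + loglik[0][s] for s in range(S)]
    backptr = []
    for t in range(1, T):
        bp, new_delta = [], []
        for s in range(S):
            best = max(range(S), key=lambda r: delta[r] + log_trans[r][s])
            bp.append(best)
            new_delta.append(delta[best] + log_trans[best][s] + loglik[t][s])
        backptr.append(bp)
        delta = new_delta
    path = [max(range(S), key=lambda s: delta[s])]
    for bp in reversed(backptr):
        path.append(bp[path[-1]])
    return path[::-1]

# Toy example: state 0 = speech, state 1 = music, sticky transitions.
stay, switch = math.log(0.9), math.log(0.1)
trans = [[stay, switch], [switch, stay]]
init = [math.log(0.5)] * 2
loglik = [[0.0, -5.0], [0.0, -5.0], [0.0, -5.0], [-5.0, 0.0], [-5.0, 0.0]]
path = viterbi(loglik, trans, init)
```

The sticky transition matrix is what turns per-frame classification into contiguous segments: isolated frames that disagree with their neighbours are smoothed away.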
Adversarial Training Towards Robust Multimedia Recommender System
With the prevalence of multimedia content on the Web, recommender solutions that can effectively leverage the rich signal in multimedia data are urgently needed. Owing to the success of deep neural networks in representation learning, recent advances in multimedia recommendation have largely focused on exploring deep learning methods to improve recommendation accuracy. To date, however, there has been little effort to investigate the robustness of multimedia representations and their impact on the performance of multimedia recommendation.
In this paper, we shed light on the robustness of multimedia recommender systems. Using a state-of-the-art recommendation framework and deep image features, we demonstrate that the overall system is not robust: a small (but purposeful) perturbation of the input image severely decreases the recommendation accuracy. This implies a possible weakness of multimedia recommender systems in predicting user preference and, more importantly, the potential for improvement by enhancing their robustness. To this end, we propose a novel solution named Adversarial Multimedia Recommendation (AMR), which leads to a more robust multimedia recommender model through adversarial learning. The idea is to train the model to defend against an adversary that adds perturbations to the target image with the purpose of decreasing the model's accuracy. We conduct experiments on two representative multimedia recommendation tasks, namely image recommendation and visually-aware product recommendation. Extensive results verify the positive effect of adversarial learning and demonstrate the effectiveness of our AMR method. Source code is available at https://github.com/duxy-me/AMR.
Comment: TKD
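The perturbation idea can be sketched with a plain dot-product preference score. The toy below takes an FGSM-style sign step on the item's image feature; AMR's actual objective is built on a BPR-based recommender, so treat every name and number here as an invented illustration rather than the paper's method:

```python
import numpy as np

def fgsm_perturb_item(user_vec, item_feat, eps=0.1):
    """FGSM-style attack on a dot-product score.

    The gradient of the score user_vec . item_feat with respect to
    item_feat is user_vec itself, so an adversary lowering the score
    steps against the sign of that gradient."""
    return item_feat - eps * np.sign(user_vec)

u = np.array([1.0, -2.0, 0.5])   # hypothetical user embedding
v = np.array([0.3, 0.1, 0.8])    # hypothetical deep image feature of an item
v_adv = fgsm_perturb_item(u, v, eps=0.1)
clean_score = float(u @ v)
adv_score = float(u @ v_adv)
```

Adversarial training in this spirit would regenerate `v_adv` at each step and train the model to keep scoring it correctly, rather than only fitting the clean features.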
I4U System Description for NIST SRE'20 CTS Challenge
This manuscript describes the I4U submission to the 2020 NIST Speaker
Recognition Evaluation (SRE'20) Conversational Telephone Speech (CTS)
Challenge. The I4U submission resulted from active collaboration among researchers across eight research teams - IR (Singapore), UEF (Finland), VALPT (Italy, Spain), NEC (Japan), THUEE (China), LIA (France), NUS (Singapore), INRIA (France) and TJU (China). The submission was based on the fusion of top-performing sub-systems and sub-fusion systems contributed by the individual teams. Effort was devoted to the use of common development and validation sets, a shared submission schedule and milestones, and minimizing inconsistencies in trial lists and score file formats across sites.
Comment: SRE 2021, NIST Speaker Recognition Evaluation Workshop, CTS Speaker Recognition Challenge, 14-12 December 202
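Score-level fusion of sub-systems, as described above, is in its simplest form a weighted sum of per-trial scores. A minimal sketch with hand-picked weights (in practice fusion weights and bias are trained on a development set, e.g. by logistic regression, and the inputs are calibrated log-likelihood ratios):

```python
def fuse_scores(subsystem_scores, weights, bias=0.0):
    """Linear score-level fusion.

    subsystem_scores: one list of per-trial scores per sub-system.
    Returns one fused score per trial."""
    return [bias + sum(w * s for w, s in zip(weights, trial))
            for trial in zip(*subsystem_scores)]

# Two sub-systems, two trials; weights are illustrative only.
fused = fuse_scores([[1.0, -1.0], [2.0, 0.0]], weights=[0.5, 0.25])
```

A shared trial list and a fixed score-file format across sites matter precisely because `zip(*subsystem_scores)` silently assumes every sub-system scored the same trials in the same order.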
A COMPUTATION METHOD/FRAMEWORK FOR HIGH LEVEL VIDEO CONTENT ANALYSIS AND SEGMENTATION USING AFFECTIVE LEVEL INFORMATION
Video segmentation facilitates efficient video indexing and navigation in large digital video archives. It is an important process in a content-based video indexing and retrieval (CBVIR) system. Many automated solutions performed segmentation by utilizing information about the "facts" of the video. These "facts" come in the form of labels that describe the objects captured by the camera. This type of solution was able to achieve good, consistent results for some video genres, such as news programs and informational presentations. The content format of these videos is generally quite standard, and automated solutions were designed to follow these format rules. For example, in [1] the presence of news anchor persons was used as a cue to determine the start and end of a meaningful news segment.
The same cannot be said for video genres such as movies and feature films, because the makers of these videos use different filming techniques to design their videos to elicit certain affective responses from their target audience. Humans usually perform manual video segmentation by trying to relate changes in time and locale to discontinuities in meaning [2]. As a result, viewers often disagree about the boundary locations of a meaningful video segment because their affective responses differ.
This thesis presents an entirely new view of the problem of high-level video segmentation. We developed a novel probabilistic method for affective-level video content analysis and segmentation. Our method has two stages. In the first stage, affective content labels are assigned to video shots by means of a dynamic Bayesian network (DBN). A novel hierarchical-coupled dynamic Bayesian network (HCDBN) topology is proposed for this stage. The topology is based on the pleasure-arousal-dominance (P-A-D) model of affect representation [3]. In principle, this model can represent a large number of emotions. In the second stage, the visual, audio and affective information of the video is used to compute a statistical feature vector representing the content of each shot. Affective-level video segmentation is achieved by applying spectral clustering to the feature vectors.
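The spectral clustering step of the second stage can be sketched for the two-cluster case: build a Gaussian affinity between shot feature vectors, form the graph Laplacian, and split on the sign of the Fiedler vector. This is a generic illustration, not the thesis implementation; the `shots` data and `sigma` value are invented:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering of row vectors in X.

    Gaussian affinity -> unnormalized graph Laplacian -> sign split on the
    Fiedler vector (eigenvector of the second-smallest eigenvalue)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))       # pairwise affinity
    L = np.diag(W.sum(axis=1)) - W             # L = D - W
    _, vecs = np.linalg.eigh(L)                # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)        # cluster label per row

# Four hypothetical shot feature vectors forming two groups.
shots = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 0.0], [5.0, 0.0]])
labels = spectral_bipartition(shots, sigma=1.5)
```

For k > 2 segments one would instead embed each shot with the first k eigenvectors and run k-means in that space, which is the usual multi-way extension.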
We evaluated the first stage of our proposal by comparing its emotion detection ability with the existing works related to the field of affective video content analysis. To evaluate the second stage, we used the time adaptive clustering (TAC) algorithm as our performance benchmark. The TAC algorithm was the best high-level video segmentation method [2], but it is computationally very intensive. To accelerate it, we developed a modified TAC (modTAC) algorithm designed to map easily onto a field-programmable gate array (FPGA) device. Both the TAC and modTAC algorithms were used as performance benchmarks for our proposed method.
Since affective video content is a perceptual concept, segmentation performance and human agreement rates were used as our evaluation criteria. To obtain ground-truth data and viewer agreement rates, a pilot panel study based on the work of Gross et al. [4] was conducted. Experimental results show the feasibility of our proposed method: in the first stage, an average improvement of as high as 38% was achieved over previous works, and in the second stage, an improvement of as high as 37% was achieved over the TAC algorithm.
An Experimental Review of Speaker Diarization methods with application to Two-Speaker Conversational Telephone Speech recordings
We performed an experimental review of current diarization systems for the
conversational telephone speech (CTS) domain. In detail, we considered a total
of eight different algorithms belonging to clustering-based, end-to-end neural
diarization (EEND), and speech separation guided diarization (SSGD) paradigms.
We studied the inference-time computational requirements and diarization
accuracy on four CTS datasets with different characteristics and languages. We
found that, among all methods considered, EEND-vector clustering (EEND-VC)
offers the best trade-off in terms of computing requirements and performance.
More generally, EEND models have been found to be lighter and faster at inference than clustering-based methods. However, they also require a large amount of diarization-oriented annotated data. In particular, EEND-VC
performance in our experiments degraded when the dataset size was reduced,
whereas self-attentive EEND (SA-EEND) was less affected. We also found that
SA-EEND gives less consistent results among all the datasets compared to
EEND-VC, with its performance degrading on long conversations with high speech
sparsity. Clustering-based diarization systems, and in particular VBx, instead
have more consistent performance compared to SA-EEND but are outperformed by
EEND-VC. The gap with respect to the latter is reduced when overlap-aware
clustering methods are considered. SSGD is the most computationally demanding
method, but it could be convenient if speech recognition has to be performed.
Its performance is close to SA-EEND but degrades significantly when the
training and inference data characteristics are less matched.
Comment: 52 pages, 10 figures
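Diarization accuracy comparisons like the ones above are reported as error rates over time. A much-simplified frame-level sketch is below; real DER as computed by NIST's md-eval additionally finds an optimal reference-to-hypothesis speaker mapping, applies forgiveness collars around boundaries, and handles overlapped speech:

```python
def frame_error_rate(ref, hyp):
    """Frame-level diarization error, simplified.

    ref, hyp: per-frame speaker labels (None = non-speech).
    Assumes hypothesis speakers are already mapped to reference speakers;
    scores only frames where the reference contains speech."""
    scored = [(r, h) for r, h in zip(ref, hyp) if r is not None]
    wrong = sum(1 for r, h in scored if r != h)
    return wrong / len(scored)

# Hypothetical 5-frame example: one confused frame out of four speech frames.
ref = ["A", "A", "B", "B", None]
hyp = ["A", "B", "B", "B", "A"]
err = frame_error_rate(ref, hyp)
```

On this toy input one of the four scored frames is wrong, i.e. a 25% frame error; the false-alarm frame (hypothesis speech where the reference is silent) would count against a full DER but is ignored in this simplified version.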