Linear Maximum Margin Classifier for Learning from Uncertain Data
In this paper, we propose a maximum margin classifier that deals with
uncertainty in data input. More specifically, we reformulate the SVM framework
such that each training example can be modeled by a multi-dimensional Gaussian
distribution described by its mean vector and its covariance matrix -- the
latter modeling the uncertainty. We address the classification problem and
define a cost function that is the expected value of the classical SVM cost
when data samples are drawn from the multi-dimensional Gaussian distributions
that form the set of the training examples. Our formulation approximates the
classical SVM formulation when the training examples are isotropic Gaussians
with variance tending to zero. We arrive at a convex optimization problem,
which we solve efficiently in the primal form using a stochastic gradient
descent approach. The resulting classifier, which we name SVM with Gaussian
Sample Uncertainty (SVM-GSU), is tested on synthetic data and five publicly
available and popular datasets, namely the MNIST, WDBC, DEAP, TV News Channel
Commercial Detection, and TRECVID MED datasets. Experimental results verify the
effectiveness of the proposed method.
Comment: IEEE Transactions on Pattern Analysis and Machine Intelligence. (c) 2017 IEEE. DOI: 10.1109/TPAMI.2017.2772235. Author's accepted version. The final publication is available at http://ieeexplore.ieee.org/document/8103808
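To make the formulation more tangible, here is a minimal numerical sketch, assuming a linear model and per-example Gaussians N(mu, Sigma): it evaluates the expected hinge loss in closed form and takes a finite-difference SGD step. The function names and the gradient approximation are illustrative choices, not the paper's exact SVM-GSU solver.

```python
import numpy as np
from scipy.stats import norm

def expected_hinge(w, b, mu, Sigma, y):
    """E[max(0, 1 - y*(w.x + b))] for x ~ N(mu, Sigma), in closed form."""
    m = y * (w @ mu + b)                 # mean of the signed margin
    s = np.sqrt(w @ Sigma @ w) + 1e-12   # std of the margin along w
    u = 1.0 - m                          # mean of the hinge argument 1 - margin
    return u * norm.cdf(u / s) + s * norm.pdf(u / s)

def sgd_step(w, b, mu, Sigma, y, lam=1e-3, lr=1e-2, eps=1e-6):
    """One SGD step on the regularized expected loss (numerical gradient, for brevity)."""
    def loss(w_, b_):
        return 0.5 * lam * (w_ @ w_) + expected_hinge(w_, b_, mu, Sigma, y)
    grad_w = np.array([(loss(w + eps * e, b) - loss(w, b)) / eps for e in np.eye(len(w))])
    grad_b = (loss(w, b + eps) - loss(w, b)) / eps
    return w - lr * grad_w, b - lr * grad_b
```

As the covariance tends to zero, expected_hinge approaches the classical hinge max(0, 1 - y(w.mu + b)), consistent with the limiting behavior stated in the abstract.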
Automatic Synchronization of Multi-User Photo Galleries
In this paper we address the issue of photo gallery synchronization, where
pictures related to the same event are collected by different users. Existing
solutions to address the problem are usually based on unrealistic assumptions,
like time consistency across photo galleries, and often heavily rely on
heuristics, therefore limiting their applicability to real-world scenarios. We
propose a solution that achieves better generalization performance for the
synchronization task compared to the available literature. The method is
characterized by three stages: at first, deep convolutional neural network
features are used to assess the visual similarity among the photos; then, pairs
of similar photos are detected across different galleries and used to construct
a graph; eventually, a probabilistic graphical model is used to estimate the
temporal offset of each pair of galleries, by traversing the minimum spanning
tree extracted from this graph. The experimental evaluation is conducted on
four publicly available datasets covering different types of events,
demonstrating the strength of our proposed method. A thorough discussion of the
obtained results is provided for a critical assessment of the synchronization quality.
Comment: Accepted to IEEE Transactions on Multimedia
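As a rough illustration of the last stage, assuming pairwise temporal offsets have already been estimated from visually matched photo pairs, the sketch below builds a gallery graph, keeps its minimum spanning tree, and propagates offsets from a chosen reference gallery; the data layout and confidence weighting are assumptions, not the paper's exact probabilistic model.

```python
import networkx as nx

def propagate_offsets(pairwise, reference):
    """pairwise: {(g1, g2): (offset, confidence)}, with offset ~ clock(g2) - clock(g1);
    reference must be one of the galleries."""
    G = nx.Graph()
    for (g1, g2), (off, conf) in pairwise.items():
        # lower weight = more reliable edge, so the MST keeps the most confident pairs
        G.add_edge(g1, g2, src=g1, dst=g2, offset=off, weight=1.0 / max(conf, 1e-9))
    T = nx.minimum_spanning_tree(G, weight="weight")
    offsets = {reference: 0.0}
    for parent, child in nx.bfs_edges(T, reference):
        e = T[parent][child]
        step = e["offset"] if e["src"] == parent else -e["offset"]
        offsets[child] = offsets[parent] + step
    return offsets

print(propagate_offsets({("A", "B"): (120.0, 0.9), ("B", "C"): (-45.0, 0.7)}, "A"))
```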
A deep generic to specific recognition model for group membership analysis using non-verbal cues
Automatic understanding and analysis of groups has attracted increasing attention
in the vision and multimedia communities in recent years. However,
little attention has been paid to the automatic analysis of non-verbal behaviors
and how these can be utilized for the analysis of group membership, i.e.,
recognizing which group each individual is part of. This paper presents a
novel Support Vector Machine (SVM) based Deep Specific Recognition Model
(DeepSRM) that is learned based on a generic recognition model. The generic
recognition model refers to the model trained with data across different conditions,
i.e., when people are watching movies of different types. Although the
generic recognition model can provide a baseline for the recognition model
trained for each specific condition, the different behaviors people exhibit in
different conditions limit the recognition performance of the generic model.
Therefore, the specific recognition model is proposed for each condition separately
and built on top of the generic recognition model. We conduct a set
of experiments using a database collected to study group analysis while each
group (i.e., four participants together) was watching a number of long movie
segments. The proposed deep specific recognition model (44%) outperforms the generic recognition model (26%). The recognition of group membership also indicates that the non-verbal behaviors of individuals within a group share commonalities.
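The generic-to-specific idea can be sketched, under the assumption of a simple stacking scheme, as a generic classifier trained on data from all conditions whose decision scores feed a condition-specific classifier; this is an illustrative simplification, not the actual DeepSRM architecture.

```python
# Illustrative generic -> specific stacking (assumed scheme, not the paper's DeepSRM).
import numpy as np
from sklearn.svm import LinearSVC

def train_generic(X_all, y_all):
    """Generic model: trained on non-verbal features pooled across all conditions."""
    return LinearSVC(C=1.0, max_iter=10000).fit(X_all, y_all)

def train_specific(generic, X_cond, y_cond):
    """Specific model for one condition, built on top of the generic model's scores."""
    scores = generic.decision_function(X_cond).reshape(len(X_cond), -1)
    return LinearSVC(C=1.0, max_iter=10000).fit(np.hstack([X_cond, scores]), y_cond)

def predict_group(generic, specific, X):
    scores = generic.decision_function(X).reshape(len(X), -1)
    return specific.predict(np.hstack([X, scores]))
```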
Video Summarization Using Deep Neural Networks: A Survey
Video summarization technologies aim to create a concise and complete
synopsis by selecting the most informative parts of the video content. Several
approaches have been developed over the last couple of decades and the current
state of the art is represented by methods that rely on modern deep neural
network architectures. This work focuses on the recent advances in the area and
provides a comprehensive survey of the existing deep-learning-based methods for
generic video summarization. After presenting the motivation behind the
development of technologies for video summarization, we formulate the video
summarization task and discuss the main characteristics of a typical
deep-learning-based analysis pipeline. Then, we suggest a taxonomy of the
existing algorithms and provide a systematic review of the relevant literature
that shows the evolution of the deep-learning-based video summarization
technologies and leads to suggestions for future developments. We then report
on protocols for the objective evaluation of video summarization algorithms and
we compare the performance of several deep-learning-based approaches. Based on
the outcomes of these comparisons, as well as some documented considerations
about the suitability of evaluation protocols, we indicate potential future
research directions.
Comment: Journal paper; under review
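For readers unfamiliar with the evaluation protocols mentioned above, the snippet below shows the commonly used frame-level F-score computation against multiple user summaries; the averaging versus maximum conventions are the ones typically reported for TVSum and SumMe respectively, and the function names are ours.

```python
import numpy as np

def fscore(machine, user):
    """machine, user: binary 0/1 arrays marking the frames included in each summary."""
    overlap = int(np.sum(machine & user))
    precision = overlap / max(int(machine.sum()), 1)
    recall = overlap / max(int(user.sum()), 1)
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

def evaluate(machine, user_summaries, mode="avg"):
    """Typical convention: 'avg' over users for TVSum, 'max' for SumMe."""
    scores = [fscore(machine, u) for u in user_summaries]
    return max(scores) if mode == "max" else float(np.mean(scores))
```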
AC-SUM-GAN: Connecting Actor-Critic and Generative Adversarial Networks for Unsupervised Video Summarization
This paper presents a new method for unsupervised video summarization. The proposed architecture embeds an Actor-Critic model into a Generative Adversarial Network and formulates the selection of important video fragments (that will be used to form the summary) as a sequence generation task. The Actor and the Critic take part in a game that incrementally leads to the selection of the video key-fragments, and their choices at each step of the game result in a set of rewards from the Discriminator. The designed training workflow allows the Actor and Critic to discover a space of actions and automatically learn a policy for key-fragment selection. Moreover, the introduced criterion for choosing the best model after the training ends enables the automatic selection of proper values for parameters of the training process that are not learned from the data (such as the regularization factor σ). Experimental evaluation on two benchmark datasets (SumMe and TVSum) demonstrates that the proposed AC-SUM-GAN model performs consistently well and achieves state-of-the-art results among unsupervised methods, while also being competitive with respect to supervised methods.
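A highly simplified sketch of the described selection game is given below: the Actor proposes which fragment to add next, the Discriminator's score on the current selection serves as the reward, and the gap to the Critic's value estimate is the advantage that would drive the Actor's update. All three components are stubs here; the real AC-SUM-GAN uses learned networks and a reconstruction-based Discriminator.

```python
import numpy as np

rng = np.random.default_rng(0)

def actor_policy(selected, n_fragments):
    logits = rng.normal(size=n_fragments)           # stub for the Actor network
    logits[list(selected)] = -np.inf                # a fragment can be picked only once
    p = np.exp(logits - logits.max()); p /= p.sum()
    return int(rng.choice(n_fragments, p=p))

def critic_value(selected):
    return 0.1 * len(selected)                      # stub for the Critic network

def discriminator_reward(selected):
    return min(1.0, 0.2 * len(selected))            # stub for the Discriminator's score

def play_episode(n_fragments=20, budget=5):
    selected, transitions = set(), []
    for _ in range(budget):
        action = actor_policy(selected, n_fragments)
        selected = selected | {action}
        reward = discriminator_reward(selected)
        advantage = reward - critic_value(selected)  # would drive the policy update
        transitions.append((action, reward, advantage))
    return selected, transitions

print(play_episode())
```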
Video semantic content analysis framework based on ontology combined MPEG-7
The rapid increase in the available amount of video data is creating a growing demand for efficient methods for understanding and managing it at the semantic level. The new multimedia standard, MPEG-7, provides rich functionalities to enable the generation of audiovisual descriptions, but it is expressed solely in XML Schema, which provides little support for expressing semantic knowledge. In this paper, a video semantic content analysis framework based on an ontology combined with MPEG-7 is presented. A domain ontology is used to define high-level semantic concepts and their relations in the context of the examined domain. MPEG-7 metadata terms of audiovisual descriptions and video content analysis algorithms are expressed in this ontology to enrich video semantic analysis. OWL is used for the ontology description. Rules in Description Logic are defined to describe how low-level features and algorithms for video analysis should be applied, depending on the content to be analyzed. Temporal Description Logic is used to describe the semantic events, and a reasoning algorithm is proposed for event detection. The proposed framework is demonstrated in the sports video domain and shows promising results.
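As a toy illustration of rule-driven analysis, assuming a sports-video setting, the sketch below maps hypothetical domain concepts to the low-level analysis steps a rule would trigger; the actual framework expresses such rules in Description Logic over an OWL ontology rather than in Python.

```python
# Hypothetical concept-to-algorithm rules (illustrative only).
ANALYSIS_RULES = {
    "goal_event":     ["shot_boundary_detection", "crowd_cheer_audio_detector", "goal_area_detector"],
    "replay":         ["shot_boundary_detection", "logo_transition_detector"],
    "player_closeup": ["face_detection", "jersey_color_histogram"],
}

def plan_analysis(concepts):
    """Return the ordered, de-duplicated list of analysis steps required by the rules."""
    plan = []
    for concept in concepts:
        for step in ANALYSIS_RULES.get(concept, []):
            if step not in plan:
                plan.append(step)
    return plan

print(plan_analysis(["replay", "goal_event"]))
```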
VideoAnalysis4ALL: An On-line Tool for the Automatic Fragmentation and Concept-based Annotation, and the Interactive Exploration of Videos.
This paper presents the VideoAnalysis4ALL tool that supports the automatic fragmentation and concept-based annotation of videos, and the exploration of the annotated video fragments through an interactive user interface. The developed web application decomposes the video into two different granularities, namely shots and scenes, and annotates each fragment by evaluating the existence of a number (several hundreds) of high-level visual concepts in the keyframes extracted from these fragments. Through the analysis the tool enables the identification and labeling of semantically coherent video fragments, while its user interfaces allow the discovery of these fragments with the help of human-interpretable concepts. The integrated state-of-the-art video analysis technologies perform very well and, by exploiting the processing capabilities of multi-thread / multi-core architectures, reduce the time required for analysis to approximately one third of the video's duration, thus making the analysis three times faster than real-time processing.
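A possible (assumed) data layout for the annotated fragments, together with a concept-based lookup, could look like the following; the tool's actual interface and concept pool are not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Fragment:
    start: float                                   # seconds
    end: float
    level: str                                     # "shot" or "scene"
    concepts: dict = field(default_factory=dict)   # concept name -> detection score

def find_fragments(fragments, concept, threshold=0.5):
    """Return the fragments whose score for `concept` exceeds the threshold."""
    return [f for f in fragments if f.concepts.get(concept, 0.0) >= threshold]

shots = [
    Fragment(0.0, 4.2, "shot", {"outdoor": 0.91, "car": 0.12}),
    Fragment(4.2, 9.8, "shot", {"outdoor": 0.33, "crowd": 0.74}),
]
print([(f.start, f.end) for f in find_fragments(shots, "outdoor")])
```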
A Stepwise, Label-based Approach for Improving the Adversarial Training in Unsupervised Video Summarization
In this paper we present our work on improving the efficiency of adversarial training for unsupervised video summarization. Our starting point is the SUM-GAN model, which creates a representative summary based on the intuition that such a summary should make it possible to reconstruct a video that is indistinguishable from the original one. We build on a publicly available implementation of a variation of this model, that includes a linear compression layer to reduce the number of learned parameters and applies an incremental approach for training the different components of the architecture. After assessing the impact of these changes to the model's performance, we propose a stepwise, label-based learning process to improve the training efficiency of the adversarial part of the model. Before evaluating our model's efficiency, we perform a thorough study with respect to the used evaluation protocols and we examine the possible performance on two benchmarking datasets, namely SumMe and TVSum. Experimental evaluations and comparisons with the state of the art highlight the competitiveness of the proposed method. An ablation study indicates the benefit of each applied change on the model's performance, and points out the advantageous role of the introduced stepwise, label-based training strategy on the learning efficiency of the adversarial part of the architecture.
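The stepwise, label-based adversarial training can be pictured with a generic discriminator/generator update pair that uses explicit "original" versus "reconstructed" labels; this is a minimal sketch under that assumption, not the exact SUM-GAN variant trained in the paper.

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

def discriminator_step(D, opt_D, feats_original, feats_reconstructed):
    """Update D with explicit labels: 1 for original features, 0 for reconstructions."""
    opt_D.zero_grad()
    loss = bce(D(feats_original), torch.ones(len(feats_original), 1)) + \
           bce(D(feats_reconstructed.detach()), torch.zeros(len(feats_reconstructed), 1))
    loss.backward()
    opt_D.step()
    return loss.item()

def generator_step(D, opt_G, feats_reconstructed):
    """Update the summarizer/generator so that its reconstruction is labeled as original."""
    opt_G.zero_grad()
    loss = bce(D(feats_reconstructed), torch.ones(len(feats_reconstructed), 1))
    loss.backward()
    opt_G.step()
    return loss.item()
```

Here D is assumed to output a probability per sample (shape N x 1), and the reconstructed features are assumed to carry gradients back to the generator.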
Texture Analysis and Radial Basis Function Approximation for IVUS Image Segmentation
Intravascular ultrasound (IVUS) has become in recent years an important tool in both clinical and research applications. The detection of lumen and media-adventitia borders in IVUS images represents a first necessary step in the utilization of the IVUS data for the 3D reconstruction of human coronary arteries and the reliable quantitative assessment of atherosclerotic lesions. To serve this goal, a fully automated technique for the detection of lumen and media-adventitia boundaries has been developed. This comprises two different steps for contour initialization, one for each corresponding contour of interest, based on the results of texture analysis, and a procedure for approximating the initialization results with smooth continuous curves. A multilevel Discrete Wavelet Frames decomposition is used for texture analysis, whereas Radial Basis Function approximation is employed for producing smooth contours. The proposed method shows promising results compared to a previous approach for texture-based IVUS image analysis.
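For the contour-smoothing step, a minimal sketch (assuming the rough contour is expressed as radius samples around the catheter center) is shown below; the choice of RBF kernel and smoothing factor is illustrative, not taken from the paper.

```python
import numpy as np
from scipy.interpolate import Rbf

def smooth_contour(angles, radii, n_out=360):
    """angles (rad) and radii: rough, texture-based contour points; returns dense (x, y) points."""
    rbf = Rbf(angles, radii, function="multiquadric", smooth=1.0)
    theta = np.linspace(0, 2 * np.pi, n_out, endpoint=False)
    r = rbf(theta)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])
```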