A similarity measure between videos using alignment, graphical and speech features
A novel video similarity measure is proposed that uses visual features, alignment distances, and speech transcripts. First, video files are represented as a sequence of segments, each of which contains a colour histogram, a starting time, and a set of phonemes. Next, textual, alignment, and visual features are extracted from these segments. In the following step, bipartite matching and statistical features are applied to find correspondences between segments. Finally, a similarity score is calculated between the videos. Experiments have been carried out and promising results have been obtained.
Ministerio de Ciencia e Innovación TIN2009–14378-C02–0
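The correspondence step lends itself to a short illustration. Below is a minimal sketch (not the authors' code) of bipartite matching between segment descriptors, assuming normalised colour histograms, start times rescaled to [0, 1], and phoneme sets; the field names and cost weights are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def segment_cost(a, b, w_vis=0.5, w_time=0.2, w_text=0.3):
    """Combined distance between two segments (lower = more similar)."""
    vis = 0.5 * np.abs(a["hist"] - b["hist"]).sum()      # L1 histogram distance, in [0, 1]
    time = abs(a["start"] - b["start"])                  # alignment offset, assumed rescaled to [0, 1]
    union = len(a["phonemes"] | b["phonemes"]) or 1
    text = 1.0 - len(a["phonemes"] & b["phonemes"]) / union  # Jaccard distance over phoneme sets
    return w_vis * vis + w_time * time + w_text * text

def video_similarity(segs_a, segs_b):
    """Optimal one-to-one segment matching; similarity from the mean matched cost."""
    cost = np.array([[segment_cost(a, b) for b in segs_b] for a in segs_a])
    rows, cols = linear_sum_assignment(cost)             # Hungarian bipartite matching
    return 1.0 - cost[rows, cols].mean()
```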
TempCLR: Temporal Alignment Representation with Contrastive Learning
Video representation learning has been successful in video-text pre-training
for zero-shot transfer, where each sentence is trained to be close to the
paired video clips in a common feature space. For long videos, given a
paragraph of description where the sentences describe different segments of the
video, by matching all sentence-clip pairs, the paragraph and the full video
are aligned implicitly. However, such a unit-level similarity measure may ignore
the global temporal context over a long time span, which inevitably limits the
generalization ability. In this paper, we propose a contrastive learning
framework TempCLR to compare the full video and the paragraph explicitly. As
the video/paragraph is formulated as a sequence of clips/sentences, under the
constraint of their temporal order, we use dynamic time warping to compute the
minimum cumulative cost over sentence-clip pairs as the sequence-level
distance. To explore the temporal dynamics, we break the consistency of
temporal order by shuffling the video clips or sentences according to the
temporal granularity. In this way, we obtain representations for
clips/sentences that capture the temporal information and thus facilitate
the sequence alignment. In addition to pre-training on the video and paragraph,
our approach also generalizes to matching between different video
instances. We evaluate our approach on video retrieval, action step
localization, and few-shot action recognition, and achieve consistent
performance gain over all three tasks. Detailed ablation studies are provided
to justify the design of our approach.
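The sequence-level distance described above is standard dynamic time warping over sentence-clip costs. A minimal sketch follows, assuming L2-normalised clip and sentence embeddings so that each pairwise cost is a cosine distance; it illustrates the technique and is not the TempCLR implementation.

```python
import numpy as np

def dtw_distance(clips, sents):
    """clips: (N, d), sents: (M, d), rows L2-normalised.
    Returns the minimum cumulative cost of a temporally ordered alignment."""
    cost = 1.0 - clips @ sents.T                   # pairwise cosine distance, shape (N, M)
    acc = np.full((len(clips) + 1, len(sents) + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, len(clips) + 1):
        for j in range(1, len(sents) + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # advance clip only
                                                 acc[i, j - 1],      # advance sentence only
                                                 acc[i - 1, j - 1])  # advance both
    return acc[-1, -1]
```

In a contrastive setup, this scalar would score a (video, paragraph) pair against negatives built, for instance, by shuffling clips or sentences as the abstract describes.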
Video matching using DC-image and local features
This paper presents a suggested framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without decompression. The relevant arguments and supporting evidence for developing video similarity techniques that work directly on compressed videos, without decompression, and especially using small-size images, are discussed. Two experiments are carried out to support the above. The first compares the DC-image and the I-frame in terms of matching performance and the corresponding computational complexity. The second experiment compares local features against global features in video matching, especially in the compressed domain and with small-size images. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces matching precision at least similar to (if not better than) that of the full I-frame. Also, using SIFT as a local feature outperforms most of the standard global features in precision. On the other hand, its computational complexity is relatively higher, but it is still within the real-time margin, and various optimisations could be made to improve it further.
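As an illustration of the local-feature route, the sketch below extracts SIFT descriptors from two small DC-images with OpenCV and counts the matches surviving Lowe's ratio test. It is a hedged sketch of the technique, not the paper's implementation; the image size in the docstring assumes CIF-resolution MPEG input.

```python
import cv2

def dc_image_match_count(dc_img_a, dc_img_b, ratio=0.75):
    """dc_img_a/b: greyscale uint8 DC-images (e.g. 44x36 for 352x288 MPEG frames)."""
    sift = cv2.SIFT_create()
    _, desc_a = sift.detectAndCompute(dc_img_a, None)
    _, desc_b = sift.detectAndCompute(dc_img_b, None)
    if desc_a is None or desc_b is None:
        return 0                                   # too few keypoints on a tiny image
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good)                               # more surviving matches => more similar
```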
DC-image for real time compressed video matching
This chapter presents a suggested framework for video matching based on local features extracted from the DC-image of MPEG compressed videos, without full decompression. In addition, the relevant arguments and supporting evidence are discussed. Several local feature detectors are examined to select the best for matching using the DC-image. Two experiments are carried out to support the above. The first compares the DC-image and the I-frame in terms of matching performance and computational complexity. The second experiment compares local features against global features for compressed video matching on the DC-image. The results confirm that the use of the DC-image, despite its highly reduced size, is promising, as it produces higher matching precision than the full I-frame. Also, SIFT, as a local feature, outperforms most of the standard global features. On the other hand, its computational complexity is relatively higher, but it is still within the real-time margin, which leaves room for further optimisations to improve it.
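For context, the DC-image itself can be recovered without inverse-transforming the full frame: for an orthonormal 8x8 DCT, the DC coefficient equals eight times the block's mean intensity, so the DC terms alone form a 1/8-scale thumbnail. A minimal sketch follows; the `dct_dc_coeffs` input (per-block DC terms from an entropy-decoded I-frame) is a hypothetical placeholder.

```python
import numpy as np

def dc_image(dct_dc_coeffs):
    """dct_dc_coeffs: (H/8, W/8) array of DC coefficients from an I-frame.
    Returns the 1/8-scale DC-image as 8-bit intensities."""
    # Orthonormal 2-D DCT: DC = 8 * mean(block), so dividing by 8
    # recovers each block's average intensity.
    return np.clip(dct_dc_coeffs / 8.0, 0, 255).astype(np.uint8)
```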
A Neural Multi-sequence Alignment TeCHnique (NeuMATCH)
The alignment of heterogeneous sequential data (video to text) is an
important and challenging problem. Standard techniques for this task, including
Dynamic Time Warping (DTW) and Conditional Random Fields (CRFs), suffer from
inherent drawbacks. Mainly, the Markov assumption implies that, given the
immediate past, future alignment decisions are independent of further history.
The separation between similarity computation and alignment decision also
prevents end-to-end training. In this paper, we propose an end-to-end neural
architecture where alignment actions are implemented as moving data between
stacks of Long Short-term Memory (LSTM) blocks. This flexible architecture
supports a large variety of alignment tasks, including one-to-one, one-to-many,
skipping unmatched elements, and (with extensions) non-monotonic alignment.
Extensive experiments on semi-synthetic and real datasets show that our
algorithm outperforms state-of-the-art baselines.
Comment: Accepted at CVPR 2018 (Spotlight). The arXiv file includes the paper
and the supplemental material.
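The stack-based decoding the abstract describes can be sketched at the level of action semantics: a learned policy looks at the tops of the video and text stacks and emits an action. In the sketch below, `predict_action` stands in for the LSTM-based classifier and is a hypothetical placeholder; only one-to-one matching and skipping are shown.

```python
def align(clips, sents, predict_action):
    """Greedy action decoding; returns matched (clip_idx, sent_idx) pairs."""
    v = list(range(len(clips)))                 # video stack, top element first
    t = list(range(len(sents)))                 # text stack, top element first
    pairs = []
    while v and t:
        action = predict_action(clips[v[0]], sents[t[0]])  # e.g. LSTMs over stack contents
        if action == "MATCH":                   # align the two tops, consume both
            pairs.append((v.pop(0), t.pop(0)))
        elif action == "POP_CLIP":              # skip an unmatched video clip
            v.pop(0)
        else:                                   # "POP_SENT": skip an unmatched sentence
            t.pop(0)
    return pairs
```

One-to-many alignment would add match-and-retain variants of MATCH that consume only one side of the pair.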