
    Video Shot Boundary Detection Using Generalized Eigenvalue Decomposition and Gaussian Transition Detection

    Shot boundary detection is the first step of video analysis, summarization, and retrieval. In this paper, we propose a novel shot boundary detection algorithm using Generalized Eigenvalue Decomposition (GED) and modeling of gradual transitions by Gaussian functions. In particular, we focus on the challenges of detecting gradual shots and extracting appropriate spatio-temporal features, which affect the algorithm's ability to detect shot boundaries efficiently. We derive a theorem that establishes new properties of GED which can be used in video processing algorithms, and we use this theorem to define a new distance metric in eigenspace for comparing video frames. The distance function exhibits abrupt changes at hard cuts and semi-Gaussian behavior during gradual transitions; the algorithm detects transitions by analyzing this function. Finally, we report experimental results on the large-scale test sets provided by TRECVID 2006, which include evaluations for both hard-cut and gradual shot boundary detection.
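An eigenvalue-based frame distance of the kind described above can be sketched as follows. This is a hypothetical reconstruction, not the paper's method: the exact features and metric are not given in the abstract, and here the frames are simply compared through the generalized eigenvalues of their regularized covariance matrices (identical frames give eigenvalues of 1 and hence distance 0).

```python
import numpy as np

def ged_distance(f1, f2, eps=1e-6):
    """Hypothetical GED-style distance between two frames, each given as an
    (n_samples, n_features) array of spatio-temporal feature vectors."""
    d = f1.shape[1]
    A = np.cov(f1, rowvar=False) + eps * np.eye(d)  # regularize for stability
    B = np.cov(f2, rowvar=False) + eps * np.eye(d)
    # Generalized eigenvalues of (A, B): solve B^{-1} A x = lambda x.
    lam = np.abs(np.linalg.eigvals(np.linalg.solve(B, A)))
    # Identical frames give lambda = 1 everywhere, so the distance is 0;
    # dissimilar covariance structure pushes |log lambda| away from 0.
    return float(np.sum(np.abs(np.log(lam))))
```

A hard cut would then show as an abrupt jump in this distance between consecutive frames, while a dissolve would produce a gradual, roughly Gaussian-shaped bump.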

    Correlation Coefficients and Adaptive Threshold-Based Dissolve Detection in High-Quality Videos

    Rapid, continual enhancements in multimedia tools and features have made entertainment compelling, and high-quality visual effects draw viewers to today's videos. Fast-changing scenes, light effects, and nearly indistinguishable blending of diverse frames create challenges for researchers in detecting gradual transitions. The proposed work detects gradual transitions in videos using correlation coefficients computed from color histograms together with an adaptive thresholding mechanism. Other transitions, including fade-outs, fade-ins, and cuts, are eliminated first, and dissolves are then detected from the acquired video frames. The characteristics of the normalized correlation coefficient are studied carefully, and dissolves are extracted simply, with low computational and time complexity. Confusion between fade-in/out and dissolves is resolved using the adaptive threshold, exploiting the fact that the spikes observed for fades are absent for dissolves. The experimental results obtained over 14 videos involving lighting effects and rapid object motion from Indian film songs accurately detected 22 out of 25 gradual transitions while falsely detecting one transition. On four benchmark videos of the TRECVID 2001 dataset, the proposed scheme obtained precision, recall, and F-measure values of 91.6, 94.33, and 92.03, respectively.
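The core of such a scheme, correlating consecutive colour histograms and flagging frames that fall below an adaptive threshold, can be sketched as follows. The threshold rule (mean minus k standard deviations) is an assumed form for illustration; the paper's exact rule is not given in the abstract.

```python
import numpy as np

def hist_correlation(h1, h2):
    """Normalized (Pearson) correlation coefficient between two histograms."""
    a, b = h1 - h1.mean(), h2 - h2.mean()
    return float((a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def transition_candidates(hists, k=1.0):
    """Flag frame pairs whose histogram correlation dips below an adaptive
    threshold mean - k*std of the correlation signal (assumed rule)."""
    corr = np.array([hist_correlation(hists[i], hists[i + 1])
                     for i in range(len(hists) - 1)])
    thresh = corr.mean() - k * corr.std()
    return np.where(corr < thresh)[0], corr, thresh
```

During a dissolve the correlation degrades gradually over several frame pairs, whereas a cut produces a single sharp dip; analyzing the shape of the sub-threshold region is what separates the two.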

    Deliverable D1.4 Visual, text and audio information analysis for hypervideo, final release

    Having extensively evaluated the performance of the technologies included in the first release of the WP1 multimedia analysis tools, using content from the LinkedTV scenarios and by participating in international benchmarking activities, we made concrete decisions regarding the appropriateness and importance of each individual method or combination of methods. Combined with an updated list of information needs for each scenario, these decisions led to a new set of analysis requirements to be addressed by the final release of the WP1 analysis techniques. To this end, coordinated efforts in three directions, namely (a) improving a number of methods in terms of accuracy and time efficiency, (b) developing new technologies, and (c) defining synergies between methods to obtain new types of information via multimodal processing, resulted in the final set of multimedia analysis methods for video hyperlinking. Moreover, the developed analysis modules have been integrated into a web-based infrastructure, allowing fully automatic linking between the multitude of WP1 technologies and the overall LinkedTV platform.

    Informative Data Fusion: Beyond Canonical Correlation Analysis

    Multi-modal data fusion is a challenging but common problem arising in fields such as economics, statistical signal processing, medical imaging, and machine learning. In such applications, we have access to multiple datasets that use different data modalities to describe some system feature. Canonical correlation analysis (CCA) is a joint dimensionality reduction algorithm for exactly two datasets. CCA finds a linear transformation for each feature vector set such that the correlation between the two transformed feature sets is maximized. These linear transformations are easily found by computing the SVD of a matrix that involves only the covariance and cross-covariance matrices of the feature vector sets. When these covariance matrices are unknown, an empirical version of CCA substitutes sample covariance estimates formed from training data. However, when the number of training samples is less than the combined dimension of the datasets, CCA fails to reliably detect correlation between the datasets. This thesis explores the problem of detecting correlations in data modeled by the ubiquitous signal-plus-noise data model. We present a modification to CCA, which we call informative CCA (ICCA), that first projects each dataset onto a low-dimensional informative signal subspace. We verify the superior performance of ICCA on real-world datasets and argue for the optimality of trim-then-fuse over fuse-then-trim correlation analysis strategies. We provide a significance test for the correlations returned by ICCA and derive improved estimates of the population canonical vectors using insights from random matrix theory. We then extend the analysis of CCA to regularized CCA (RCCA) and demonstrate that setting the regularization parameter to infinity yields the best performance and the same solution as taking the SVD of the cross-covariance matrix of the two datasets.
    Finally, we apply the ideas learned from ICCA to multiset CCA (MCCA), which analyzes correlations among more than two datasets. There are multiple formulations of MCCA, each using a different combination of objective function and constraint function to describe a notion of multiset correlation. We consider the MAXVAR formulation, provide an informative version of the algorithm, which we call informative MCCA (IMCCA), and demonstrate its superiority on a real-world dataset.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/113419/1/asendorf_1.pd
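As a point of reference for the abstract's description, textbook two-set CCA (not the thesis's ICCA) can be sketched as the SVD of the whitened cross-covariance, exactly the "matrix that involves only the covariance and cross-covariance matrices" mentioned above:

```python
import numpy as np

def cca_correlations(X, Y):
    """Canonical correlations of two (n_samples, dim) datasets via the SVD
    of the whitened cross-covariance Cxx^{-1/2} Cxy Cyy^{-1/2}."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx, Cyy, Cxy = X.T @ X / (n - 1), Y.T @ Y / (n - 1), X.T @ Y / (n - 1)

    def inv_sqrt(C):
        # Symmetric inverse square root via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-12, None))) @ V.T

    s = np.linalg.svd(inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy), compute_uv=False)
    return np.clip(s, 0.0, 1.0)  # singular values = canonical correlations
```

The sample-starved regime the thesis targets corresponds to `n` being smaller than the combined dimension of `X` and `Y`, where the empirical `Cxx` and `Cyy` become ill-conditioned and this direct formula breaks down.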

    SHOT SEGMENTATION FOR CONTENT BASED VIDEO RETRIEVAL

    Content-based video retrieval is a technique used to search and browse large collections of videos stored in a database. This technique has proven useful for numerous applications spread across different domains such as surveillance, security, biomedicine, and traffic regulation. Here, the analysis is carried out based on certain properties extracted from the video frames, such as colour, edge, motion, and texture. Instead of storing the features of every single frame of the video, only the features of the representative frames, which describe the entire video, are stored, resulting in better utilization of storage memory. We propose a method which aims at efficiently segmenting the video into shots and selecting the key frames of each shot accordingly. To determine the shot boundaries, we incorporate colour-histogram and background-subtraction methods. The analysis of the proposed technique is carried out for different videos.
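The histogram half of such a shot segmenter can be sketched as below. This is a deliberate simplification for illustration (greyscale histograms, a fixed L1 threshold, and no background-subtraction stage), not the paper's full scheme.

```python
import numpy as np

def detect_cuts(frames, bins=16, thresh=0.5):
    """Naive hard-cut detector: flag a shot boundary between frames i and
    i+1 when the L1 distance between their normalized grey-level
    histograms exceeds a fixed threshold."""
    hists = []
    for f in frames:
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        hists.append(h / max(h.sum(), 1))  # normalize to frame size
    return [i for i in range(len(hists) - 1)
            if np.abs(hists[i] - hists[i + 1]).sum() > thresh]
```

Once shots are delimited, a simple key-frame rule such as taking the frame closest to the shot's mean histogram gives the representative frames whose features are stored for retrieval.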

    Segmentation and Classification of Multimodal Imagery

    Segmentation and classification are two important computer vision tasks that transform input data into a compact representation that allows fast and efficient analysis. Several challenges exist in generating accurate segmentation or classification results. In a video, for example, objects often change their appearance and are partially occluded, making it difficult to delineate an object from its surroundings. This thesis proposes video segmentation and aerial image classification algorithms to address some of these problems and provide accurate results. We developed a gradient-driven three-dimensional segmentation technique that partitions a video into spatiotemporal objects. The algorithm utilizes the local gradient computed at each pixel location together with a global boundary map acquired through deep learning methods to generate initial pixel groups by traversing from low- to high-gradient regions. A local clustering method is then employed to refine these initial pixel groups. The refined sub-volumes in the homogeneous regions of the video are selected as initial seeds and iteratively combined with adjacent groups based on intensity similarities. The volume growth is terminated at the color boundaries of the video. The over-segments obtained from the above steps are then merged hierarchically by a multivariate approach, yielding a final segmentation map for each frame. In addition, we implemented a streaming version of the above algorithm that requires less computational memory. The results illustrate that our proposed methodology compares favorably, on both a qualitative and a quantitative level, in segmentation quality and computational efficiency with the latest state-of-the-art techniques. We also developed a convolutional neural network (CNN)-based method to efficiently combine information from multisensor remotely sensed images for pixel-wise semantic classification.
    The CNN features obtained from multiple spectral bands are fused at the initial layers of the deep neural network, as opposed to the final layers. This early-fusion architecture has fewer parameters and thereby reduces computational time and GPU memory during training and inference. We also introduce a composite architecture that fuses features throughout the network. The methods were validated on four different datasets: ISPRS Potsdam, Vaihingen, IEEE Zeebruges, and a Sentinel-1/Sentinel-2 dataset. For the Sentinel-1/-2 data, we obtain ground-truth labels for three classes from OpenStreetMap. Results on all the images show that early fusion, specifically after layer three of the network, achieves results similar to or better than a decision-level fusion mechanism. The performance of the proposed architecture is also on par with state-of-the-art results.
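The early-fusion idea, stacking all sensors' bands channel-wise so the very first filter bank sees every modality at once, can be illustrated with a minimal hand-rolled convolution. The band counts and filter sizes here are illustrative assumptions, not the thesis's architecture:

```python
import numpy as np

def conv2d(x, w):
    """Valid 2D convolution of a (C, H, W) input with (K, C, kh, kw) filters."""
    K, C, kh, kw = w.shape
    _, H, W = x.shape
    out = np.zeros((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                # Each filter spans ALL input channels, i.e. all modalities.
                out[k, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[k])
    return out

rng = np.random.default_rng(0)
optical = rng.random((3, 8, 8))   # e.g. three optical bands (Sentinel-2)
radar = rng.random((1, 8, 8))     # e.g. one SAR band (Sentinel-1)

# Early fusion: concatenate the modalities before the first convolution.
fused = np.concatenate([optical, radar], axis=0)   # (4, 8, 8)
filters = rng.random((6, 4, 3, 3))                 # 6 filters over 4 channels
features = conv2d(fused, filters)                  # (6, 6, 6) feature map
```

Decision-level fusion would instead run separate networks per sensor and merge their per-pixel predictions at the end; the early variant shares one filter bank across modalities, which is why it needs fewer parameters.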

    Connected Attribute Filtering Based on Contour Smoothness
