1,669 research outputs found

    Action Recognition in Videos: from Motion Capture Labs to the Web

    This paper presents a survey of human action recognition approaches based on visual data recorded from a single video camera. We propose an organizing framework that highlights the evolution of the area, with techniques moving from heavily constrained motion capture scenarios towards more challenging, realistic, "in the wild" videos. The proposed organization is based on the representation used as input for the recognition task, emphasizing the hypotheses assumed and, consequently, the constraints imposed on the type of video that each technique is able to address. Making these hypotheses and constraints explicit renders the framework particularly useful for selecting a method for a given application. Another advantage of the proposed organization is that it allows the newest approaches to be categorized seamlessly alongside traditional ones, while providing an insightful perspective on the evolution of the action recognition task to date. That perspective is the basis for the discussion at the end of the paper, where we also present the main open issues in the area. Comment: Preprint submitted to CVIU; survey paper, 46 pages, 2 figures, 4 tables.

    Detection of near-duplicates in large image collections

    The vast numbers of images on the Web include many duplicates, and an even larger number of near-duplicate variants derived from the same original. These include thumbnails stored by search engines, copies shared by various news portals, and images that appear on multiple web sites, legitimately or otherwise. Such near-duplicates appear in the results of many web image searches; they constitute redundancy and may also represent infringements of copyright. Digital images can be easily altered through simple manipulations such as conversion to grey-scale, colour balance change, rescaling, rotation, and cropping. Any of these operations defeats simple duplicate detection methods such as bit-level hashing. The ability to detect such variants with a reasonable degree of reliability and accuracy would support the reduction of redundancy in collections and in the presentation of search results, and would also allow the detection of possible copyright violations. Some existing methods for identifying near-duplicates are derived from computer vision techniques; these have shown high effectiveness for this domain, but are computationally expensive and therefore impractical for large image collections. Other methods address the problem using conventional CBIR approaches that are more efficient but typically not as robust. None of the previous methods has addressed the problem in its entirety, and none has addressed the large-scale near-duplicate problem on the Web; there has been no analysis of the kinds of alterations that are common on the Web, nor any evaluation of whether real cases of near-duplication can in fact be identified. In this thesis, we analyse the different types of alterations and near-duplicates present in a range of popular web image searches, and establish a collection and evaluation ground truth using real-world near-duplicate examples. We present a simple ranking approach that reduces the number of local descriptors and therefore improves the efficiency of the descriptor-based retrieval method for near-duplicate detection. The descriptor-based method has been shown to produce near-perfect detection of near-duplicates, but was previously computationally very expensive. We show that, while maintaining comparable effectiveness, our method scales well to large collections of hundreds of thousands of images. We also explore a more compact indexing structure to support near-duplicate image detection. We develop a method to automatically detect the pair-wise near-duplicate relationship of images without the use of a query. We adapt the hash-based probabilistic counting method --- originally used for near-duplicate text document detection --- to work with local descriptors; our adaptation offers the first effective and efficient non-query-based approach in this domain. We further incorporate our pair-wise detection approach into the clustering of near-duplicates. We present a clustering method designed specifically for near-duplicate images, arguably the first clustering method to achieve a high level of effectiveness in this domain. We also show that near-duplicates within a large collection of a million images can be effectively clustered using our approach in less than an hour, using relatively modest computational resources. Overall, our proposed methods provide practical approaches to the detection and management of near-duplicate images in large collections.
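    The abstract names the hash-based probabilistic counting technique adapted from text near-duplicate detection but gives no implementation. As a rough illustration of that family of methods, the sketch below computes MinHash signatures over sets of quantized local-descriptor IDs and estimates set overlap from signature agreement. The descriptor-ID sets, signature length, and use of BLAKE2 as the hash family are illustrative assumptions, not the thesis's actual design.

        import hashlib

        NUM_HASHES = 64  # signature length; an illustrative choice

        def minhash_signature(feature_ids, num_hashes=NUM_HASHES):
            """MinHash signature for a set of quantized descriptor IDs.

            Each hash function is simulated by salting a stable hash of
            the ID with its index; keeping the minimum per function
            summarises the set, so two signatures agree in roughly the
            same fraction of positions as the Jaccard similarity of the
            underlying sets.
            """
            sig = []
            for i in range(num_hashes):
                sig.append(min(
                    int.from_bytes(
                        hashlib.blake2b(f"{i}:{fid}".encode(),
                                        digest_size=8).digest(), "big")
                    for fid in feature_ids))
            return sig

        def estimated_jaccard(sig_a, sig_b):
            """Fraction of matching positions estimates Jaccard similarity."""
            return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

        # Hypothetical quantized-descriptor sets for three images: B is a
        # near-duplicate of A (most visual words shared), C is unrelated.
        img_a = {101, 205, 311, 402, 517, 623, 704, 808}
        img_b = {101, 205, 311, 402, 517, 623, 999, 808}
        img_c = {12, 34, 56, 78, 90, 123, 456, 789}

        sig_a, sig_b, sig_c = (minhash_signature(s) for s in (img_a, img_b, img_c))
        print(estimated_jaccard(sig_a, sig_b))  # high: likely near-duplicates
        print(estimated_jaccard(sig_a, sig_c))  # near zero: unrelated images

    Because only compact signatures need to be compared, pair-wise candidates can be found without issuing a query per image, which is the property the non-query-based approach above relies on.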

    Detection and tracking of repeated sequences in videos

    Ankara: Department of Computer Engineering and Institute of Engineering and Science, Bilkent University, 2007. Thesis (Master's), Bilkent University, 2007. Includes bibliographical references (leaves 87-92). In this thesis, we propose a new method to search for different instances of a video sequence inside a long video. The proposed method is robust to viewpoint and illumination changes, which may occur because the sequences are captured at different times with different cameras, and to differences in the order and number of frames in the sequences, which may occur due to editing. The algorithm does not require any query to be given for searching, and finds all repeating video sequences inside a long video in a fully automatic way. First, the frames of the video are ranked according to the similarity of their distributions of salient points and colour values. Then, a tree-based approach is used to search for repetitions of a video sequence, if any exist. In the last step, these repeating sequences are pruned for more accurate results. Results are provided on two full-length feature movies, Run Lola Run and Groundhog Day, on commercials from the TRECVID 2004 news video corpus, and on the dataset created for the CIVR 2007 Copy Detection Showcase. In these experiments, we obtain 93% precision on the CIVR 2007 Copy Detection Showcase dataset and exceed 80% precision on the other sets. Can, Tolga. M.S.
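    The first step above ranks frames by the similarity of their colour distributions. A minimal sketch of that kind of frame comparison, using histogram intersection over coarse joint RGB histograms, is shown below; the bin count and the toy frames are illustrative assumptions, and the thesis additionally uses salient-point distributions, which this sketch omits.

        import numpy as np

        def colour_histogram(frame, bins=8):
            """Coarse joint RGB histogram, L1-normalised.

            frame: H x W x 3 uint8 array. Eight bins per channel (an
            illustrative choice) give a 512-bin distribution per frame.
            """
            q = (frame // (256 // bins)).reshape(-1, 3).astype(np.int64)
            idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
            hist = np.bincount(idx, minlength=bins ** 3).astype(float)
            return hist / hist.sum()

        def histogram_intersection(h1, h2):
            """Similarity in [0, 1]; 1 means identical colour distributions."""
            return float(np.minimum(h1, h2).sum())

        # Toy frames: frame_b repeats frame_a with a small brightness
        # shift, frame_c is a different shot with a different dominant colour.
        frame_a = np.zeros((120, 160, 3), dtype=np.uint8)
        frame_a[..., 0] = 200
        frame_b = np.clip(frame_a.astype(int) + 10, 0, 255).astype(np.uint8)
        frame_c = np.zeros((120, 160, 3), dtype=np.uint8)
        frame_c[..., 2] = 200

        h_a, h_b, h_c = map(colour_histogram, (frame_a, frame_b, frame_c))
        print(histogram_intersection(h_a, h_b))  # 1.0: shift stays in the same coarse bins
        print(histogram_intersection(h_a, h_c))  # 0.0: no shared colour bins

    Coarse bins are what make the comparison tolerant to mild illumination changes: small pixel-value shifts usually stay within the same bin.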

    Object Duplicate Detection

    With the technological evolution of digital acquisition and storage technologies, millions of images and video sequences are captured every day and shared in online services. One way of exploring this huge volume of images and videos is to search for a particular object depicted in them by means of object duplicate detection. The need for research on object duplicate detection is therefore validated by several image and video retrieval applications, such as tag propagation, augmented reality, surveillance, mobile visual search, and television statistics measurement. Object duplicate detection is the detection of objects that are visually identical or very similar to a query object. The input is not restricted to a single image; it can be several images of an object, or even a video. This dissertation describes the author's contributions to solving problems in object duplicate detection in computer vision. A novel graph-based approach is introduced for 2D and 3D object duplicate detection in still images. A graph model is used to represent the 3D spatial information of the object based on local features extracted from training images, so that explicit and complex 3D object modeling is avoided. Therefore, improved performance can be achieved in comparison to existing methods in terms of both robustness and computational complexity. Our method is shown to be robust in detecting the same objects even when the images containing them are taken from very different viewpoints or distances. Furthermore, we apply our object duplicate detection method to video, where training images are added iteratively to the video sequence in order to compensate for 3D view variations, illumination changes, and partial occlusions. Finally, we show several mobile applications of object duplicate detection, such as a museum guide based on object recognition, money recognition, and flower recognition. General object duplicate detection may fail to detect chess pieces; however, by considering context, such as the position on the chess board and the height of the piece, detection can be made more accurate. We also show, through a game called Epitome, that user interaction further improves image retrieval compared to purely content-based methods.
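    The graph model above is built on matches between local features; the matching step itself is standard, and a minimal sketch of it follows, using ORB features and Lowe's ratio test via OpenCV. ORB, the 0.75 ratio, and the image paths are illustrative stand-ins for this sketch, not the dissertation's exact feature or parameters.

        import cv2

        def local_feature_matches(path_a, path_b, ratio=0.75):
            """Match local features between two images with a ratio test.

            Returns keypoint-coordinate pairs whose best match is clearly
            better than the second best, the usual filter applied before
            any spatial (e.g. graph-based) verification of an object
            duplicate.
            """
            img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
            orb = cv2.ORB_create(nfeatures=1000)
            kps_a, desc_a = orb.detectAndCompute(img_a, None)
            kps_b, desc_b = orb.detectAndCompute(img_b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
            good = []
            for pair in matcher.knnMatch(desc_a, desc_b, k=2):
                if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                    good.append((kps_a[pair[0].queryIdx].pt,
                                 kps_b[pair[0].trainIdx].pt))
            return good

        # Hypothetical files; many surviving matches suggest the same object.
        matches = local_feature_matches("query_object.jpg", "database_image.jpg")
        print(f"{len(matches)} ratio-test matches")

    The surviving matches carry 2D positions on both images, which is exactly the information a graph over keypoints can then verify for spatial consistency.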

    Design, implementation, and evaluation of scalable content-based image retrieval techniques.

    Wong, Yuk Man. Thesis (M.Phil.)--Chinese University of Hong Kong, 2007. Includes bibliographical references (leaves 95-100). Abstracts in English and Chinese. Table of contents:
    Abstract; Acknowledgement
    Chapter 1: Introduction (1.1 Overview; 1.2 Contribution; 1.3 Organization of This Work)
    Chapter 2: Literature Review (2.1 Content-based Image Retrieval: Query Technique, Relevance Feedback, Previously Proposed CBIR Systems; 2.2 Invariant Local Feature; 2.3 Invariant Local Feature Detector: Harris Corner Detector, DoG Extrema Detector, Harris-Laplacian Corner Detector, Harris-Affine Covariant Detector; 2.4 Invariant Local Feature Descriptor: Scale Invariant Feature Transform (SIFT), Shape Context, PCA-SIFT, Gradient Location and Orientation Histogram (GLOH), Geodesic-Intensity Histogram (GIH), Experiment; 2.5 Feature Matching: Matching Criteria, Distance Measures, Searching Techniques)
    Chapter 3: A Distributed Scheme for Large-Scale CBIR (3.1 Overview; 3.2 Related Work; 3.3 Scalable Content-Based Image Retrieval Scheme: Overview of Our Solution, Locality-Sensitive Hashing, Scalable Indexing Solutions, Disk-Based Multi-Partition Indexing, Parallel Multi-Partition Indexing; 3.4 Feature Representation; 3.5 Empirical Evaluation: Experimental Testbed, Performance Evaluation Metrics, Experimental Setup, Experiment I: Disk-Based Multi-Partition Indexing Approach, Experiment II: Parallel-Based Multi-Partition Indexing Approach; 3.6 Application to WWW Image Retrieval; 3.7 Summary)
    Chapter 4: Image Retrieval System for IND Detection (4.1 Overview: Motivation, Related Work, Objective, Contribution; 4.2 Database Construction: Image Representations, Index Construction, Keypoint and Image Lookup Tables; 4.3 Database Query: Matching Strategies, Verification Processes, Image Voting; 4.4 Performance Evaluation: Evaluation Metrics, Results, Summary)
    Chapter 5: Shape-SIFT Feature Descriptor (5.1 Overview; 5.2 Related Work; 5.3 SHAPE-SIFT Descriptors: Orientation Assignment, Canonical Orientation Determination, Keypoint Descriptor; 5.4 Performance Evaluation; 5.5 Summary)
    Chapter 6: Conclusions and Future Work (6.1 Conclusions; 6.2 Future Work)
    Appendix A: Publication; Bibliography
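    The scalable indexing scheme in Chapter 3 is built around locality-sensitive hashing. As a rough illustration of the general idea (not the thesis's configuration), the sketch below implements the random-hyperplane variant of LSH over descriptor vectors; the number of hash bits, the 128-dimensional descriptors, and the toy data are all assumptions.

        import numpy as np
        from collections import defaultdict

        class HyperplaneLSH:
            """Random-hyperplane LSH: vectors at a small angular distance
            tend to fall into the same bucket, so a query scans only its
            own bucket instead of the whole collection."""

            def __init__(self, dim, num_bits=16, seed=0):
                rng = np.random.default_rng(seed)
                self.planes = rng.standard_normal((num_bits, dim))
                self.table = defaultdict(list)

            def _key(self, vec):
                # One bit per hyperplane: which side the vector falls on.
                return tuple((self.planes @ vec > 0).astype(int))

            def add(self, vec, label):
                self.table[self._key(vec)].append(label)

            def query(self, vec):
                return self.table[self._key(vec)]

        # Index hypothetical 128-d descriptors (SIFT-sized) and probe with
        # a slightly perturbed copy of one of them.
        rng = np.random.default_rng(1)
        index = HyperplaneLSH(dim=128)
        vectors = rng.standard_normal((1000, 128))
        for i, v in enumerate(vectors):
            index.add(v, i)
        probe = vectors[42] + 0.01 * rng.standard_normal(128)
        print(index.query(probe))  # bucket very likely contains 42

    Multiple such tables with different random hyperplanes are typically kept in practice to trade memory for recall; a multi-partition scheme like the one in Chapter 3 then decides how those tables are laid out on disk or across machines.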

    HIERARCHICAL LEARNING OF DISCRIMINATIVE FEATURES AND CLASSIFIERS FOR LARGE-SCALE VISUAL RECOGNITION

    Enabling computers to recognize objects present in images has been a long-standing but tremendously challenging problem in the field of computer vision for decades. Beyond the difficulties resulting from huge appearance variations, large-scale visual recognition poses unprecedented challenges when the number of visual categories being considered reaches thousands and the number of images increases to millions. This dissertation contributes to addressing a number of the challenging issues in large-scale visual recognition. First, we develop an automatic image-text alignment method to collect massive amounts of labeled images from the Web for training visual concept classifiers. Specifically, we first crawl a large number of cross-media Web pages containing Web images and their auxiliary texts, and then segment them into a collection of image-text pairs. We then show that near-duplicate image clustering according to visual similarity can significantly reduce the uncertainty about the relatedness of Web images' semantics to their auxiliary text terms or phrases. Finally, we empirically demonstrate that a random walk over a newly proposed phrase correlation network can help to achieve more precise image-text alignment by refining the relevance scores between Web images and their auxiliary text terms. Second, we propose a visual tree model to reduce the computational complexity of a large-scale visual recognition system by hierarchically organizing and learning the classifiers for a large number of visual categories in a tree structure. Compared to previous tree models, such as the label tree, our visual tree model does not require training a huge number of classifiers in advance, which is computationally expensive. Nevertheless, we experimentally show that the proposed visual tree achieves results comparable to, or even better than, those of other tree models in terms of recognition accuracy and efficiency. Third, we present a joint dictionary learning (JDL) algorithm which exploits inter-category visual correlations to learn more discriminative dictionaries for image content representation. Given a group of visually correlated categories, JDL simultaneously learns one common dictionary and multiple category-specific dictionaries to explicitly separate the shared visual atoms from the category-specific ones. We accordingly develop three classification schemes to make full use of the dictionaries learned by JDL for visual content representation in the task of image categorization. Experiments on two image data sets, which contain 17 and 1,000 categories respectively, demonstrate the effectiveness of the proposed algorithm. In the last part of the dissertation, we develop a novel data-driven algorithm to quantitatively characterize the semantic gaps of different visual concepts for learning-complexity estimation and inference-model selection. The semantic gaps are estimated directly in the visual feature space, since that space is the common space for concept classifier training and automatic concept detection. We show that this quantitative characterization of the semantic gaps helps to automatically select more effective inference models for classifier training, which further improves the recognition accuracy.
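    The abstract does not give the random-walk formulation used for refining relevance scores over the phrase correlation network. The sketch below shows the generic restart-style iteration such refinement usually reduces to; the correlation matrix, restart weight, and initial relevance scores are invented for illustration and are not the dissertation's actual values.

        import numpy as np

        def refine_scores(correlation, initial, restart=0.15, iters=50):
            """Random walk with restart over a phrase correlation network.

            Repeatedly diffuses relevance mass along phrase-phrase
            correlation edges while a restart term anchors the walk to the
            initial image-to-phrase relevance, so correlated phrases
            reinforce each other's scores.
            """
            # Column-normalise so each step is a proper transition.
            transition = correlation / correlation.sum(axis=0, keepdims=True)
            scores = initial.copy()
            for _ in range(iters):
                scores = restart * initial + (1 - restart) * transition @ scores
            return scores

        # Toy 4-phrase network: phrases 0 and 1 are strongly correlated,
        # so phrase 1 should gain score from phrase 0's high initial
        # relevance while the weakly connected phrases gain little.
        correlation = np.array([[1.0, 0.8, 0.1, 0.0],
                                [0.8, 1.0, 0.1, 0.0],
                                [0.1, 0.1, 1.0, 0.2],
                                [0.0, 0.0, 0.2, 1.0]])
        initial = np.array([0.9, 0.0, 0.1, 0.0])
        print(refine_scores(correlation, initial))

    The restart weight controls how far scores can drift from the initial alignment: a small value trusts the network structure more, a large value keeps the refined scores close to the raw image-text relevance.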

    3D Face Recognition
