
    Copy Detection

    This thesis describes content-based search methods. The chosen method is then applied to finding similar images and video frames. The thesis also characterizes the methods used to accelerate this type of search, evaluates them, discusses the results, and proposes commonly used enhancements for them.
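
    The abstract does not name the chosen content-based method, so as a minimal, hedged sketch of how such a similarity search can work, the snippet below ranks candidate images by how well their global HSV colour histograms correlate with the query's (using OpenCV); the bin count and the correlation measure are illustrative assumptions, not choices from the thesis.

    # Minimal content-based similarity sketch: rank candidate images by
    # how well their colour histograms correlate with the query's.
    import cv2
    import numpy as np

    def colour_histogram(image_path: str, bins: int = 32) -> np.ndarray:
        """Return a normalised 2-D hue/saturation histogram as a feature vector."""
        hsv = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [bins, bins], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        return hist.flatten()

    def most_similar(query_path: str, candidates: list[str]) -> str:
        """Return the candidate whose histogram correlates best with the query's."""
        q = colour_histogram(query_path)
        return max(candidates,
                   key=lambda p: cv2.compareHist(q, colour_histogram(p),
                                                 cv2.HISTCMP_CORREL))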

    TRECVID 2008 - goals, tasks, data, evaluation mechanisms and metrics

    The TREC Video Retrieval Evaluation (TRECVID) 2008 is a TREC-style video analysis and retrieval evaluation, the goal of which remains to promote progress in content-based exploitation of digital video via open, metrics-based evaluation. Over the last 7 years this effort has yielded a better understanding of how systems can effectively accomplish such processing and how one can reliably benchmark their performance. In 2008, 77 teams (see Table 1) from various research organizations --- 24 from Asia, 39 from Europe, 13 from North America, and 1 from Australia --- participated in one or more of five tasks: high-level feature extraction, search (fully automatic, manually assisted, or interactive), pre-production video (rushes) summarization, copy detection, or surveillance event detection. The copy detection and surveillance event detection tasks were run for the first time in TRECVID. This paper presents an overview of TRECVID 2008.

    Video Copy Detection Utilizing Log-Polar Transformation

    Video Copy Detection is the process of comparing two videos to determine their similarity and decide whether they are copies. This thesis enhances some of the common algorithms used in Video Copy Detection by applying the Log-Polar transformation as a pre-processing step. This pre-processing step is expected to increase the speed of the overall Video Copy Detection process while maintaining the accuracy of the algorithms. The results of this research show that the addition of a Log-Polar pre-processing step reduces the computation time of the overall Video Copy Detection process: the additional time needed to perform the transformation is outweighed by the overall reduction in computation time. Accuracy and recall are only slightly affected by this pre-processing step. The results also show that the video frame size can be significantly compressed with minimal effect on the algorithm's overall performance.
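
    The key property behind this pre-processing step is that the log-polar mapping turns rotation and scaling about the frame centre into simple translations, so transformed frames can be compared (and heavily downsized) cheaply. A hedged sketch using OpenCV follows; the greyscale conversion, output size, and interpolation flags are assumptions rather than settings taken from the thesis.

    # Log-polar pre-processing sketch: remap a frame so that rotation and
    # scaling of the original become translations in the output image.
    import cv2
    import numpy as np

    def log_polar_frame(frame: np.ndarray,
                        out_size: tuple[int, int] = (64, 64)) -> np.ndarray:
        """Convert a BGR video frame to a small log-polar representation."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        h, w = gray.shape
        center = (w / 2.0, h / 2.0)
        max_radius = min(center)  # largest circle fully inside the frame
        return cv2.warpPolar(gray, out_size, center, max_radius,
                             cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR)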

    Efficient image copy detection using multi-scale fingerprints

    Inspired by the multi-resolution histogram, we propose a multi-scale SIFT descriptor to improve discriminability. A series of SIFT descriptions at different scales is first acquired by varying the actual size of each spatial bin. Principal component analysis (PCA) is then employed to reduce them to low-dimensional vectors, which are further combined into one 128-dimensional multi-scale SIFT description. Next, an entropy-maximization-based binarization is employed to encode the descriptions into binary codes called fingerprints, used to index the local features. Furthermore, an efficient search architecture consisting of lookup tables and inverted image-ID lists is designed to improve query speed. Since fingerprint building is of low complexity, this method is very efficient and scalable to very large databases. In addition, the multi-scale fingerprints are highly discriminative, so copies can be effectively distinguished from merely similar objects, which leads to improved performance in the detection of copies. The experimental evaluation shows that our approach outperforms state-of-the-art methods.
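
    The indexing side of this pipeline (binarize the reduced descriptors, then file them in inverted lists) can be sketched briefly. Thresholding each dimension at its median is one standard reading of "entropy maximization based binarization", since it makes every output bit equally likely to be 0 or 1; the exact scheme and table layout in the paper may differ, so treat this as an assumption-laden illustration.

    # Fingerprint indexing sketch: binarize PCA-reduced descriptors and
    # build an inverted list from binary code to the image IDs containing it.
    from collections import defaultdict
    import numpy as np

    def binarize(descriptors: np.ndarray) -> np.ndarray:
        """Threshold each dimension at its median so each bit carries maximal entropy."""
        thresholds = np.median(descriptors, axis=0)
        return (descriptors > thresholds).astype(np.uint8)

    def build_inverted_index(fingerprints: np.ndarray, image_ids: list[int]) -> dict:
        """Map each binary fingerprint (as a bytes key) to the IDs of images containing it."""
        index = defaultdict(list)
        for code, img_id in zip(fingerprints, image_ids):
            index[code.tobytes()].append(img_id)
        return index

    def lookup(index: dict, fingerprint: np.ndarray) -> list[int]:
        """Exact-match probe; a real system would also probe nearby codes."""
        return index.get(fingerprint.tobytes(), [])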

    Identification versus CBCD: a comparison of different evaluation techniques

    Fingerprint techniques have a significant advantage with respect to watermarking: a fingerprint can be extracted at any moment in the lifetime of a multimedia content. This is fundamental to solving the copy-detection problem, mainly because many copies may circulate within huge amounts of data and because each copy can be attacked in several ways (compression, re-encoding, text overlay, etc.). In this paper the copy-detection problem is studied and tested from two different points of view: the content-based and identification approaches. The results show that the proposed system is quite robust to some copy modifications and, above all, that the overall results depend on the evaluation method used for testing.
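
    A minimal sketch of the identification decision such fingerprint systems rest on: declare two items copies when the relative Hamming distance between their binary fingerprints falls below a threshold. The threshold value here is purely an illustrative assumption, not one taken from the paper.

    # Identification-style matching sketch: compare binary fingerprints
    # bit by bit and decide "copy" below a relative-distance threshold.
    import numpy as np

    def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
        """Number of differing bits between two equal-length binary fingerprints."""
        return int(np.count_nonzero(a != b))

    def is_copy(query_fp: np.ndarray, ref_fp: np.ndarray,
                threshold: float = 0.15) -> bool:
        """Decide whether the query item is a copy of the reference item."""
        return hamming_distance(query_fp, ref_fp) / query_fp.size < threshold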