2 research outputs found

    A Deterministic Eviction Model for Removing Redundancies in Video Corpus

    Traditional storage approaches are being challenged by huge data volumes. In multimedia collections, files are rarely exact duplicates of one another; instead, they are frequently edited, producing similar copies of the same file. This paper proposes a similarity-based deduplication approach that evicts similar duplicates from archive storage by comparing samples of binary hashes to identify duplicates. Eviction begins by dividing the query video into dynamic key frames based on the video length. The binary hash codes of these frames are then compared with those of existing key frames to identify differences. A similarity score derived from these differences determines the eradication strategy for the duplicate copy. Duplicate elimination proceeds in two levels: removal of exact duplicates and removal of similar duplicates. The proposed approach shortens the comparison window by comparing only the candidate hash codes selected from the dynamic key frames, and aims at accurate, lossless duplicate removal. The presented work was implemented and tested on a synthetic video dataset. Results show a reduction in redundant data and an increase in available storage space. The binary hashes and similarity scores contributed to a good deduplication ratio and overall performance.
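
    The sketch below illustrates the kind of pipeline the abstract describes: key frames chosen from the video length, binary hash codes per key frame, a similarity score from hash differences, and a two-level eviction decision. The hashing scheme (average-hash style), the one-key-frame-per-second rule, and the thresholds are assumptions for illustration, not the paper's exact parameters.

        import numpy as np

        def binary_hash(frame, size=8):
            # Average-hash style binary code: block-average the frame down to a
            # size x size grid, then threshold each cell at the grid mean.
            h, w = frame.shape
            small = frame[:h - h % size, :w - w % size]
            small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
            return (small > small.mean()).astype(np.uint8).ravel()

        def keyframe_indices(n_frames, fps=25):
            # Dynamic key-frame selection driven by video length
            # (assumption: roughly one key frame per second).
            return list(range(0, n_frames, max(1, fps)))

        def similarity_score(hashes_a, hashes_b):
            # Similarity = 1 - normalised Hamming distance, averaged over the
            # aligned key frames of the two videos (an illustrative choice).
            n = min(len(hashes_a), len(hashes_b))
            if n == 0:
                return 0.0
            bits = hashes_a[0].size
            dist = sum(int(np.count_nonzero(hashes_a[i] != hashes_b[i])) for i in range(n))
            return 1.0 - dist / (n * bits)

        def eviction_decision(score, exact_thr=1.0, similar_thr=0.90):
            # Two-level elimination: exact duplicates first, then similar ones.
            if score >= exact_thr:
                return "evict (exact duplicate)"
            if score >= similar_thr:
                return "evict (similar duplicate)"
            return "keep"

        # Toy usage on synthetic frames: an archived video and a lightly edited copy.
        rng = np.random.default_rng(0)
        archived = rng.integers(0, 256, size=(100, 64, 64)).astype(float)
        edited = archived + rng.normal(0.0, 4.0, size=archived.shape)
        idx = keyframe_indices(len(archived))
        score = similarity_score([binary_hash(f) for f in archived[idx]],
                                 [binary_hash(f) for f in edited[idx]])
        print(round(score, 3), eviction_decision(score))

    The two thresholds separate the exact-duplicate pass from the similar-duplicate pass; in practice they would be tuned so that only lossless (or acceptably near-identical) removals are made.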

    A compressed-domain robust descriptor for near duplicate video copy detection

    This paper introduces a global descriptor from the compressed video domain (H.264) for near-duplicate video copy detection. The proposed descriptor uses a spatial-temporal feature structure in an ordinal pattern distribution format. It is constructed from the Intra-Prediction Modes (IPMs) of key frames (IDR and I slices) and is extracted directly from compressed video files encoded with the MPEG-4/AVC (H.264) codec. Intra-prediction is the compression technique used in the key frames of the H.264 codec. Because the proposed feature describes pictures globally, this work compares it against two well-known global image descriptors, ordinal intensity/colour histograms and ordinal auto-correlograms, as baselines. Our experiments show that the proposed feature outperforms the baselines on the non-geometric transformations T3, T4 and T5 in both effectiveness and efficiency, owing to a better representation of the image content and a smaller feature vector size. The core strength of the proposed feature lies in non-linear brightness and contrast changes (gamma expansion and compression), where intensity/colour histograms and auto-correlograms are deficient.
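
    A minimal sketch of an ordinal descriptor built from intra-prediction modes is shown below. Extracting IPMs from an H.264 bitstream requires a codec-level parser and is not shown; the sketch assumes the modes of a key frame are already available as a 2-D array of mode indices. The grid size, number of modes, ranking step, and L1 distance are illustrative assumptions rather than the paper's exact construction.

        import numpy as np

        def ordinal_ipm_descriptor(ipm_map, grid=4, n_modes=9):
            # Split the per-block IPM map into grid x grid regions, count how often
            # each prediction mode occurs per region, then keep only the rank order
            # (ordinal pattern) of those counts across regions.
            h, w = ipm_map.shape
            counts = np.zeros((grid * grid, n_modes))
            for gy in range(grid):
                for gx in range(grid):
                    block = ipm_map[gy * h // grid:(gy + 1) * h // grid,
                                    gx * w // grid:(gx + 1) * w // grid]
                    counts[gy * grid + gx] = np.bincount(block.ravel(),
                                                         minlength=n_modes)[:n_modes]
            # Ordinal ranks of each mode's count across the regions.
            ranks = np.argsort(np.argsort(counts, axis=0), axis=0)
            return ranks.ravel().astype(np.float32)

        def descriptor_distance(d1, d2):
            # L1 distance between the compact rank vectors of two key frames.
            return float(np.abs(d1 - d2).sum())

        # Toy usage: the same key frame's IPM map before and after a small local edit.
        rng = np.random.default_rng(1)
        ipm_a = rng.integers(0, 9, size=(72, 128))
        ipm_b = ipm_a.copy()
        ipm_b[:8, :8] = 0
        print(descriptor_distance(ordinal_ipm_descriptor(ipm_a),
                                  ordinal_ipm_descriptor(ipm_b)))

    Keeping only rank order rather than raw counts is what makes an ordinal representation robust to global intensity changes such as gamma expansion and compression, which is the behaviour the abstract highlights.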