20 research outputs found

    Efficient and Robust Detection of Duplicate Videos in a Database

    In this paper, duplicate detection is cast as retrieving the best-matching model video for a given query video using fingerprints. We use the Color Layout Descriptor in an opponent color space to extract features from each frame and apply k-means clustering to generate fingerprints, which are further encoded by vector quantization. The model-to-query video distance is computed with a new distance measure, and a coarse-to-fine matching scheme is used to retrieve the best match efficiently. Experiments on query videos and real-time video with an average duration of 60 seconds show that duplicate videos are detected with high similarity.
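
    A minimal sketch of the fingerprinting pipeline above, assuming OpenCV, SciPy and scikit-learn. The 8x8 descriptor layout, the codebook size k and the Chamfer-style fingerprint distance are illustrative assumptions, not the paper's exact method.

        # Hypothetical sketch: CLD-style frame descriptors in opponent colour space,
        # k-means fingerprints and a simple fingerprint distance.
        import numpy as np
        import cv2
        from scipy.fft import dctn
        from sklearn.cluster import KMeans

        def opponent_channels(bgr):
            b, g, r = cv2.split(bgr.astype(np.float32))
            o1 = (r - g) / np.sqrt(2)
            o2 = (r + g - 2 * b) / np.sqrt(6)
            o3 = (r + g + b) / np.sqrt(3)
            return [o1, o2, o3]

        def cld_descriptor(frame, keep=6):
            """Colour-layout-style descriptor: 8x8 average image per opponent
            channel, 2-D DCT, keep the low-frequency (keep x keep) block."""
            feats = []
            for ch in opponent_channels(frame):
                tiny = cv2.resize(ch, (8, 8), interpolation=cv2.INTER_AREA)
                coeffs = dctn(tiny, norm='ortho')[:keep, :keep]
                feats.append(coeffs.ravel())
            return np.concatenate(feats)

        def video_fingerprint(frames, k=16):
            """k-means centroids over frame descriptors act as the fingerprint
            (a small vector-quantisation codebook for the video)."""
            X = np.stack([cld_descriptor(f) for f in frames])
            km = KMeans(n_clusters=min(k, len(X)), n_init=10, random_state=0).fit(X)
            return km.cluster_centers_

        def fingerprint_distance(fp_query, fp_model):
            """Assumed symmetric distance: each centroid is matched to its nearest
            centroid in the other fingerprint (a Chamfer-style measure)."""
            d = np.linalg.norm(fp_query[:, None, :] - fp_model[None, :, :], axis=-1)
            return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())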

    Multiple Hashing Integration for Real-Time Large Scale Part-to-Part Video Matching

    A real-time, large-scale part-to-part video matching algorithm, based on the cross-correlation of intensity-of-motion curves, is proposed with a view to originality recognition, video database cleansing, copyright enforcement, video tagging and video result re-ranking. Moreover, it is suggested how the most representative hashes and distance functions - strada, discrete cosine transform, Marr-Hildreth and radial - should be integrated so that the matching algorithm is invariant against blur, compression and rotation distortions: (R, σ) ∈ [1, 20] × [1, 8], from 512×512 to 32×32 pixels², and from 10° to 180°. The DCT hash is invariant against blur and compression up to 64×64 pixels². Nevertheless, although its performance against rotation is the best, with a success rate of up to 70%, it should be combined with the Marr-Hildreth distance function: the image selected by the DCT hash should lie at a distance lower than 1.15 times the Marr-Hildreth minimum distance.
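
    A hedged sketch of the hash-integration rule described above, assuming all images are pre-resized to 32x32 grayscale float arrays; the DCT and Marr-Hildreth computations below are simplified stand-ins for the actual hash implementations.

        # A DCT hash picks the best candidate, which is accepted only if its
        # Marr-Hildreth (Laplacian-of-Gaussian) distance is within 1.15x the
        # minimum such distance over all candidates.
        import numpy as np
        from scipy.fft import dctn
        from scipy.ndimage import gaussian_laplace

        def dct_hash(gray32):
            """64-bit pHash-style signature: low-frequency 8x8 DCT block
            thresholded at its median (input: 32x32 float grayscale)."""
            low = dctn(gray32, norm='ortho')[:8, :8]
            return (low > np.median(low)).ravel()

        def marr_hildreth_distance(gray_a, gray_b, sigma=2.0):
            """Distance between binarised Laplacian-of-Gaussian edge maps."""
            ea = gaussian_laplace(gray_a, sigma) > 0
            eb = gaussian_laplace(gray_b, sigma) > 0
            return np.count_nonzero(ea != eb) / ea.size

        def match(query, candidates, slack=1.15):
            qh = dct_hash(query)
            # 1) candidate selected by the DCT hash (minimum Hamming distance)
            ham = [np.count_nonzero(qh != dct_hash(c)) for c in candidates]
            best = int(np.argmin(ham))
            # 2) accept only if it sits within `slack` times the minimum
            #    Marr-Hildreth distance over all candidates
            mh = [marr_hildreth_distance(query, c) for c in candidates]
            return best if mh[best] <= slack * min(mh) else None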

    Analysis of User Interactions on Social Media in Preventing Hoax Videos and a High-Level Detection Architecture Model

    The spread of hoax news with recycled video content on social media is an extraordinary phenomenon that appears not only among adult users but across all age groups. Its most noticeable effect is division in society, because a video that has already aired or existed before is taken as strong evidence validating the content being viewed. It is therefore important to detect hoax news built on reused video content and to stop its negative effects on individuals and society. This study introduces a high-level detection architecture model for a system that analyses hoax news with reused or recurring video content on social media; the architecture is designed using deep-learning video processing, speech-to-text, and several content-based and context-based features. The spread of hoax content with reused video is expected to be prevented if it can be filtered before it appears on the timeline, and the architecture model is intended to serve as a reference for building a real system.
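
    A purely hypothetical outline of such an architecture, with placeholder component interfaces (none of the names below come from the paper): a reused-video check, speech-to-text and content/context features feed a hoax classifier whose score could be used to filter posts before they reach the timeline.

        # Hypothetical pipeline outline; every component name is a placeholder.
        from dataclasses import dataclass
        from typing import Callable, Sequence

        @dataclass
        class HoaxPipeline:
            video_fingerprinter: Callable[[str], Sequence[float]]   # deep video features
            reuse_index: Callable[[Sequence[float]], bool]          # near-duplicate lookup
            speech_to_text: Callable[[str], str]                    # transcript of audio track
            content_features: Callable[[str, str], Sequence[float]] # video + transcript
            context_features: Callable[[dict], Sequence[float]]     # poster, timing, shares
            classifier: Callable[[Sequence[float]], float]          # hoax probability

            def score(self, video_path: str, post_metadata: dict) -> float:
                emb = self.video_fingerprinter(video_path)
                reused = self.reuse_index(emb)              # has this video appeared before?
                transcript = self.speech_to_text(video_path)
                feats = [float(reused),
                         *self.content_features(video_path, transcript),
                         *self.context_features(post_metadata)]
                return self.classifier(feats)               # filter before it hits the timeline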

    Circulant temporal encoding for video retrieval and temporal alignment

    We address the problem of specific video event retrieval. Given a query video of a specific event, e.g., a concert of Madonna, the goal is to retrieve other videos of the same event that temporally overlap with the query. Our approach encodes the frame descriptors of a video to jointly represent their appearance and temporal order. It exploits the properties of circulant matrices to efficiently compare the videos in the frequency domain. This offers a significant gain in complexity and accurately localizes the matching parts of videos. The descriptors can be compressed in the frequency domain with a product quantizer adapted to complex numbers. In this case, video retrieval is performed without decompressing the descriptors. We also consider the temporal alignment of a set of videos. We exploit the matching confidence and an estimate of the temporal offset computed for all pairs of videos by our retrieval approach. Our robust algorithm aligns the videos on a global timeline by maximizing the set of temporally consistent matches. The global temporal alignment enables synchronous playback of the videos of a given scene.
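
    A minimal NumPy sketch of the frequency-domain comparison idea: circular cross-correlation of two frame-descriptor sequences scores every temporal shift at once. The complex product quantizer and compressed-domain search are omitted here.

        import numpy as np

        def temporal_scores(query_desc, db_desc):
            """query_desc, db_desc: arrays of shape (n_frames, dim), one descriptor
            per frame. Returns one score per circular temporal shift."""
            n = query_desc.shape[0] + db_desc.shape[0]        # zero-pad to avoid wrap-around
            Q = np.fft.rfft(query_desc, n=n, axis=0)
            B = np.fft.rfft(db_desc, n=n, axis=0)
            # correlation theorem, summed over descriptor dimensions
            corr = np.fft.irfft(np.conj(Q) * B, n=n, axis=0).sum(axis=1)
            return corr

        def best_alignment(query_desc, db_desc):
            scores = temporal_scores(query_desc, db_desc)
            shift = int(np.argmax(scores))                    # estimated temporal offset
            return shift, float(scores[shift])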

    Detection of Near-Duplicates Using Error-Correcting Codes

    The detection of near-duplicate items from a large set is a problem faced in many fields. This paper constructs and analyses two algorithms for finding similar pairs from an input dataset. It shows that these algorithms are applicable and efficient in the domain of DNA sequences.
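
    The abstract does not describe the two algorithms, so the sketch below only illustrates the general idea named in the title: decode each 7-bit block of a binary signature with the Hamming(7,4) code and bucket items by their decoded codewords, so signatures that differ by roughly one bit per block tend to collide. Everything here is a generic illustration, not the thesis's construction.

        from collections import defaultdict
        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])   # parity-check matrix; column i encodes position i+1

        def decode_block(bits7):
            """Syndrome-decode one 7-bit block, correcting at most one flipped bit."""
            r = np.array(bits7) % 2
            syndrome = H @ r % 2
            pos = int(syndrome[0] + 2 * syndrome[1] + 4 * syndrome[2])  # 1-based error position
            if pos:
                r[pos - 1] ^= 1
            return tuple(r)

        def bucket_signatures(signatures):
            """signatures: dict id -> 0/1 sequence whose length is a multiple of 7.
            Returns buckets of ids whose signatures decode to the same codewords."""
            buckets = defaultdict(list)
            for key, sig in signatures.items():
                blocks = tuple(decode_block(sig[i:i + 7]) for i in range(0, len(sig), 7))
                buckets[blocks].append(key)
            return buckets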

    Semi-supervised 3D Video Information Retrieval with Deep Neural Network and Bi-directional Dynamic-time Warping Algorithm

    This paper presents a novel semi-supervised deep learning algorithm for retrieving similar 2D and 3D videos based on visual content. The proposed approach combines the power of deep convolutional and recurrent neural networks with dynamic time warping as a similarity measure. The algorithm is designed to handle large video datasets and to retrieve the videos most related to a given query clip based on its graphical frames and contents. We split both the candidate and the query videos into sequences of clips and convert each clip into a representation vector using an autoencoder-backed deep neural network. We then calculate a similarity measure between the sequences of embedding vectors using a bi-directional dynamic time warping method. The approach is tested on multiple public datasets, including CC_WEB_VIDEO, YouTube-8M, S3DIS, and Synthia, and shows good results compared with the state of the art. The algorithm effectively solves video retrieval tasks and outperforms the benchmarked state-of-the-art deep learning model.
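
    A sketch of the retrieval step under two stated simplifications: plain dynamic time warping stands in for the paper's bi-directional variant, and the autoencoder that produces the clip embeddings is assumed to exist elsewhere.

        import numpy as np

        def dtw_distance(a, b):
            """a, b: (n_clips, dim) arrays of clip embeddings."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])   # distance between clip embeddings
                    cost[i, j] = d + min(cost[i - 1, j],      # skip a query clip
                                         cost[i, j - 1],      # skip a candidate clip
                                         cost[i - 1, j - 1])  # align the two clips
            return cost[n, m]

        def retrieve(query_emb, candidates):
            """candidates: dict video_id -> (n_clips, dim) embedding sequence.
            Returns ids ranked by ascending DTW distance to the query."""
            return sorted(candidates, key=lambda vid: dtw_distance(query_emb, candidates[vid]))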

    Integrating Web Videos for Faceted Search Based on Duplicates, Contexts and Rules


    Video copy detection by fast sequence matching

    Sequence matching techniques are effective for comparing two videos. However, existing approaches suffer from demanding computational costs and thus are not scalable for large-scale applications. In this paper we view video copy detection as a local alignment problem between two frame sequences and propose a two-level filtration approach that significantly accelerates the matching process. First, we use an adaptive vocabulary tree to index all frame descriptors extracted from the video database; in this step, each video is treated as a "bag of frames." Such an indexing structure not only provides a rich vocabulary for representing videos, but also enables efficient computation of a pyramid matching kernel between videos. This vocabulary tree filters out videos that are dissimilar to the query based on their histogram pyramid representations. Second, we propose a fast edit-distance-based sequence matching method that avoids unnecessary comparisons between dissimilar frame pairs, reducing the quadratic runtime to linear time with respect to the lengths of the sequences under comparison. Experiments on the MUSCLE VCD benchmark demonstrate that our approach is effective and efficient: it is 18x faster than the original sequence matching algorithms. The technique can also be applied to other visual retrieval tasks, including shape retrieval, and we demonstrate that it achieves a significant speedup on the MPEG-7 shape dataset.
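
    A hedged sketch of the second filtration stage only: a banded edit distance over two frame-descriptor sequences that skips cells far from the diagonal, so the cost grows linearly with sequence length for a fixed band. The vocabulary-tree filter and the paper's exact pruning rule are not reproduced; `band` and `frame_thresh` are made-up parameters.

        import numpy as np

        def banded_edit_distance(q, m, band=10, frame_thresh=0.5):
            """q, m: (n_frames, dim) frame-descriptor arrays. Frames closer than
            `frame_thresh` count as a match; anything else costs one edit."""
            n, k = len(q), len(m)
            INF = float('inf')
            prev = [j if j <= band else INF for j in range(k + 1)]   # row 0: j insertions
            for i in range(1, n + 1):
                cur = [INF] * (k + 1)
                if i <= band:
                    cur[0] = i                                       # column 0: i deletions
                for j in range(max(1, i - band), min(k, i + band) + 1):
                    same = np.linalg.norm(q[i - 1] - m[j - 1]) < frame_thresh
                    cur[j] = min(prev[j - 1] + (0 if same else 1),   # match / substitute
                                 prev[j] + 1,                        # drop a query frame
                                 cur[j - 1] + 1)                     # drop a reference frame
                prev = cur
            return prev[k]                                           # inf means "outside the band"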