Twofold Video Hashing with Automatic Synchronization
Video hashing finds a wide array of applications in content authentication,
robust retrieval and anti-piracy search. While much of the existing research
has focused on extracting robust and secure content descriptors, a significant
open challenge remains: most existing video hashing methods are vulnerable
to temporal desynchronization. That is, when the query video results from
deleting or inserting frames in the reference video, most existing
methods assume the positions of the deleted (or inserted) frames are either
perfectly known or reliably estimated. This assumption may hold under
typical transcoding and frame-rate changes but is highly inappropriate in
adversarial scenarios such as anti-piracy video search. For example, an illegal
uploader may try to bypass the 'piracy check' mechanisms of YouTube, Dailymotion,
and similar platforms by performing a cleverly designed non-uniform resampling of the video. We
present a new solution based on dynamic time warping (DTW), which can implement
automatic synchronization and can be used together with existing video hashing
methods. The second contribution of this paper is to propose a new robust
feature extraction method called flow hashing (FH), based on frame averaging
and optical flow descriptors. Finally, a fusion mechanism called distance
boosting is proposed to combine the information extracted by DTW and FH.
Experiments on real video collections show that such a hash extraction and
comparison enables unprecedented robustness under both spatial and temporal
attacks.

Comment: Submitted to the 2014 21st IEEE International Conference on Image Processing (ICIP)
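The synchronization idea above can be sketched with standard dynamic time warping over sequences of per-frame descriptors. This is a minimal illustration, not the paper's implementation: the descriptors and the Euclidean step cost here are placeholder choices, and the paper combines the alignment with its own hash features.

```python
import numpy as np

def dtw_align(ref, qry):
    """Classic DTW between two sequences of per-frame descriptors
    (one row per frame).  Returns the alignment cost and the warping
    path, which maps query frames onto reference frames even when
    frames have been deleted or inserted."""
    n, m = len(ref), len(qry)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(ref[i - 1] - qry[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],  # frames match
                                 cost[i - 1, j],      # reference frame skipped (deletion)
                                 cost[i, j - 1])      # extra query frame (insertion)
    # Backtrack to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], path[::-1]

# Toy example: the query is the reference with one frame deleted.
ref = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
qry = np.array([[0.0], [1.0], [3.0], [4.0]])
c, path = dtw_align(ref, qry)
```

Because the path realigns the two sequences before hash comparison, the positions of deleted or inserted frames never need to be known in advance, which is exactly the assumption the paper sets out to drop.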
Perceptual Video Hashing for Content Identification and Authentication
Perceptual hashing has been broadly used in the literature to identify similar contents for video copy detection. It has also been adopted to detect malicious manipulations for video authentication. However, targeting both applications with a single system using the same hash would be highly desirable, as this saves storage space and reduces the computational complexity. This paper proposes a perceptual video hashing system for content identification and authentication. The objective is to design a hash extraction technique that can withstand signal processing operations on the one hand and detect malicious attacks on the other. The proposed system relies on a new signal calibration technique for extracting the hash using the discrete cosine transform (DCT) and the discrete sine transform (DST). This consists of determining the number of samples, called the normalizing shift, that is required for shifting a digital signal so that the shifted version matches a certain pattern according to DCT/DST coefficients. The rationale for the calibration idea is that the normalizing shift resists signal processing operations while it exhibits sensitivity to local tampering (i.e., replacing a small portion of the signal with a different one). While the same hash serves both applications, two different similarity measures have been proposed for video identification and authentication, respectively. Through intensive experiments with various types of video distortions and manipulations, the proposed system has been shown to outperform related state-of-the-art video hashing techniques in terms of identification and authentication, with the advantageous ability to locate tampered regions.
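The normalizing-shift idea can be illustrated with a simplified criterion. The sketch below picks the circular shift that maximizes the first AC DCT coefficient; the paper's actual pattern test combines DCT and DST coefficients, so this is only an assumption-laden stand-in. The property exercised is the same, though: the chosen shift undoes any circular translation of the input, so a hash drawn from the aligned signal is translation-invariant.

```python
import numpy as np

def first_ac_dct(x):
    """First AC coefficient of the (unnormalized) DCT-II of x."""
    n = len(x)
    k = np.arange(n)
    return float(np.sum(x * np.cos(np.pi * (k + 0.5) / n)))

def normalizing_shift(signal):
    """Illustrative calibration: return the circular shift whose
    shifted signal maximizes the first AC DCT coefficient.  (The
    paper's pattern criterion over joint DCT/DST coefficients
    differs; this simplified one is for demonstration only.)"""
    vals = [first_ac_dct(np.roll(signal, s)) for s in range(len(signal))]
    return int(np.argmax(vals))

t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
x = np.sin(t) + 0.5 * np.sin(2.0 * t + 1.0)   # arbitrary test signal
s0 = normalizing_shift(x)
# Circularly shifting the input by k changes the normalizing shift by
# -k (mod n), so the re-aligned signal is unchanged.
s7 = normalizing_shift(np.roll(x, 7))
```

Local tampering, by contrast, perturbs the DCT coefficients themselves and therefore tends to change the selected shift, which is the sensitivity the authentication side of the system relies on.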
Approximate Nearest Neighbor Fields in Video
We introduce RIANN (Ring Intersection Approximate Nearest Neighbor search),
an algorithm for matching patches of a video to a set of reference patches in
real-time. For each query, RIANN finds potential matches by intersecting rings
around key points in appearance space. Its search complexity is inversely
correlated to the amount of temporal change, making it a good fit for videos,
where typically most patches change slowly with time. Experiments show that
RIANN is up to two orders of magnitude faster than previous ANN methods, and is
the only solution that operates in real-time. We further demonstrate how RIANN
can be used for real-time video processing and provide examples for a range of
real-time video applications, including colorization, denoising, and several
artistic effects.

Comment: A CVPR 2015 oral paper
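The ring-intersection idea can be sketched as follows. This is a simplified illustration under assumed details (plain Euclidean descriptors, a fixed ring width eps), not RIANN itself: by the triangle inequality, any patch within distance eps of the query must lie in the ring [r - eps, r + eps] around every key point, where r is the query's distance to that key point, so intersecting rings yields a small candidate set to scan exhaustively.

```python
import numpy as np

def build_ring_index(refs, keys):
    """Precompute and sort each reference patch's distance to every
    key point, so a ring [r - eps, r + eps] around a key point can be
    sliced out with two binary searches."""
    dists = np.linalg.norm(refs[:, None, :] - keys[None, :, :], axis=2)
    order = np.argsort(dists, axis=0)                 # per-key patch ordering
    sorted_d = np.take_along_axis(dists, order, axis=0)
    return order, sorted_d

def ring_intersection_nn(q, refs, keys, order, sorted_d, eps):
    """Intersect the rings around all key points, then scan the
    surviving candidates exhaustively.  Assumes eps upper-bounds the
    true nearest-neighbor distance, so the candidate set is non-empty."""
    cand = None
    for k in range(keys.shape[0]):
        r = np.linalg.norm(q - keys[k])
        lo = np.searchsorted(sorted_d[:, k], r - eps)
        hi = np.searchsorted(sorted_d[:, k], r + eps)
        ring = set(order[lo:hi, k].tolist())
        cand = ring if cand is None else cand & ring
    return min(cand, key=lambda i: np.linalg.norm(q - refs[i]))

rng = np.random.default_rng(0)
refs = rng.normal(size=(500, 16))             # reference patch descriptors
keys = rng.normal(size=(4, 16))               # key points in appearance space
order, sorted_d = build_ring_index(refs, keys)
q = refs[42] + 0.01 * rng.normal(size=16)     # query close to patch 42
best = ring_intersection_nn(q, refs, keys, order, sorted_d, eps=0.5)
```

The temporal advantage described in the abstract follows naturally: when consecutive video patches change little, the query's ring boundaries barely move between frames, so most of the per-query work can be reused.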