
    Robust Image Hashing Based Efficient Authentication for Smart Industrial Environment

    Due to the large volume of multimedia content and the high variability of editing tools, protecting multimedia content and ensuring its privacy and authenticity has become an increasingly important issue in the cyber-physical security of industrial environments, especially industrial surveillance. Approaches that authenticate images using their principal content have emerged as popular authentication techniques in industrial video surveillance applications, but maintaining a good tradeoff between perceptual robustness and discrimination is the key research challenge in image hashing. In this paper, a robust image hashing method is proposed for efficient authentication of keyframes extracted from surveillance video data. A novel feature extraction strategy is employed that extracts two important features: the positions of edge-rich and nonzero low-edge blocks, and the dominant discrete cosine transform (DCT) coefficients of the corresponding edge-rich blocks, keeping the computational cost at a minimum. Extensive experiments conducted from different perspectives suggest that the proposed approach provides a trustworthy and secure way of transmitting multimedia data over surveillance networks. Further, the results vindicate the suitability of the proposal for real-time authentication and embedded security in smart industrial applications compared with state-of-the-art methods.

    This work was supported in part by the National Natural Science Foundation of China under Grant 61976120, in part by the Natural Science Foundation of Jiangsu Province under Grant BK20191445, in part by the Six Talent Peaks Project of Jiangsu Province under Grant XYDXXJS-048, and sponsored by the Qing Lan Project of Jiangsu Province, China.

    Sajjad, M.; Ul Haq, I.; Lloret, J.; Ding, W.; Muhammad, K. (2019). Robust Image Hashing Based Efficient Authentication for Smart Industrial Environment. IEEE Transactions on Industrial Informatics, 15(12), 6541-6550. https://doi.org/10.1109/TII.2019.2921652
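    As an illustration of the kind of pipeline this abstract describes (edge-rich block selection followed by dominant DCT coefficients), the sketch below hashes a grayscale keyframe block by block. The block size, Canny thresholds, edge-density cutoff, and number of kept coefficients are assumptions made for the example, not the authors' parameters.

```python
import cv2
import numpy as np

def block_dct_hash(gray, block=32, edge_thresh=0.05, n_coeffs=9):
    """Hash a grayscale keyframe from dominant DCT coefficients of its edge-rich blocks."""
    edges = cv2.Canny(gray, 100, 200)                # rough edge map of the frame
    bits = []
    h, w = gray.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            density = edges[y:y + block, x:x + block].mean() / 255.0
            if density < edge_thresh:
                continue                              # skip low-edge blocks
            patch = np.float32(gray[y:y + block, x:x + block])
            coeffs = cv2.dct(patch)
            # keep a few low-frequency (dominant) coefficients, skipping the DC term
            dom = coeffs[:4, :4].flatten()[1:n_coeffs + 1]
            bits.extend((dom > dom.mean()).astype(np.uint8))  # binarize against the block mean
    return np.array(bits, dtype=np.uint8)

def hamming(h1, h2):
    """Compare two hashes bitwise; smaller distance means more similar keyframes."""
    n = min(len(h1), len(h2))
    return int(np.count_nonzero(h1[:n] != h2[:n]))
```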

    Multiple Hashing Integration for Real-Time Large Scale Part-to-Part Video Matching

    A real-time, large-scale, part-to-part video matching algorithm, based on the cross-correlation of intensity-of-motion curves, is proposed with a view to originality recognition, video database cleansing, copyright enforcement, video tagging, and video result re-ranking. Moreover, it is suggested how the most representative hashes and distance functions (strada, discrete cosine transform, Marr-Hildreth, and radial) should be integrated so that the matching algorithm is invariant against blur, compression, and rotation distortions: blur with (R, σ) ∈ [1; 20] × [1; 8], compression from 512×512 down to 32×32 pixels², and rotation from 10° to 180°. The DCT hash is invariant against blur and against compression down to 64×64 pixels². Although its performance against rotation is the best, with a success rate of up to 70%, it should be combined with the Marr-Hildreth distance function: the image selected by the DCT hash should be at a distance lower than 1.15 times the Marr-Hildreth minimum distance.
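    Below is a hedged sketch of the combination rule described above: the DCT perceptual hash nominates the closest frame, and the match is accepted only if its Marr-Hildreth distance stays within 1.15 times the Marr-Hildreth minimum over all candidates. The marr_hildreth_distance shown is a simplified Laplacian-of-Gaussian stand-in rather than the paper's MH hash, imagehash.phash is a generic DCT hash, and the sigma and resize resolution are assumptions.

```python
from PIL import Image
from scipy.ndimage import gaussian_laplace
import imagehash
import numpy as np

def marr_hildreth_distance(img_a, img_b, sigma=2.0, size=(128, 128)):
    """Simplified Marr-Hildreth-style distance: normalized disagreement of
    Laplacian-of-Gaussian sign maps (a stand-in for a real MH digest)."""
    def sign_map(img):
        gray = np.asarray(img.convert("L").resize(size), dtype=np.float32)
        return (gaussian_laplace(gray, sigma) > 0).astype(np.uint8)
    a, b = sign_map(img_a), sign_map(img_b)
    return float(np.count_nonzero(a != b)) / a.size

def match_frame(query_path, candidate_paths, ratio=1.15):
    query = Image.open(query_path)
    q_dct = imagehash.phash(query)  # standard DCT perceptual hash

    # 1) the DCT hash nominates the closest candidate
    best_path = min(candidate_paths,
                    key=lambda p: q_dct - imagehash.phash(Image.open(p)))

    # 2) the Marr-Hildreth distances act as the acceptance criterion
    mh = {p: marr_hildreth_distance(query, Image.open(p)) for p in candidate_paths}
    if mh[best_path] <= ratio * min(mh.values()):
        return best_path
    return None  # DCT pick not confirmed by the Marr-Hildreth rule
```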

    NeuralHash for Privacy Preserving Image Analysis

    This thesis investigates how Apple's NeuralHash algorithm can be used in the context of facial recognition to improve the privacy of facial recognition systems. Existing facial recognition solutions rely on having facial images available to match identities; however, this can impair the privacy of individuals, as the images can contain sensitive information that the individuals do not want to share. In this thesis, the NeuralHash algorithm is used to hash facial images of subjects in the ColorFERET dataset, and the NeuralHashes are compared in an attempt to identify same-subject and different-subject pairs. The NeuralHash algorithm's ability to hide information is also investigated, in addition to collision and evasion attacks on NeuralHash. The results show that with a threshold of approximately 0.24, the false acceptance rate and false rejection rate are both 9.68%. If the threshold is set to 0.1, the false acceptance rate drops to 0.16%, while the false rejection rate rises to 31.45%. Some general information about images, such as gender, can be inferred from the NeuralHash, while more nuanced information is not retrievable. Gradient-based attacks can be used against NeuralHash both to evade collisions and to force collisions with a target NeuralHash.
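    A minimal sketch of the thresholded comparison described above: two NeuralHashes are treated as bit strings and declared a match when their normalized Hamming distance falls below a threshold such as the 0.24 or 0.1 mentioned in the abstract. The byte-string hash representation is an assumption about the output format; the FAR/FRR helper simply counts threshold errors over labeled genuine and impostor pairs.

```python
import numpy as np

def to_bits(h: bytes) -> np.ndarray:
    """Unpack a NeuralHash byte string into an array of individual bits."""
    return np.unpackbits(np.frombuffer(h, dtype=np.uint8))

def normalized_hamming(h1: bytes, h2: bytes) -> float:
    b1, b2 = to_bits(h1), to_bits(h2)
    return float(np.count_nonzero(b1 != b2)) / len(b1)

def same_subject(h1: bytes, h2: bytes, threshold: float = 0.24) -> bool:
    # accept as the same subject when the hashes are close enough
    return normalized_hamming(h1, h2) < threshold

def far_frr(genuine_dists, impostor_dists, threshold):
    """FAR = fraction of impostor pairs wrongly accepted; FRR = fraction of
    genuine pairs wrongly rejected. Inputs are normalized Hamming distances."""
    far = sum(d < threshold for d in impostor_dists) / len(impostor_dists)
    frr = sum(d >= threshold for d in genuine_dists) / len(genuine_dists)
    return far, frr
```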

    A Video Summarization Approach to Speed-up the Analysis of Child Sexual Exploitation Material

    Identifying key content from a video is essential for many security applications, such as motion/action detection, person re-identification, and recognition. Moreover, summarizing the key information from Child Sexual Exploitation Material, especially videos, which mainly contain distinctive scenes including people's faces, is crucial to speeding up the investigations of Law Enforcement Agencies. In this paper, we present a video summarization strategy that combines perceptual hashing and face detection algorithms to keep the most relevant frames of a video containing people's faces that may correspond to victims or offenders. Due to legal constraints on accessing Child Sexual Abuse datasets, we evaluated the performance of the proposed strategy on the detection of adult pornography content with the NDPI-800 dataset. We also assessed the capability of our strategy to create video summaries preserving frames with distinctive faces from the original video, using ten additional manually labeled short videos. Results show that our approach can detect pornography content with an accuracy of 84.15% at a speed of 8.05 ms/frame, making it appropriate for real-time applications.

    This work was supported by the framework agreement between the Universidad de León and INCIBE (Spanish National Cybersecurity Institute) under Addendum 01. This research has also been funded with support from the European Commission under the 4NSEEK project with Grant Agreement 821966. This publication reflects the views only of the authors, and the European Commission cannot be held responsible for any use which may be made of the information contained therein. Finally, we acknowledge the NVIDIA Corporation for the donation of the TITAN Xp GPU.
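    To make the combination of face detection and perceptual hashing concrete, here is an illustrative sketch that keeps only frames containing a detected face and whose perceptual hash is sufficiently far from every frame already kept, so near-duplicate face frames collapse into one. The Haar-cascade detector and the Hamming-distance threshold of 12 are placeholder choices for the sketch, not the detectors or parameters used in the paper.

```python
import cv2
import imagehash
from PIL import Image

def summarize(video_path, hash_dist=12):
    """Return a list of distinctive frames (BGR arrays) that contain a face."""
    face_det = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    kept_frames, kept_hashes = [], []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if len(face_det.detectMultiScale(gray, 1.1, 5)) == 0:
            continue  # no face detected: frame is not relevant to the summary
        h = imagehash.phash(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
        if all(h - kh > hash_dist for kh in kept_hashes):
            kept_frames.append(frame)   # distinct enough from everything kept so far
            kept_hashes.append(h)
    cap.release()
    return kept_frames
```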

    Two Decades of Colorization and Decolorization for Images and Videos

    Colorization is a computer-aided process that aims to give color to a gray image or video. It can be used to enhance black-and-white material, including black-and-white photos, old films, and scientific imaging results. Conversely, decolorization converts a color image or video into a grayscale one. A grayscale image or video carries only brightness information, without color information, and is the basis of downstream image processing applications such as pattern recognition, image segmentation, and image enhancement. Unlike image decolorization, video decolorization must not only preserve image contrast within each video frame but also respect the temporal and spatial consistency between frames. Researchers have devoted considerable effort to developing decolorization methods that balance spatial-temporal consistency and algorithm efficiency. With the prevalence of digital cameras and mobile phones, image and video colorization and decolorization have received more and more attention from researchers. This paper gives an overview of the progress of image and video colorization and decolorization methods over the last two decades.

    Comment: 12 pages, 19 figures
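    As a point of reference for the decolorization discussion above, the sketch below shows the naive per-pixel luminance mapping (fixed Rec. 601 weights), i.e. the baseline that contrast-preserving and temporally consistent methods surveyed in the paper aim to improve on; the weight values are standard, but their use here is only illustrative.

```python
import numpy as np

def naive_decolorize(rgb: np.ndarray) -> np.ndarray:
    """Map an H x W x 3 RGB image to single-channel grayscale with fixed luma weights."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)  # Rec. 601 coefficients
    return np.tensordot(rgb.astype(np.float32), weights, axes=([-1], [0]))
```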