    T2CI-GAN: Text to Compressed Image generation using Generative Adversarial Network

    The problem of generating textual descriptions for visual data has gained research attention in recent years. In contrast, the problem of generating visual data from textual descriptions remains very challenging, because it requires combining Natural Language Processing (NLP) and Computer Vision techniques. Existing methods utilize Generative Adversarial Networks (GANs) and generate uncompressed images from textual descriptions. However, in practice, most visual data are processed and transmitted in compressed representations. Hence, the proposed work attempts to generate the visual data directly in compressed representation form using Deep Convolutional GANs (DCGANs) to achieve storage and computational efficiency. We propose two GAN models for compressed image generation from text. The first model is trained directly on JPEG-compressed DCT images (compressed domain) to generate compressed images from text descriptions. The second model is trained on RGB images (pixel domain) to generate JPEG-compressed DCT representations from text descriptions. The proposed models are tested on the open-source benchmark dataset of Oxford-102 Flower images, using both RGB and JPEG-compressed versions, and achieve state-of-the-art performance in the JPEG compressed domain. The code will be publicly released on GitHub after acceptance of the paper. Comment: Accepted for publication at IAPR's 6th CVIP 202
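    The "JPEG-compressed DCT representation" that the models above target is the grid of quantized 8x8 DCT blocks that a JPEG file actually stores. As a minimal sketch of how a grayscale image maps into that domain (the per-block loop and the use of the standard luminance quantization table at quality 50 are illustrative assumptions, not the paper's pipeline):

    ```python
    import numpy as np

    def dct_matrix(n=8):
        # Orthonormal DCT-II basis matrix: C[i, j] = a(i) * cos(pi*(2j+1)*i / 2n)
        k = np.arange(n)
        C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        C[0, :] *= 1 / np.sqrt(2)
        return C * np.sqrt(2 / n)

    # Standard JPEG luminance quantization table (Annex K, quality 50)
    Q50 = np.array([
        [16, 11, 10, 16, 24, 40, 51, 61],
        [12, 12, 14, 19, 26, 58, 60, 55],
        [14, 13, 16, 24, 40, 57, 69, 56],
        [14, 17, 22, 29, 51, 87, 80, 62],
        [18, 22, 37, 56, 68, 109, 103, 77],
        [24, 35, 55, 64, 81, 104, 113, 92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103, 99],
    ], dtype=np.float64)

    def jpeg_dct_blocks(img):
        """Map an 8-bit grayscale image (H, W multiples of 8) to quantized DCT blocks."""
        C = dct_matrix()
        x = img.astype(np.float64) - 128.0  # JPEG level shift
        out = np.empty_like(x)
        h, w = x.shape
        for i in range(0, h, 8):
            for j in range(0, w, 8):
                b = x[i:i + 8, j:j + 8]
                out[i:i + 8, j:j + 8] = np.round(C @ b @ C.T / Q50)
        return out
    ```

    A GAN operating in this domain emits these quantized coefficient blocks directly, skipping the pixel-domain decode/re-encode round trip.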

    Digital Forensic Technique for Multiple Compression based JPEG Forgery

    In today's digital world, digital multimedia such as images, voice notes, and videos are the major medium of information exchange. The authenticity of such multimedia is vital in legal matters, the media world, and the broadcast industry. However, the enormous proliferation of cheap, easy-to-use data manipulation tools and software has called the faithfulness of digital images into question. In our work, we propose a technique to identify digital forgery or tampering in JPEG (Joint Photographic Experts Group) images based on multiple-compression traces. We deal with JPEG images because JPEG is the standard storage format used in almost all present-day digital devices, such as digital cameras, camcorders, mobile devices, and other image acquisition devices. JPEG is a lossy compression standard, and it compresses an image as far as practical to manage storage requirements. When an attacker or criminal modifies some region of a JPEG image with an image processing tool and saves it, the modified region of the image is doubly compressed. In our work, we exploit this multiple compression in JPEG images to detect digital forgery or falsification.
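    The double-compression trace such techniques rely on can be shown on synthetic data: quantizing DCT-like coefficients with one step size and then requantizing with another leaves characteristic empty bins in the coefficient histogram, while single compression does not. A minimal sketch, where the Laplacian coefficient model and the step sizes 7 and 5 are assumptions chosen purely for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    coeffs = rng.laplace(scale=20.0, size=100_000)  # model of AC DCT coefficients

    def quantize_cycle(x, q):
        # Quantize then dequantize with step q, as one JPEG save/load does
        return np.round(x / q) * q

    once = quantize_cycle(coeffs, 5)                       # single compression, step 5
    twice = quantize_cycle(quantize_cycle(coeffs, 7), 5)   # step 7 first, then step 5

    # Histograms over integer quantization levels -20..20 (level v at index v + 20).
    # Double compression leaves periodic empty bins: no multiple of 7 rounds to
    # level 2 (value 10) under step 5, so that bin is empty only in `twice`.
    edges = np.arange(-20.5, 21.5)
    hist_once, _ = np.histogram(once / 5, bins=edges)
    hist_twice, _ = np.histogram(twice / 5, bins=edges)
    ```

    A forgery detector can then flag regions whose local coefficient histograms show these gaps while the rest of the image does not.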

    Aligned and Non-Aligned Double JPEG Detection Using Convolutional Neural Networks

    Due to the wide diffusion of the JPEG coding standard, the image forensics community has devoted significant attention over the years to the development of double JPEG (DJPEG) compression detectors. The ability to detect whether an image has been compressed twice provides paramount information toward image authenticity assessment. Given the success recently achieved by convolutional neural networks (CNNs) in many computer vision tasks, in this paper we propose to use CNNs for aligned and non-aligned double JPEG compression detection. In particular, we explore the capability of CNNs to capture DJPEG artifacts directly from images. Results show that the proposed CNN-based detectors achieve good performance even on small images (i.e., 64x64), outperforming state-of-the-art solutions, especially in the non-aligned case. Good results are also achieved in the commonly recognized challenging case in which the first quality factor is larger than the second one. Comment: Submitted to Journal of Visual Communication and Image Representation (first submission: March 20, 2017; second submission: August 2, 2017)
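    To make the setup concrete, here is a minimal PyTorch sketch of the kind of small CNN such a detector could apply to 64x64 patches; the layer sizes and the two-class head (single vs. double compressed) are illustrative assumptions, not the architecture from the paper:

    ```python
    import torch
    import torch.nn as nn

    class DJPEGDetector(nn.Module):
        """Toy binary classifier for 64x64 grayscale patches (illustrative only)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 64 -> 32
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),   # 32 -> 16
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
                nn.Linear(128, 2),  # logits: single vs. double compressed
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = DJPEGDetector()
    logits = model(torch.zeros(4, 1, 64, 64))  # batch of four 64x64 patches
    ```

    Trained with cross-entropy on patches labeled single- or double-compressed, such a network can learn DJPEG artifacts directly from pixels rather than from hand-crafted histogram features.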