7 research outputs found

    Restoration of the JPEG Maximum Lossy Compressed Face Images with Hourglass Block based on Early Stopping Discriminator

    When a JPEG image is compressed with a lossy method at a high compression rate, blocking artifacts can appear, making it necessary to restore the image to its original quality. Restoring compressed images that are no longer recognizable is a particularly challenging problem. This paper therefore addresses the restoration of JPEG images that have suffered significant loss from maximum compression, using a GAN-based network. The generator in this network is based on the U-Net architecture and features a newly presented hourglass structure that preserves the characteristics of deep layers. Additionally, the network incorporates two loss functions, LF Loss and HF Loss, to generate natural, high-quality images. HF Loss uses a pretrained VGG-16 network and is configured with the specific layer that best represents features, which enhances performance in the high-frequency region; LF Loss, on the other hand, handles the low-frequency region. Together, these two loss functions drive the generator to produce images that deceive the discriminator while accurately reconstructing both high- and low-frequency regions. The results show that the blocking artifacts in heavily compressed images were removed and recognizable identities were generated. This study represents a significant improvement over previous research in image restoration performance.
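    The two-loss idea above can be sketched minimally in NumPy. This is an illustrative reading, not the paper's implementation: a box blur stands in for whatever low-frequency extraction LF Loss uses, and a generic `feat` callable stands in for the chosen VGG-16 feature layer behind HF Loss; the weighting `alpha` is also an assumption.

    ```python
    import numpy as np

    def box_blur(img, k=4):
        """Hypothetical low-pass filter: average over k x k blocks
        (stand-in for the paper's low-frequency extraction)."""
        h, w = img.shape
        out = img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
        return out.mean(axis=(1, 3))

    def lf_loss(gen, ref, k=4):
        # L1 distance between low-pass versions (low-frequency region)
        return np.abs(box_blur(gen, k) - box_blur(ref, k)).mean()

    def hf_loss(gen, ref, feat):
        # L2 distance in a feature space; `feat` stands in for the
        # fixed pretrained VGG-16 layer the paper selects
        return ((feat(gen) - feat(ref)) ** 2).mean()

    def combined_loss(gen, ref, feat, alpha=0.5):
        # Weighted sum of the two terms (weighting is an assumption)
        return alpha * lf_loss(gen, ref) + (1 - alpha) * hf_loss(gen, ref, feat)
    ```

    In training, both terms would be added to the adversarial loss so the generator is penalized separately for low-frequency structure and high-frequency detail.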

    Hybrid information security system via combination of compression, cryptography, and image steganography

    Today, the world is experiencing a new paradigm of dynamism and rapid change driven by revolutions in information and digital communication technologies, which has raised many concerns about the security and capacity of information transmitted over the Internet. Cryptography and steganography are two of the most extensively used techniques for ensuring information security, but neither technique alone is sufficient for high security. In this paper, we therefore propose a new system for hiding information within an image to optimize both security and capacity. The system proceeds in a sequence of steps: compressing the secret image with the discrete wavelet transform (DWT) algorithm, encrypting the compressed data with the advanced encryption standard (AES) algorithm, and hiding the encrypted data with the least significant bit (LSB) technique. The results show that the proposed system optimizes stego-image quality (PSNR of 47.8 dB) and structural similarity (SSIM of 0.92). In addition, the experiments showed that the combination of techniques maintains stego-image quality by 68%, improves system performance by 44%, and increases the size of the secret data compared to using each technique alone. This study may contribute to solving the problem of the security and capacity of information sent over the Internet.
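    The final embedding step of the pipeline above can be sketched in NumPy. This is a generic LSB scheme, not the paper's exact code: the DWT compression and AES encryption stages are assumed to have already produced `payload_bits`, and the bit ordering and capacity check are illustrative choices.

    ```python
    import numpy as np

    def lsb_embed(cover, payload_bits):
        """Hide a bit sequence in the least significant bits of a
        uint8 cover array (sketch of the pipeline's last stage)."""
        stego = cover.copy()
        flat = stego.ravel()
        bits = np.asarray(payload_bits, dtype=np.uint8)
        if bits.size > flat.size:
            raise ValueError("payload too large for cover")
        # Clear each pixel's lowest bit, then OR in the payload bit
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
        return stego

    def lsb_extract(stego, n_bits):
        """Recover the first n_bits hidden bits from the stego array."""
        return (stego.ravel()[:n_bits] & 1).tolist()
    ```

    Because only the lowest bit of each byte changes, every pixel value moves by at most 1, which is why LSB embedding preserves PSNR well; the receiver would then AES-decrypt and inverse-DWT the extracted bits.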

    GOLLIC: Learning Global Context beyond Patches for Lossless High-Resolution Image Compression

    Neural-network-based approaches have recently emerged in the field of data compression and have already led to significant progress in image compression, especially in achieving higher compression ratios. In the lossless image compression scenario, however, existing methods often struggle to learn a probability model of full-size high-resolution images because of limited computational resources. The current strategy is to crop high-resolution images into multiple non-overlapping patches and process them independently, which ignores long-term dependencies beyond patches and limits modeling performance. To address this problem, we propose a hierarchical latent variable model with a global context to capture the long-term dependencies of high-resolution images. Besides the latent variable unique to each patch, we introduce shared latent variables between patches to construct the global context. The shared latent variables are extracted by a self-supervised clustering module inside the model's encoder; this module assigns each patch a confidence that it belongs to each cluster. The shared latent variables are then learned from the patch latent variables and their confidences, which reflects the similarity of patches in the same cluster and benefits global context modeling. Experimental results show that our global context model improves the compression ratio over engineered codecs and deep learning models on three benchmark high-resolution image datasets: DIV2K, CLIC.pro, and CLIC.mobile.
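    The confidence-weighted construction of shared latents described above can be sketched as follows. This is one plausible reading of the abstract, not the paper's actual encoder: the shapes, the soft-assignment normalization, and the weighted-average form are all assumptions.

    ```python
    import numpy as np

    def shared_latents(patch_latents, confidences):
        """Build K shared (global-context) latents from P patch latents.

        patch_latents: (P, D) array, one latent vector per patch.
        confidences:   (P, K) array, soft assignment of each patch to
                       each of K clusters (non-negative).
        Returns a (K, D) array: each shared latent is the confidence-
        weighted average of the patch latents assigned to that cluster.
        """
        # Normalize so each cluster's weights over patches sum to 1
        w = confidences / confidences.sum(axis=0, keepdims=True)
        return w.T @ patch_latents
    ```

    With one-hot confidences this reduces to a per-cluster mean, so similar patches in the same cluster contribute to a common context vector that a decoder could condition on alongside each patch's own latent.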