
    Compressible and Learnable Encryption for Untrusted Cloud Environments

    With the wide and rapid spread of distributed systems for information processing, such as cloud computing and social networking, not only transmission but also processing is done on the internet, and many studies on secure, efficient, and flexible communications have been reported. Moreover, machine learning and deep learning algorithms require huge training data sets to obtain high performance, but collecting enough training data while maintaining people's privacy is costly; few people want their personal data included in datasets that providers can inspect directly. Full encryption with a state-of-the-art cipher (such as RSA or AES) is the most secure option for protecting multimedia data. In cloud environments, however, data must be computed and manipulated somewhere on the internet, so many multimedia applications seek a trade-off in security to satisfy other requirements, e.g., low processing demands and the ability to process and learn in the encrypted domain. Accordingly, we first focus on compressible image encryption schemes, which have been proposed for encryption-then-compression (EtC) systems, whereas the traditional approach to secure image transmission is a compression-then-encryption (CtE) system. EtC systems allow us to avoid disclosing unencrypted images to network providers, because encrypted images can be directly compressed even when they are recompressed multiple times by providers. Next, we address the issue of learnable encryption. Cloud computing and machine learning are widely used in many fields, but they pose serious issues for end users, such as unauthorized access, data leaks, and privacy compromise, due to the unreliability of providers and occasional accidents.
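    As a rough sketch of the block-based idea behind compressible (EtC) encryption, the toy cipher below permutes and rotates fixed-size tiles under a key-seeded PRNG. The 16-pixel block size and the particular per-block operations are illustrative assumptions, not this survey's exact scheme; the point is that, because JPEG also operates block-wise, such ciphertexts remain compressible.

```python
import numpy as np

def etc_encrypt(img: np.ndarray, key: int, block: int = 16) -> np.ndarray:
    """Toy block-scrambling cipher in the spirit of EtC systems.

    Tiles the image into block x block pieces, then (1) permutes the
    tiles and (2) rotates each one, all driven by a key-seeded PRNG.
    Block size and the set of operations are illustrative assumptions.
    """
    rng = np.random.default_rng(key)
    h = img.shape[0] - img.shape[0] % block
    w = img.shape[1] - img.shape[1] % block
    img = img[:h, :w].copy()                  # crop to a multiple of the block size

    tiles = [img[y:y + block, x:x + block]
             for y in range(0, h, block) for x in range(0, w, block)]
    order = rng.permutation(len(tiles))       # (1) secret tile permutation
    tiles = [np.rot90(tiles[i], k=int(rng.integers(4))) for i in order]  # (2) rotation

    out = np.zeros_like(img)
    it = iter(tiles)
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = next(it)
    return out
```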

    Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion

    We propose a novel method for adjusting luminance for multi-exposure image fusion, together with two novel scene segmentation approaches based on the luminance distribution. Multi-exposure image fusion produces images that are expected to be more informative and perceptually appealing than any of the inputs by directly fusing photos taken with different exposures. However, existing fusion methods often produce unclear fused images when the input images do not cover a sufficient number of exposure levels. In this paper, we point out that adjusting the luminance of the input images makes it possible to improve the quality of the final fused images; this insight is the basis of the proposed method, which enables us to produce high-quality images even when undesirable inputs are given. Visual comparisons show that the proposed method can produce images that clearly represent a whole scene. In addition, multi-exposure image fusion with the proposed method outperforms state-of-the-art fusion methods in terms of MEF-SSIM, discrete entropy, tone-mapped image quality index, and statistical naturalness. Comment: will be published in IEEE Transactions on Image Processing
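    A minimal sketch of the segmentation-then-adjustment idea, assuming a simple median threshold on luminance and a mid-gray target; the paper's two segmentation approaches and its scaling rules are more elaborate than this.

```python
import numpy as np

def adjust_by_segments(lum: np.ndarray, target: float = 0.5) -> np.ndarray:
    """Toy luminance adjustment: split the scene into a dark and a
    bright segment at the median luminance, then scale each segment
    so its mean moves toward a mid-gray target. The threshold and the
    target value are illustrative choices only.
    """
    thresh = np.median(lum)
    out = lum.copy()
    for mask in (lum <= thresh, lum > thresh):
        if mask.any():
            mean = float(lum[mask].mean())
            out[mask] = np.clip(lum[mask] * (target / max(mean, 1e-6)), 0.0, 1.0)
    return out
```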

    Convolutional Neural Networks Considering Local and Global Features for Image Enhancement

    In this paper, we propose a novel convolutional neural network (CNN) architecture that considers both local and global features for image enhancement. Most conventional image enhancement methods, including Retinex-based ones, cannot restore pixel values lost to clipping and quantizing. CNN-based methods have recently been proposed to solve this problem, but their performance is still limited because their network architectures do not handle global features. To handle both local and global features, the proposed architecture consists of three networks: a local encoder, a global encoder, and a decoder. In addition, high dynamic range (HDR) images are used to generate training data for our networks; the use of HDR images makes it possible to train CNNs with better-quality images than those captured directly with cameras. Experimental results show that the proposed method produces higher-quality images than conventional image enhancement methods, including CNN-based ones, in terms of various objective quality metrics: TMQI, entropy, NIQE, and BRISQUE. Comment: To appear in Proc. ICIP2019. arXiv admin note: text overlap with arXiv:1901.0568
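    A minimal PyTorch sketch of the three-network layout named above (local encoder, global encoder, decoder); the channel counts, depths, and fusion by concatenation are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LocalGlobalEnhancer(nn.Module):
    """Sketch: a full-resolution local encoder, a global encoder that
    pools down to one feature vector per image, and a decoder that
    fuses both feature streams. Sizes are illustrative only."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.local_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.global_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))          # image-level (global) features
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        loc = self.local_enc(x)               # (N, ch, H, W) local features
        glb = self.global_enc(x)              # (N, ch, 1, 1) global features
        glb = glb.expand(-1, -1, loc.size(2), loc.size(3))  # broadcast over space
        return self.decoder(torch.cat([loc, glb], dim=1))
```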

    Encryption Inspired Adversarial Defense for Visual Classification

    Conventional adversarial defenses reduce classification accuracy whether or not a model is under attack. Moreover, most image-processing-based defenses are defeated by the problem of obfuscated gradients. In this paper, we propose a new adversarial defense, a defensive transform applied to both training and test images, inspired by perceptual image encryption methods. The proposed method utilizes block-wise pixel shuffling with a secret key. Experiments are carried out on both adaptive and non-adaptive maximum-norm-bounded white-box attacks while considering obfuscated gradients. The results show that the proposed defense achieves high accuracy on the CIFAR-10 dataset: 91.55% on clean images and 89.66% on adversarial examples with a noise distance of 8/255. Thus, the proposed defense outperforms state-of-the-art adversarial defenses, including latent adversarial training, adversarial training, and thermometer encoding. Comment: To appear in the 27th IEEE International Conference on Image Processing (ICIP 2020)
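    A minimal sketch of block-wise pixel shuffling with a secret key, assuming 4x4 blocks, an HxWxC image, and the same key-seeded permutation in every block; the paper's exact transform may differ in block size and per-block operations.

```python
import numpy as np

def keyed_pixel_shuffle(img: np.ndarray, key: int, block: int = 4) -> np.ndarray:
    """Apply one key-seeded permutation to the pixels inside every
    block x block tile. The same transform is applied to training and
    test images, so only key holders see consistently ordered pixels.
    """
    perm = np.random.default_rng(key).permutation(block * block)
    h, w, c = img.shape
    out = img.copy()
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            tile = out[y:y + block, x:x + block].reshape(block * block, c)
            out[y:y + block, x:x + block] = tile[perm].reshape(block, block, c)
    return out
```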

    HOG feature extraction from encrypted images for privacy-preserving machine learning

    In this paper, we propose a method for extracting HOG (histograms of oriented gradients) features from encryption-then-compression (EtC) images for privacy-preserving machine learning, where EtC images are images encrypted by a block-based encryption method proposed for EtC systems with JPEG compression, and HOG is a feature descriptor used in computer vision for object detection and image classification. Recently, cloud computing and machine learning have been spreading to many fields. However, cloud computing has serious privacy issues for end users due to the unreliability of providers and occasional accidents. Accordingly, we propose a novel block-based extraction method for HOG features, which enables machine learning algorithms to be carried out without any effect on performance, under some conditions. In an experiment, the proposed method is applied to a face image recognition problem with two kinds of classifiers, linear support vector machines (SVMs) and Gaussian SVMs, to demonstrate its effectiveness. Comment: To appear in The 4th IEEE International Conference on Consumer Electronics (ICCE) Asia, Bangkok, Thailand
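    To make the block-wise idea concrete, here is a toy per-block HOG on a grayscale image: because EtC encryption operates on whole blocks, per-block descriptors are a natural unit. The cell size and bin count below are common HOG defaults, not the paper's settings, and this omits the EtC-specific details that make the descriptors compatible with the encryption.

```python
import numpy as np

def block_hog(img: np.ndarray, block: int = 16, bins: int = 9) -> np.ndarray:
    """Toy block-wise HOG: one unsigned gradient-orientation histogram
    per block x block cell, L2-normalized, then concatenated.
    """
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    h, w = img.shape
    feats = []
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            m = mag[y:y + block, x:x + block].ravel()
            a = ang[y:y + block, x:x + block].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-9))
    return np.concatenate(feats)
```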

    JPEG XT Image Compression with Hue Compensation for Two-Layer HDR Coding

    We propose a novel JPEG XT image compression method with hue compensation for two-layer HDR coding. LDR images produced from JPEG XT bitstreams suffer some distortion in hue due to tone-mapping operations. To suppress this color distortion, we apply a novel hue compensation method based on maximally saturated colors. Moreover, bitstreams generated with the proposed method remain fully compatible with the JPEG XT standard. In an experiment, the proposed method is demonstrated not only to produce images with little hue degradation but also to maintain well-mapped luminance, in terms of three criteria: TMQI, hue value in CIEDE2000, and the maximally saturated color on the constant-hue plane. Comment: To appear in The 4th IEEE International Conference on Consumer Electronics (ICCE) Asia, Bangkok, Thailand
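    The core reason hue can be preserved under tone mapping fits in a few lines: scaling R, G, and B by a common per-pixel factor keeps their ratios, and hence the hue, unchanged while the luminance follows the mapped values. This is only a simplified stand-in for the paper's method based on maximally saturated colors.

```python
import numpy as np

def hue_preserving_tonemap(rgb: np.ndarray, mapped_lum: np.ndarray) -> np.ndarray:
    """Rescale all three channels by one per-pixel ratio so the RGB
    proportions (hue) are preserved while luminance matches the
    tone-mapped target. Simplified illustration, not the paper's method.
    """
    lum = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    scale = mapped_lum / np.maximum(lum, 1e-6)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)
```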

    An Image Identification Scheme of Encrypted JPEG Images for Privacy-Preserving Photo Sharing Services

    We propose an image identification scheme for double-compressed encrypted JPEG images that aims to identify the encrypted JPEG images generated from an original JPEG image. To store images on photo sharing services without any visually sensitive information, encrypted JPEG images are generated using a block-scrambling-based encryption method proposed for Encryption-then-Compression systems with JPEG compression. In addition, feature vectors robust against JPEG compression are extracted from the encrypted JPEG images. The combination of image encryption and these feature vectors allows us to identify encrypted images that have been recompressed multiple times. Moreover, the proposed scheme is designed to identify images re-encrypted with different keys. Simulation results show that the identification performance of the scheme remains high even when images are recompressed and re-encrypted. Comment: This paper will be presented at the IEEE International Conference on Image Processing 2019
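    Purely as an illustration of what "robust against JPEG compression" can mean (this is not the paper's descriptor), coarsely quantized per-block means change little under repeated compression, so matching can tolerate recompression; the block size, quantization step, and tolerance below are all assumptions.

```python
import numpy as np

def dc_signature(img: np.ndarray, block: int = 8) -> np.ndarray:
    """Illustrative compression-robust feature: coarsely quantized
    per-8x8-block means of a grayscale image (step 16 assumes 8-bit
    pixel values). Not the paper's exact feature vector.
    """
    H = (img.shape[0] // block) * block
    W = (img.shape[1] // block) * block
    means = img[:H, :W].reshape(H // block, block, W // block, block).mean(axis=(1, 3))
    return np.round(means / 16).astype(np.int16).ravel()

def is_same_source(sig_a: np.ndarray, sig_b: np.ndarray, tol: float = 0.05) -> bool:
    """Declare a match if only a small fraction of signature entries differ."""
    return float(np.mean(sig_a != sig_b)) < tol
```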

    Automatic Exposure Compensation for Multi-Exposure Image Fusion

    This paper proposes a novel luminance adjustment method based on automatic exposure compensation for multi-exposure image fusion. Multi-exposure image fusion produces images without saturated regions by fusing photos taken with different exposures. Previous work has pointed out that the quality of the fused images can be improved by adjusting the luminance of the inputs, but how to determine the degree of adjustment has never been discussed. This paper therefore proposes a way to determine the degree automatically on the basis of the luminance distribution of the input multi-exposure images. Moreover, new weights for image fusion, called "simple weights", are considered for the proposed luminance adjustment method. Experimental results show that the multi-exposure images adjusted by the proposed method yield better quality than the input images in terms of well-exposedness. It is also confirmed that the proposed simple weights provide the highest scores for statistical naturalness and discrete entropy among all fusion methods. Comment: To appear in Proc. ICIP2018, October 07-10, 2018, Athens, Greece
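    A sketch of the exposure-compensation arithmetic, assuming a linear camera response and a conventional 18% mid-gray target; the target and the use of the geometric mean are illustrative assumptions, not the paper's rule for choosing the degree.

```python
import numpy as np

def auto_exposure_compensation(lum: np.ndarray, target: float = 0.18) -> np.ndarray:
    """Pick the exposure value v so that the geometric-mean luminance
    reaches the mid-gray target, then scale by 2**v, mimicking a
    linear camera response."""
    gmean = np.exp(np.mean(np.log(np.maximum(lum, 1e-6))))
    v = np.log2(target / gmean)         # required compensation in EV
    return np.clip(lum * 2.0 ** v, 0.0, 1.0)
```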

    A Pseudo Multi-Exposure Fusion Method Using Single Image

    This paper proposes a novel pseudo multi-exposure image fusion method based on a single image. Multi-exposure image fusion is used to produce images without saturated regions by using photos taken with different exposures. However, it is difficult to take photos suited to multi-exposure fusion when shooting dynamic scenes or recording video, and multi-exposure image fusion cannot be applied to existing single-exposure images or videos. The proposed method enables us to produce pseudo multi-exposure images from a single image. To do so, it utilizes the relationship between exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function. Moreover, it is shown that the use of a local contrast enhancement method allows us to produce pseudo multi-exposure images of higher quality. Most conventional multi-exposure image fusion methods are also applicable to the proposed multi-exposure images. Experimental results show the effectiveness of the proposed method in comparison with conventional ones. Comment: To appear in IEICE Trans. Fundamentals, vol.E101-A, no.11, November 2018
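    Under the linear-response assumption stated above, changing the exposure by v EV multiplies scene luminance by 2**v, so a pseudo exposure stack falls out of a single image by scaling and clipping; the EV set below is an illustrative choice, and the paper's method additionally applies local contrast enhancement.

```python
import numpy as np

def pseudo_exposures(lum: np.ndarray, evs=(-2.0, 0.0, 2.0)) -> list:
    """Generate a pseudo multi-exposure stack from one image by
    applying the linear-response relation pixel = lum * 2**v for each
    exposure value v, then clipping to the displayable range."""
    return [np.clip(lum * 2.0 ** v, 0.0, 1.0) for v in evs]
```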

    Two-Layer Lossless HDR Coding using Histogram Packing Technique with Backward Compatibility to JPEG

    This paper proposes an efficient two-layer coding method that uses the histogram packing technique while retaining backward compatibility with the legacy JPEG standard. JPEG XT, the international standard for compressing HDR images, adopts a two-layer coding scheme for backward compatibility with legacy JPEG. However, this two-layer structure gives worse lossless performance than other existing single-layer methods for HDR image compression. Moreover, lossless compression in JPEG XT has a problem in determining the coding parameters: the lossless performance depends on the input images and the parameter values, so an appropriate combination of values must be found to achieve good performance. We first point out that the histogram packing technique, which exploits the histogram sparseness of HDR images, improves lossless compression performance. We then propose a novel two-layer coding method that combines histogram packing with an additional lossless encoder. The experimental results demonstrate that the proposed method not only achieves better lossless compression performance than JPEG XT but also eliminates the need to tune image-dependent parameter values, all without losing backward compatibility with the well-known legacy JPEG standard. Comment: To appear in IEICE Trans. Fundamentals, vol.E101-A, no.11, November 2018. arXiv admin note: substantial text overlap with arXiv:1806.1074
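    A minimal sketch of histogram packing itself, which is easy to state: HDR images often use only a sparse subset of integer levels, so mapping the used values to consecutive indices shrinks the value range losslessly, and the lookup table lets the decoder invert the mapping exactly. How the packed data is split across the two layers is the paper's contribution and is not shown here.

```python
import numpy as np

def histogram_pack(img: np.ndarray):
    """Map the sparse set of used integer levels to dense indices
    0..K-1; return the packed image plus the inverse lookup table."""
    values = np.unique(img)                  # the (sparse) set of used levels
    packed = np.searchsorted(values, img)    # dense index for every pixel
    return packed, values

def histogram_unpack(packed: np.ndarray, values: np.ndarray) -> np.ndarray:
    """Exact, lossless inversion via table lookup."""
    return values[packed]
```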