
    Semantic Perceptual Image Compression using Deep Convolution Networks

    Improving the visual quality of lossy image and video compression has long been a significant problem. Recent advances in computing power, together with the availability of large training data sets, have increased interest in applying deep convolutional neural networks (CNNs) to image recognition and image processing tasks. Here, we present a powerful CNN tailored to the specific task of semantic image understanding to achieve higher visual quality in lossy compression. A modest increase in complexity is incorporated into the encoder, which allows a standard, off-the-shelf JPEG decoder to be used. While JPEG encoding may be optimized for generic images, the process is ultimately unaware of the specific content of the image being compressed. Our technique makes JPEG content-aware by designing and training a model to identify multiple semantic regions in a given image. Unlike object detection techniques, our model does not require labeling of object positions and is able to identify objects in a single pass. We present a new CNN architecture directed specifically at image compression: by adding a complete set of features for every class and then taking a threshold over the sum of all feature activations, we generate a map that highlights semantically salient regions so that they can be encoded at higher quality than background regions. Experiments are presented on the Kodak PhotoCD dataset and the MIT Saliency Benchmark dataset, in which our algorithm achieves higher visual quality for the same compressed size. Comment: Accepted to the Data Compression Conference; 11 pages, 5 figures
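The mechanism the abstract describes (sum per-class feature activations, threshold the sum into a salient-region map, then encode salient blocks at higher JPEG quality) can be sketched roughly as follows. This is an illustrative toy, not the paper's code: the function names, the 0.5 threshold, the 8x8 block size, and the quality factors 90/50 are all assumptions.

```python
import numpy as np

def salient_region_map(activations, threshold=0.5):
    """activations: (num_classes, H, W) per-class feature maps.
    Sum over classes, normalize to [0, 1], threshold into a binary mask."""
    total = activations.sum(axis=0)
    total = (total - total.min()) / (np.ptp(total) + 1e-8)
    return total > threshold

def per_block_quality(mask, block=8, q_salient=90, q_background=50):
    """Assign a JPEG quality factor to each block: higher wherever the
    block overlaps the salient mask, lower for background blocks."""
    h, w = mask.shape
    qmap = np.full((h // block, w // block), q_background)
    for i in range(h // block):
        for j in range(w // block):
            if mask[i*block:(i+1)*block, j*block:(j+1)*block].any():
                qmap[i, j] = q_salient
    return qmap

# Toy example: activations for 3 classes on a 32x32 grid, with one
# "salient object" occupying the block at row 1, column 1.
acts = np.zeros((3, 32, 32))
acts[:, 8:16, 8:16] = 1.0
qmap = per_block_quality(salient_region_map(acts))
print(qmap)  # one block at [1, 1] gets quality 90, the rest 50
```

In a real encoder the per-block quality map would drive block-wise quantization table selection; here it is just printed to show the ROI/background split.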

    Visual Saliency in Video Compression and Transmission

    This dissertation explores the concept of visual saliency (a measure of the propensity to draw visual attention) and presents several novel methods for utilizing visual saliency in video compression and transmission. Specifically, a computationally efficient method for visual saliency estimation in digital images and videos is developed, which approximates one of the best-known visual saliency models. In the context of video compression, a saliency-aware video coding method is proposed within a region-of-interest (ROI) video coding paradigm. The proposed method attempts to reduce attention-grabbing coding artifacts and keep viewers’ attention in the areas where quality is highest: it allows visual saliency to increase in high-quality parts of the frame and to decrease in non-ROI parts. Using this approach, the proposed method achieves the same subjective quality as competing state-of-the-art methods at a lower bit rate. In the context of video transmission, a novel saliency-cognizant error concealment method is presented for ROI-based video streaming, in which regions of higher visual saliency are protected more heavily than low-saliency regions. In this method, a low-saliency prior is added to the error concealment process as a regularization term, which serves two purposes. First, it provides additional side information that helps the decoder identify the correct replacement blocks for concealment. Second, when a perfectly matched block cannot be unambiguously identified, the low-saliency prior reduces viewers’ visual attention on the loss-stricken regions, resulting in higher overall subjective quality. During the course of this research, an eye-tracking dataset for several standard video sequences was created and made publicly available. This dataset can be used to test saliency models for video and to evaluate various perceptually motivated algorithms for video processing and video quality assessment.
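The low-saliency prior described above can be sketched as a regularized selection rule: each candidate replacement block is scored by a data term (how well it matches the lost block's neighborhood) plus a penalty proportional to its saliency, so that when matches are ambiguous the less salient candidate wins. This is a minimal illustrative sketch, not the dissertation's implementation; the weight `lam`, the candidate tuple layout, and the toy numbers are assumptions.

```python
def conceal_cost(match_error, saliency, lam=0.3):
    """Total cost = data term (boundary match error) + low-saliency prior.
    lam trades off match fidelity against drawing attention to the patch."""
    return match_error + lam * saliency

def pick_replacement(candidates, lam=0.3):
    """candidates: list of (block_id, match_error, saliency) tuples.
    Return the id of the candidate minimizing the regularized cost."""
    return min(candidates, key=lambda c: conceal_cost(c[1], c[2], lam))[0]

# Toy example: two candidates match the lost block equally well, so the
# data term alone cannot disambiguate; the prior breaks the tie toward
# the lower-saliency block.
candidates = [("block_a", 1.0, 0.9), ("block_b", 1.0, 0.1)]
best = pick_replacement(candidates)
print(best)  # block_b
```

This captures both stated purposes of the prior: it disambiguates otherwise-equal candidates, and it steers concealment toward regions that will attract less viewer attention.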