
    Learned Image Compression with Generalized Octave Convolution and Cross-Resolution Parameter Estimation

    The application of the context-adaptive entropy model significantly improves the rate-distortion (R-D) performance: hyperpriors and autoregressive models are jointly utilized to effectively capture the spatial redundancy of the latent representations. However, the latent representations still contain some spatial correlations. In addition, methods based on the context-adaptive entropy model cannot be accelerated during decoding by parallel computing devices such as GPUs or FPGAs. To alleviate these limitations, we propose a learned multi-resolution image compression framework that exploits the recently developed octave convolutions to factorize the latent representations into high-resolution (HR) and low-resolution (LR) parts, similar to a wavelet transform, which further improves the R-D performance. To speed up decoding, our scheme does not use a context-adaptive entropy model. Instead, we exploit an additional hyper layer, consisting of a hyper encoder and a hyper decoder, to further remove the spatial redundancy of the latent representation. Moreover, cross-resolution parameter estimation (CRPE) is introduced into the proposed framework to enhance the flow of information and further improve the rate-distortion performance. An additional information-fidelity loss is added to the total loss function to adjust the contribution of the LR part to the final bit stream. Experimental results show that our method reduces the decoding time by approximately 73.35% and 93.44%, respectively, compared with state-of-the-art learned image compression methods, while its R-D performance remains better than H.266/VVC (4:2:0) and some learning-based methods in both PSNR and MS-SSIM metrics across a wide range of bit rates.
    Comment: Accepted by signal processin
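    The HR/LR factorization at the core of this scheme follows the octave-convolution idea: features live at two spatial resolutions, and four cross paths exchange information between them. Below is a minimal PyTorch sketch of such a layer, not the authors' implementation; the channel-split ratio alpha, the layer names, and the tensor sizes are illustrative assumptions.

```python
# A minimal sketch (not the paper's code) of an octave-convolution layer that
# factorizes features into a high-resolution (HR) and a low-resolution (LR)
# branch. The split ratio "alpha" and all names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    def __init__(self, in_ch, out_ch, alpha=0.5, kernel_size=3, padding=1):
        super().__init__()
        in_lr, out_lr = int(alpha * in_ch), int(alpha * out_ch)
        in_hr, out_hr = in_ch - in_lr, out_ch - out_lr
        # Four information paths: HR->HR, HR->LR, LR->LR, LR->HR.
        self.conv_hh = nn.Conv2d(in_hr, out_hr, kernel_size, padding=padding)
        self.conv_hl = nn.Conv2d(in_hr, out_lr, kernel_size, padding=padding)
        self.conv_ll = nn.Conv2d(in_lr, out_lr, kernel_size, padding=padding)
        self.conv_lh = nn.Conv2d(in_lr, out_hr, kernel_size, padding=padding)

    def forward(self, x_hr, x_lr):
        # The LR branch lives at half the spatial resolution of the HR branch.
        y_hr = self.conv_hh(x_hr) + F.interpolate(
            self.conv_lh(x_lr), scale_factor=2, mode="nearest")
        y_lr = self.conv_ll(x_lr) + self.conv_hl(F.avg_pool2d(x_hr, 2))
        return y_hr, y_lr

# Example: a 192-channel latent split evenly between the HR and LR parts.
hr = torch.randn(1, 96, 32, 32)
lr = torch.randn(1, 96, 16, 16)
y_hr, y_lr = OctaveConv(192, 192)(hr, lr)
```

    In a codec built around such layers, the two outputs would serve as the HR and LR latent parts that are entropy-coded separately, with CRPE passing estimated parameters from one resolution to the other.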

    Fast and High-Performance Learned Image Compression With Improved Checkerboard Context Model, Deformable Residual Module, and Knowledge Distillation

    Deep learning-based image compression has made great progress recently. However, many leading schemes use a serial context-adaptive entropy model to improve the rate-distortion (R-D) performance, which is very slow. In addition, the complexities of the encoding and decoding networks are quite high and not suitable for many practical applications. In this paper, we introduce four techniques to balance the trade-off between complexity and performance. First, we are the first to introduce a deformable convolutional module into a compression framework; it removes more redundancies in the input image and thereby enhances compression performance. Second, we design a checkerboard context model with two separate distribution-parameter estimation networks and different probability models, which enables parallel decoding without sacrificing performance compared to the sequential context-adaptive model. Third, we develop an improved three-step knowledge distillation and training scheme to achieve different trade-offs between the complexity and the performance of the decoder network, which transfers both the final and intermediate results of the teacher network to the student network to help its training. Fourth, we introduce L1 regularization to make the numerical values of the latent representation more sparse; we then encode only the non-zero channels in the encoding and decoding process, which greatly reduces the encoding and decoding time. Experiments show that, compared to the state-of-the-art learned image coding scheme, our method is about 20 times faster in encoding and 70-90 times faster in decoding, and our R-D performance is also 2.3% higher. Our method outperforms the traditional H.266/VVC-intra (4:4:4) approach and some leading learned schemes in terms of PSNR and MS-SSIM metrics when tested on the Kodak and Tecnick-40 datasets.
    Comment: Submitted to Trans. Journa
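    To illustrate the parallel-decoding idea, the following PyTorch sketch shows a generic two-pass checkerboard context model: anchor positions are decoded in parallel from hyperprior features alone, and non-anchor positions additionally see the already-decoded anchors through a context convolution. This is an assumed, simplified layout, not the paper's networks; the channel counts, layer names, and single (mu, sigma) output are illustrative.

```python
# A minimal sketch (assumed layout, not the paper's implementation) of two-pass
# checkerboard decoding: "anchor" latents use only hyperprior parameters, while
# "non-anchor" latents also see the decoded anchors through a context conv.
import torch
import torch.nn as nn

def checkerboard_masks(h, w, device=None):
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    anchor = ((yy + xx) % 2 == 0).float().to(device)  # one colour of the board
    return anchor, 1.0 - anchor                       # anchor / non-anchor masks

class CheckerboardContext(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ctx = nn.Conv2d(ch, 2 * ch, 5, padding=2)        # context over anchors
        self.param_anchor = nn.Conv2d(2 * ch, 2 * ch, 1)      # hyperprior only
        self.param_nonanchor = nn.Conv2d(4 * ch, 2 * ch, 1)   # hyperprior + context

    def forward(self, y_hat, hyper):
        _, _, h, w = y_hat.shape
        a, na = checkerboard_masks(h, w, y_hat.device)
        # Pass 1: all anchor positions are estimated in parallel from the hyperprior.
        p_anchor = self.param_anchor(hyper) * a
        # Pass 2: non-anchor positions see the decoded anchors via the context conv.
        ctx = self.ctx(y_hat * a)
        p_nonanchor = self.param_nonanchor(torch.cat([hyper, ctx], dim=1)) * na
        return p_anchor + p_nonanchor  # (mu, sigma) for every latent position

y_hat = torch.randn(1, 192, 16, 16)
hyper = torch.randn(1, 384, 16, 16)
params = CheckerboardContext(192)(y_hat, hyper)
```

    Because each pass covers half of the spatial positions at once, decoding needs only two network evaluations instead of one per latent position, which is where the parallel speed-up over a sequential autoregressive model comes from.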

    Improved Hybrid Layered Image Compression using Deep Learning and Traditional Codecs

    Recently, deep learning-based methods have been applied to image compression and have achieved many promising results. In this paper, we propose an improved hybrid layered image compression framework that combines deep learning with traditional image codecs. At the encoder, we first use a convolutional neural network (CNN) to obtain a compact representation of the input image, which is losslessly encoded by the FLIF codec as the base layer of the bit stream. A coarse reconstruction of the input is obtained by another CNN from the reconstructed compact representation. The residual between the input and the coarse reconstruction is then computed and encoded by the H.265/HEVC-based BPG codec as the enhancement layer of the bit stream. Experimental results on the Kodak and Tecnick datasets show that the proposed scheme outperforms the state-of-the-art deep learning-based layered coding scheme and traditional codecs, including BPG, in both PSNR and MS-SSIM metrics across a wide range of bit rates when the images are coded in the RGB444 domain.
    Comment: Submitted to Signal Processing: Image Communicatio
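    A minimal sketch of the layered encoding flow is given below, under assumed network sizes (the real framework's architectures and codec settings differ): one CNN produces the compact base-layer representation, a second CNN yields the coarse reconstruction at the encoder, and their residual forms the enhancement layer. The calls to FLIF and BPG are left as comments rather than concrete command lines.

```python
# A minimal sketch (assumed architecture, not the authors' code) of the hybrid
# layered pipeline: CNN -> compact representation (base layer, losslessly coded,
# e.g. with FLIF), CNN -> coarse reconstruction, residual -> traditional codec
# (e.g. BPG) as the enhancement layer. Network sizes here are illustrative.
import torch
import torch.nn as nn

class CompactEncoder(nn.Module):
    """Maps the input image to a small, low-resolution representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 5, stride=2, padding=2),
        )
    def forward(self, x):
        return self.net(x)

class CoarseDecoder(nn.Module):
    """Reconstructs a coarse image from the compact representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

def encode(image):
    """Two-layer encoding: base layer (compact rep.) + enhancement layer (residual)."""
    z = CompactEncoder()(image)    # base layer, to be losslessly coded (FLIF)
    coarse = CoarseDecoder()(z)    # coarse reconstruction at the encoder side
    residual = image - coarse      # enhancement layer, to be coded by BPG
    return z, residual

x = torch.rand(1, 3, 256, 256)
base, enhancement = encode(x)
print(base.shape, enhancement.shape)  # (1, 3, 32, 32) and (1, 3, 256, 256)
```

    The base layer is thus a small, image-like tensor that a lossless codec can store cheaply, while the enhancement layer is an ordinary residual image that a strong traditional codec can compress at the desired quality.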

    Type I interferons suppress viral replication but contribute to T cell depletion and dysfunction during chronic HIV-1 infection

    The direct link between sustained type I interferon (IFN-I) signaling and HIV-1-induced immunopathogenesis during chronic infection remains unclear. Here we report studies using a monoclonal antibody to block IFN-α/β receptor 1 (IFNAR1) signaling during persistent HIV-1 infection in humanized mice (hu-mice). We discovered that, during chronic HIV-1 infection, IFNAR blockade increased viral replication, which was correlated with elevated T cell activation. Thus, IFN-Is suppress HIV-1 replication during the chronic phase but are not essential for HIV-1-induced aberrant immune activation. Surprisingly, IFNAR blockade rescued both total human T cell and HIV-specific T cell numbers despite elevated HIV-1 replication and immune activation. We showed that IFNAR blockade reduced HIV-1-induced apoptosis of CD4+ T cells. Importantly, IFNAR blockade also rescued the function of human T cells, including HIV-1-specific CD8+ and CD4+ T cells. We conclude that during persistent HIV-1 infection, IFN-Is suppress HIV-1 replication but contribute to the depletion and dysfunction of T cells.

    Blocking type I interferon signaling enhances T cell recovery and reduces HIV-1 reservoirs

    Despite the efficient suppression of HIV-1 replication that can be achieved with combined antiretroviral therapy (cART), low levels of type I interferon (IFN-I) signaling persist in some individuals. This sustained signaling may impede immune recovery and foster viral persistence. Here we report studies using a monoclonal antibody to block IFN-α/β receptor (IFNAR) signaling in humanized mice (hu-mice) that were persistently infected with HIV-1. We discovered that effective cART restored the number of human immune cells in HIV-1–infected hu-mice but did not rescue their immune hyperactivation and dysfunction. IFNAR blockade fully reversed HIV-1–induced immune hyperactivation and rescued anti–HIV-1 immune responses in T cells from HIV-1–infected hu-mice. Finally, we found that IFNAR blockade in the presence of cART reduced the size of HIV-1 reservoirs in lymphoid tissues and delayed HIV-1 rebound after cART cessation in the HIV-1–infected hu-mice. We conclude that low levels of IFN-I signaling contribute to HIV-1–associated immune dysfunction and foster HIV-1 persistence in cART-treated hosts. Our results suggest that blocking IFNAR may provide a potential strategy to enhance immune recovery and reduce HIV-1 reservoirs in individuals with sustained elevations in IFN-I signaling during suppressive cART.