
    Graph-based transforms based on prediction inaccuracy modeling for pathology image coding

    Digital pathology images are multi-gigapixel color images that usually require large amounts of bandwidth to be transmitted and stored. Lossy compression using intra-prediction offers an attractive solution to reduce the storage and transmission requirements of these images. In this paper, we evaluate the performance of the Graph-based Transform (GBT) within the context of block-based predictive transform coding. To this end, we introduce a novel framework that eliminates the need to signal graph information to the decoder to recover the coefficients. This is accomplished by computing the GBT using predicted residual blocks, which are obtained by a modeling approach that employs only the reference samples and information about the prediction mode. Evaluation results on several pathology images, in terms of the energy preserved and the MSE when a small percentage of the largest coefficients are used for reconstruction, show that the GBT can outperform the DST and DCT.
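    As a rough illustration of the idea, the sketch below builds a grid graph whose edge weights are derived from a predicted residual block, takes the eigenvectors of its Laplacian as the GBT basis, and transforms the actual residual. This is a minimal sketch, assuming a 4-connected grid and an exponential weight function; these are illustrative choices, not the paper's exact construction.

```python
# Minimal GBT sketch: graph weights come from the *predicted* residual, so
# the decoder can rebuild the identical graph without extra signalling.
import numpy as np

def gbt_basis(weights: np.ndarray) -> np.ndarray:
    """Eigenvectors of the graph Laplacian L = D - W form the GBT basis."""
    laplacian = np.diag(weights.sum(axis=1)) - weights
    _, basis = np.linalg.eigh(laplacian)  # ascending eigenvalues, DC-like first
    return basis

def grid_weights(block: np.ndarray) -> np.ndarray:
    """4-connected grid over the block; edge weights decay with the
    difference between neighbouring samples (assumed weight function)."""
    rows, cols = block.shape
    n = block.size
    w = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            for dr, dc in ((0, 1), (1, 0)):       # right and down neighbours
                rr, cc = r + dr, c + dc
                if rr < rows and cc < cols:
                    j = rr * cols + cc
                    w[i, j] = w[j, i] = np.exp(-abs(block[r, c] - block[rr, cc]))
    return w

predicted_residual = np.random.randn(4, 4)            # stand-in for the model output
actual_residual = predicted_residual + 0.1 * np.random.randn(4, 4)
basis = gbt_basis(grid_weights(predicted_residual))
coeffs = basis.T @ actual_residual.flatten()          # forward GBT
reconstructed = (basis @ coeffs).reshape(4, 4)        # inverse GBT
```

    Because the basis depends only on data the decoder can reproduce (reference samples and prediction mode), no graph description needs to be transmitted.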

    Prediction-based coding with rate control for lossless region of interest in pathology imaging

    Online collaborative tools for medical diagnosis based on digital pathology images have experienced an increase in demand in recent years. Due to the large sizes of pathology images, rate control (RC) techniques that allow accurate control of compressed file sizes are critical to meet existing bandwidth restrictions while maximizing retrieved image quality. Recently, some RC contributions to Region of Interest (RoI) coding for pathology imaging have been presented. These encode the RoI without loss and the background with some loss, and focus on providing high RC accuracy for the background area. However, none of these RC contributions deals efficiently with arbitrary RoI shapes, which hinders the accuracy of background definition and rate control. This manuscript presents a novel prediction-based coding system with a novel RC algorithm for RoI coding that allows arbitrary RoI shapes. Compared to other state-of-the-art methods, our proposed algorithm significantly improves RC accuracy while reducing the compressed data rate for the RoI by 30%. Furthermore, it offers higher quality in the reconstructed background areas, which has been linked to better clinical performance by expert pathologists. Finally, the proposed method also allows lossless compression of both the RoI and the background, producing data volumes 14% lower than coding techniques included in DICOM, such as HEVC and JPEG-LS.
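    To make the budget split concrete, here is a minimal sketch of the core RC loop under the lossless-RoI constraint: the RoI cost is fixed, and a monotone rate model for the background is inverted by bisection so the total meets the target. The rate model and all names are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical rate model and bisection loop for RoI rate control.

def background_rate(quality: float) -> float:
    """Hypothetical bits-per-pixel model: higher quality -> more bits."""
    return 8.0 * quality

def pick_background_quality(target_bits: float, roi_bits: float,
                            bg_pixels: int, iters: int = 30) -> float:
    """Bisect on quality so that roi_bits + background bits ~= target_bits.
    The RoI is coded losslessly, so its cost is fixed."""
    budget = target_bits - roi_bits          # bits left for the background
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if background_rate(mid) * bg_pixels > budget:
            hi = mid                         # too expensive: lower quality
        else:
            lo = mid                         # fits: try higher quality
    return lo

quality = pick_background_quality(target_bits=2_000_000,
                                  roi_bits=1_200_000, bg_pixels=500_000)
```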

    Rate control for HEVC intra-coding based on piecewise linear approximations

    This paper proposes a rate control (RC) algorithm for intra-coded sequences (I-frames) within the context of block-based predictive transform coding (PTC) that employs piecewise linear approximations of the rate-distortion (RD) curve of each frame. Specifically, it employs information about the rate (R) and distortion (D) of already compressed blocks within the current frame to linearly approximate the slope of the corresponding RD curve. The proposed algorithm is implemented in the High Efficiency Video Coding (HEVC) standard and compared with the current HEVC RC algorithm, which is based on a trained rate-lambda (R-λ) model. Evaluations on a variety of intra-coded sequences show that the proposed RC algorithm not only attains the overall target bit rate more accurately than the current RC algorithm, but is also capable of encoding each I-frame at a more constant bit rate according to the overall bit budget, thus avoiding large bit-rate fluctuations across the sequence.
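    The sketch below illustrates the piecewise linear idea, assuming blocks are coded sequentially and report their (R, D) after coding; the two-point slope estimate and the fallback policy are illustrative choices, not the paper's exact formulation.

```python
# Piecewise linear RD approximation from already-coded blocks (sketch).

class PiecewiseLinearRC:
    """Approximate the frame's RD curve by the line through the two most
    recently observed (R, D) points and invert it to pick a block rate."""

    def __init__(self):
        self.points = []                      # (rate, distortion) history

    def observe(self, rate: float, distortion: float) -> None:
        self.points.append((rate, distortion))

    def rate_for_distortion(self, target_d: float, fallback_rate: float) -> float:
        if len(self.points) < 2:
            return fallback_rate              # no slope estimate yet
        (r0, d0), (r1, d1) = self.points[-2], self.points[-1]
        if abs(d1 - d0) < 1e-12:
            return fallback_rate              # flat segment: slope undefined
        slope = (r1 - r0) / (d1 - d0)         # dR/dD on the current segment
        return max(r1 + slope * (target_d - d1), 0.0)

rc = PiecewiseLinearRC()
rc.observe(rate=1200, distortion=34.0)
rc.observe(rate=1500, distortion=30.5)
bits_for_next_block = rc.rate_for_distortion(target_d=32.0, fallback_rate=1300)
```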

    JNCD-based perceptual compression of RGB 4:4:4 image data

    In contemporary lossy image coding applications, a desired aim is to decrease, as much as possible, the bits per pixel without inducing perceptually conspicuous distortions in RGB image data. In this paper, we propose a novel color-based perceptual compression technique named RGB-PAQ. RGB-PAQ is based on the CIELAB Just Noticeable Color Difference (JNCD) and Human Visual System (HVS) spectral sensitivity. We utilize CIELAB JNCD and HVS spectral sensitivity modeling to separately adjust quantization levels at the Coding Block (CB) level. In essence, our method is designed to capitalize on the inability of the HVS to perceptually differentiate photons in very similar wavelength bands. In terms of application, the proposed technique can be used with RGB (4:4:4) image data of various bit depths and spatial resolutions including, for example, true color and deep color images in HD and Ultra HD resolutions. In the evaluations, we compare RGB-PAQ with a set of anchor methods, namely HEVC, JPEG, JPEG 2000, and Google WebP. Compared with HEVC HM RExt, RGB-PAQ achieves bit-rate reductions of up to 77.8%. The subjective evaluations confirm that the compression artifacts induced by RGB-PAQ are either imperceptible (MOS = 5) or near-imperceptible (MOS = 4) in the vast majority of cases.
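    The following sketch illustrates a JNCD test at the block level: when the mean CIELAB color difference between adjacent pixels falls below the just-noticeable threshold, the block is flagged for coarser quantization. The CIE76 Delta-E formula, the commonly cited 2.3 threshold, and the QP offset are assumptions used for illustration, not RGB-PAQ's internal model.

```python
# Block-level JNCD screening sketch (CIE76 Delta-E, assumed threshold/offset).
import numpy as np

JNCD = 2.3  # just-noticeable colour difference in CIELAB (common value)

def block_qp_offset(lab_block: np.ndarray) -> int:
    """Mean Delta-E (CIE76) between horizontally adjacent pixels of an
    (H, W, 3) CIELAB block; sub-threshold blocks tolerate coarser
    quantisation, signalled here as a positive QP offset."""
    diff = lab_block[:, 1:, :] - lab_block[:, :-1, :]
    delta_e = np.sqrt((diff ** 2).sum(axis=-1)).mean()
    return 2 if delta_e < JNCD else 0         # offset value is an assumption

# Fake Lab block with plausible ranges (L in [0, 100], a/b in [-128, 127]).
lab = np.random.rand(8, 8, 3) * [100.0, 255.0, 255.0] - [0.0, 128.0, 128.0]
offset = block_qp_offset(lab)
```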

    Compresión Digital en Imágenes Médicas (Digital Compression in Medical Images)

    Imaging technology has long played a principal role in the medical domain, and as such, its use is widespread in the diagnosis and treatment of numerous health conditions. Concurrently, new developments in imaging techniques and sensor technology make possible the acquisition of increasingly detailed images of several organs of the human body. This improvement is indeed advantageous for medical practitioners. However, it comes at a cost in the form of the storage and telecommunication infrastructures needed to handle high-resolution images reliably. Ordinarily, digital compression is a mainstay in the efficient management of digital media, including still images and video. From a technical point of view, medical imaging could take full advantage of digital compression technology. However, nuances unique to medical data impose constraints on the application of digital compression to medical images. This paper presents an overview of digital compression in the context of still medical images, along with a brief discussion of related regulatory and legal implications.

    Mosaic-Based Color-Transform Optimization for Lossy and Lossy-to-Lossless Compression of Pathology Whole-Slide Images

    Other grants: this work has been funded by the EU Marie Curie CIG Programme under Grant PIMCO and by the Engineering and Physical Sciences Research Council (EPSRC), UK. The use of whole-slide images (WSIs) in pathology entails stringent storage and transmission requirements because of their huge dimensions. Therefore, image compression is an essential tool to enable efficient access to these data. In particular, color transforms are needed to exploit the very high degree of inter-component correlation and obtain competitive compression performance. Even though state-of-the-art color transforms remove some redundancy, they disregard important details of the compression algorithm applied after the transform. Therefore, their coding performance is not optimal. We propose an optimization method called mosaic optimization for designing irreversible and reversible color transforms simultaneously optimized for any given WSI and the subsequent compression algorithm. Mosaic optimization is designed to attain reasonable computational complexity and enable continuous scanner operation. Exhaustive experimental results indicate that, for JPEG 2000 at identical compression ratios, the optimized transforms yield images more similar to the original than the other state-of-the-art transforms. Specifically, irreversible optimized transforms outperform the Karhunen-Loève Transform in terms of PSNR (up to 1.1 dB), the HDR-VDP-2 visual distortion metric (up to 3.8 dB), and the accuracy of computer-aided nuclei detection tasks (F1 score up to 0.04 higher). In addition, reversible optimized transforms achieve PSNR, HDR-VDP-2, and nuclei detection accuracy gains of up to 0.9 dB, 7.1 dB, and 0.025, respectively, when compared with the reversible color transform in lossy-to-lossless compression regimes.
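    As a rough illustration of a data-derived color transform, the sketch below fits a plain 3x3 KLT to pixel samples drawn from patches (a "mosaic") of the slide, which keeps the fitting cost compatible with continuous scanning. This is a minimal sketch: the paper's mosaic optimization additionally tunes the transform for the downstream codec, which this sketch does not attempt.

```python
# KLT colour transform fitted to sampled WSI pixels (illustrative sketch).
import numpy as np

def klt_color_transform(samples: np.ndarray):
    """samples: (N, 3) RGB pixels drawn from patches of the image."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples - mean, rowvar=False)
    _, vecs = np.linalg.eigh(cov)             # ascending eigenvalues
    transform = vecs[:, ::-1].T               # principal component first
    return transform, mean

def apply_transform(image: np.ndarray, transform: np.ndarray,
                    mean: np.ndarray) -> np.ndarray:
    flat = image.reshape(-1, 3) - mean
    return (flat @ transform.T).reshape(image.shape)

patch_pixels = np.random.rand(10_000, 3)      # stand-in for sampled WSI pixels
transform, mean = klt_color_transform(patch_pixels)
decorrelated = apply_transform(np.random.rand(64, 64, 3), transform, mean)
```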

    Graph based transforms for block-based predictive transform coding

    Orthogonal transforms are a key aspect of the encoding and decoding process in many state-of-the-art compression systems. The transform in block-based predictive transform coding (PTC) is essential for improving coding performance, as it decorrelates the signal in the form of transform coefficients. Recently, the Graph-based Transform (GBT) has been shown to attain promising results for data decorrelation and energy compaction, especially for block-based PTC. However, in order to reconstruct a frame coded with the GBT in block-based PTC, extra information needs to be signalled in the bitstream, which may lead to an increased overhead. Additionally, the same graph must be available at the reconstruction stage to compute the inverse GBT of each block. In this thesis, we propose a novel class of GBTs to enhance transform performance. These GBTs adopt several methods to ensure that the same graph is available at the decoder when reconstructing video frames. Our methods to predict the graph can be categorized into two types: non-learning-based approaches and deep learning (DL) based prediction. For the first type, our method uses reference samples and template-based strategies to reconstruct the same graph. For the second, we learn the graphs so that the information needed to compute the inverse transform is common knowledge between the compression and reconstruction processes. Finally, we train our model online to avoid issues related to the amount, quality, and relevance of the training data. Our evaluation covers all the standard HEVC test classes, from class A to F/screen content, spanning varied resolutions and characteristics. Our experimental results show that the proposed transforms outperform non-trainable transforms, such as the DCT and DCT/DST commonly employed in current video codecs, in terms of compression and reconstruction quality.
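    To illustrate the central constraint of the thesis (the decoder must be able to rebuild the same graph without side information), here is a minimal sketch in which both sides derive the graph deterministically from a shared intra prediction. The toy DC predictor, path-graph topology, and weight function are illustrative assumptions.

```python
# Shared graph derivation at encoder and decoder (sketch).
import numpy as np

def dc_prediction(ref_top: np.ndarray, ref_left: np.ndarray) -> np.ndarray:
    """Toy intra prediction from reconstructed reference samples, which are
    identical at the encoder and the decoder."""
    dc = 0.5 * (ref_top.mean() + ref_left.mean())
    return np.full((ref_left.size, ref_top.size), dc)

def laplacian_from_prediction(pred: np.ndarray) -> np.ndarray:
    """Path-graph Laplacian with weights driven by the prediction; any
    deterministic construction works as long as both sides use the same one."""
    x = pred.flatten()
    n = x.size
    w = np.zeros((n, n))
    for i in range(n - 1):
        w[i, i + 1] = w[i + 1, i] = np.exp(-abs(x[i] - x[i + 1]))
    return np.diag(w.sum(axis=1)) - w

def gbt(laplacian: np.ndarray) -> np.ndarray:
    return np.linalg.eigh(laplacian)[1]

ref_top, ref_left = np.arange(4.0), np.arange(4.0)
pred = dc_prediction(ref_top, ref_left)
encoder_basis = gbt(laplacian_from_prediction(pred))   # encoder side
decoder_basis = gbt(laplacian_from_prediction(pred))   # decoder side
assert np.allclose(encoder_basis, decoder_basis)       # nothing to signal
```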

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, easy information acquisition, and growing data rates, a critical challenge emerges in efficient data handling. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with increasing requirements for massive storage capacity. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnosis accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks while achieving state-of-the-art results, including in data compression, tremendous opportunities for contributions open up. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.

    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.

    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function in the spatial medical domain (16 bit depths). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, compared to other state-of-the-art lossless compression standards.

    This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for losslessly compressing 3D medical images (16 bit depths). The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much loss in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and acquired with distinct scanning modalities (i.e. CT and MRI).

    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance-sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
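    A minimal sketch of the many-to-one prediction step described above, written in PyTorch with assumed layer sizes and context length (MedZip's actual configuration is not reproduced here); the residual between the prediction and the true voxel is what an arithmetic coder would encode losslessly.

```python
# Many-to-one LSTM voxel predictor (sketch with assumed hyperparameters).
import torch
import torch.nn as nn

class VoxelPredictor(nn.Module):
    """A causal sequence of neighbouring voxel intensities in, a single
    predicted target voxel out."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(context)           # context: (batch, seq_len, 1)
        return self.head(out[:, -1, :])       # prediction from the last step

model = VoxelPredictor()
context = torch.rand(8, 16, 1)                # 8 voxels, 16-sample context each
target = torch.rand(8, 1)
prediction = model(context)
residual = target - prediction                # residuals go to the entropy coder
loss = residual.pow(2).mean()                 # training objective (assumption)
loss.backward()
```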

    Analysis-driven lossy compression of DNA microarray images

    DNA microarrays are one of the fastest-growing new technologies in the field of genetic research, and DNA microarray images continue to grow in number and size. Since analysis techniques are under active and ongoing development, the storage, transmission, and sharing of DNA microarray images need to be addressed, with compression playing a significant role. However, existing lossless coding algorithms yield only limited compression performance (compression ratios below 2:1), whereas lossy coding methods may introduce unacceptable distortions in the analysis process. This work introduces a novel Relative Quantizer (RQ), which employs non-uniform quantization intervals designed for improved compression while bounding the impact on DNA microarray analysis. This quantizer constrains the maximum relative error introduced into the quantized imagery, devoting higher precision to pixels critical to the analysis process. For suitable parameter choices, the resulting variations in the DNA microarray analysis are less than half of those inherent to the experimental variability. Experimental results reveal that appropriate analysis can still be performed for average compression ratios exceeding 4.5:1.
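    A minimal sketch of a quantizer with bounded relative error, assuming geometrically spaced intervals (a standard construction for this property; the paper's exact RQ interval design may differ): intervals grow with intensity, so bright, analysis-critical pixels keep proportional precision.

```python
# Geometric (relative-error-bounded) quantiser sketch.
import numpy as np

def rq_quantize(pixels: np.ndarray, eps: float) -> np.ndarray:
    """Map non-negative pixel values to geometric bin indices; within a bin
    the relative spread of (pixel + 1) is at most (1 + eps)."""
    return np.floor(np.log(pixels.astype(np.float64) + 1.0)
                    / np.log(1.0 + eps)).astype(np.int64)

def rq_dequantize(indices: np.ndarray, eps: float) -> np.ndarray:
    """Reconstruct at the geometric midpoint of each bin, so the relative
    error is bounded by roughly eps / 2."""
    lo = (1.0 + eps) ** indices - 1.0
    hi = (1.0 + eps) ** (indices + 1) - 1.0
    return np.sqrt((lo + 1.0) * (hi + 1.0)) - 1.0

image = np.random.randint(0, 65536, size=(64, 64))    # 16-bit microarray pixels
q = rq_quantize(image, eps=0.05)
recon = rq_dequantize(q, eps=0.05)
relative_error = np.abs(recon - image) / (image + 1.0)
```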