187 research outputs found

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    The ever-increasing importance of accelerated information processing, communication, and storage is a major requirement of the big-data era. With the extensive rise in data availability, ease of information acquisition, and growing data rates, efficient data handling becomes a critical challenge. Even with advanced hardware and the availability of multiple Graphics Processing Units (GPUs), there remains strong demand to use these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially as modern scanners annually produce higher-resolution, more densely sampled medical images with growing requirements for massive storage capacity. An effective compression method would essentially remove the bottleneck in data transmission and storage. Since medical information is critical and plays an influential role in diagnostic accuracy, exact reconstruction with no loss in quality must be guaranteed, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in achieving state-of-the-art results on many tasks, including data compression, there are tremendous opportunities for contributions. While considerable effort has been devoted to lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed neural-network-based data predictor is trained to minimise the difference from the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16-bit depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood, using samples taken from various scanning settings. We evaluate our proposed MedZip models on losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, in comparison with other state-of-the-art lossless compression standards.
    This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for losslessly compressing 3D medical images (16-bit depth). The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much loss of compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
    To conclude, we present a novel data-driven sampling scheme that uses weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance-sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
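    The prediction-plus-residual pipeline described above can be sketched in miniature. A simple causal-neighbour average stands in for the trained LSTM predictor, and the residual list stands in for the arithmetic-coded stream; all names are illustrative, not the thesis code.

```python
def causal_neighbours(vol, z, y, x):
    """Previously decoded neighbours of voxel (z, y, x) in scan order."""
    coords = [(z, y, x - 1), (z, y - 1, x), (z - 1, y, x)]
    return [vol[a][b][c] for a, b, c in coords if a >= 0 and b >= 0 and c >= 0]

def predict(neigh):
    """Stand-in for the learned predictor: mean of causal neighbours."""
    return round(sum(neigh) / len(neigh)) if neigh else 0

def encode(vol):
    """Residuals in scan order; these would feed an arithmetic coder."""
    residuals = []
    for z, sl in enumerate(vol):
        for y, row in enumerate(sl):
            for x, v in enumerate(row):
                residuals.append(v - predict(causal_neighbours(vol, z, y, x)))
    return residuals

def decode(residuals, shape):
    """Reconstruct exactly by re-running the same predictor on decoded data."""
    Z, Y, X = shape
    vol = [[[0] * X for _ in range(Y)] for _ in range(Z)]
    it = iter(residuals)
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                vol[z][y][x] = next(it) + predict(causal_neighbours(vol, z, y, x))
    return vol
```

    Because encoder and decoder run the identical predictor on identical causal context, the round trip is exactly lossless; only the residual distribution (ideally narrow) determines the coded size.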

    Lossless Compression of Medical Image Sequences Using a Resolution Independent Predictor and Block Adaptive Encoding

    The block-based lossless coding technique presented in this paper targets the compression of volumetric medical images of 8-bit and 16-bit depth. Its novelty lies in its ability to select a threshold for prediction and an optimal block size for encoding. A resolution-independent gradient edge detector is used together with a block-adaptive arithmetic encoding algorithm, and extensive experimental tests are carried out to find a universal threshold value and an optimal block size independent of image resolution and modality. The performance of the proposed technique is demonstrated and compared with benchmark lossless compression algorithms. Bits-per-pixel (BPP) values obtained with the proposed algorithm show that it effectively reduces inter-pixel and coding redundancy. In terms of coding efficiency, the proposed technique outperforms CALIC and JPEG-LS on volumetric medical images by 0.70% and 4.62%, respectively.
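    The paper's own resolution-independent gradient edge detector is not reproduced here, but the general idea of switching the predictor at edges can be illustrated with the MED (median edge detection) predictor used by JPEG-LS, one of the baselines above:

```python
def med_predict(w, n, nw):
    """Predict a pixel from its west (w), north (n), and north-west (nw)
    causal neighbours, switching between edge and planar modes."""
    if nw >= max(w, n):       # gradient suggests an edge: take the smaller
        return min(w, n)
    if nw <= min(w, n):       # edge in the other direction: take the larger
        return max(w, n)
    return w + n - nw         # smooth region: planar (gradient) prediction
```

    A threshold-based detector such as the paper's plays the same role: it decides, per pixel, which local prediction rule best matches the surrounding gradient.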

    Diagnostic Compression of Biomedical Volumes

    In this work we deal with lossy compression of biomedical volumes. By force of circumstances, diagnostic compression is bound to subjective judgment. With respect to the algorithms, however, the coding methodology needs to be shaped around three factors beyond compression itself: the medical data, the specific usage, and the particular end-user. Biomedical volumes can have very different characteristics deriving from imaging modality, resolution, and voxel aspect ratio. Moreover, volumes are usually viewed slice by slice on a lightbox, along different cutting directions (typically one of the three voxel axes). We show why and how these aspects impact the choice of coding algorithm and a possible extension of well-known 2D algorithms to more efficient 3D versions. Cross-correlation between reconstruction error and signal is a key aspect to take into account; we suggest applying non-uniform quantization to the wavelet coefficients in order to reduce slice PSNR variation. Once a good neutral coding for a given volume is obtained, non-uniform quantization can also be made space-variant in order to reach a more objective quality in Volumes of Diagnostic Interest (VoDI), which in turn can determine the diagnostic quality of the entire data set.
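    The non-uniform quantization idea can be sketched with a magnitude-dependent step size; the two-zone rule and the step values below are illustrative assumptions, not the scheme evaluated in the paper.

```python
def quantize(coeff, fine_step=2, coarse_step=8, threshold=16):
    """Quantize a wavelet coefficient with a magnitude-dependent step:
    small coefficients keep more precision than large ones."""
    step = fine_step if abs(coeff) < threshold else coarse_step
    return round(coeff / step) * step

# Reconstruction error stays small for small coefficients and is
# allowed to grow for large ones.
coeffs = [1, -3, 12, 40, -75]
recon = [quantize(c) for c in coeffs]
```

    Making the quantizer space-variant, as the abstract suggests, would simply mean letting `threshold` and the steps depend on whether the coefficient lies inside a VoDI.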

    CompaCT: Fractal-Based Heuristic Pixel Segmentation for Lossless Compression of High-Color DICOM Medical Images

    Medical image compression is a widely studied field of data processing due to its prevalence in modern digital databases. The domain requires a high color depth of 12 bits per pixel component for accurate analysis by physicians, primarily in the DICOM format. Standard raster-based compression of images via filtering is well known; however, it remains suboptimal in the medical domain due to non-specialized implementations. This study proposes a lossless medical image compression algorithm, CompaCT, that targets spatial features and patterns of pixel concentration for dynamically enhanced data processing. The algorithm employs fractal pixel traversal coupled with a novel approach of segmentation and meshing between pixel blocks for preprocessing. Delta and entropy coding are then applied to complete the compression pipeline. The proposal demonstrates that compression with fractal segmentation preprocessing yields enhanced results while remaining lossless in its reconstruction accuracy. CompaCT's compression ratio is evaluated on 3954 high-color CT scans against industry-standard compression techniques (i.e., JPEG2000, RLE, ZIP, PNG), and its reconstruction performance is assessed with error metrics to verify lossless image recovery after decompression. The results demonstrate that CompaCT can compress and losslessly reconstruct medical images while being 37% more space-efficient than industry-standard compression systems.
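    CompaCT's actual fractal traversal and block meshing are not reproduced here, but the underlying idea of a locality-preserving scan followed by delta coding can be sketched with a Morton (Z-order) curve as a hypothetical stand-in:

```python
def morton_key(y, x, bits=8):
    """Interleave the bits of (y, x) to get the Z-order index."""
    key = 0
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)
        key |= ((y >> b) & 1) << (2 * b + 1)
    return key

def zorder_delta_encode(img):
    """Visit pixels in Z-order and store successive differences, which
    stay small when the curve keeps spatial neighbours adjacent."""
    h, w = len(img), len(img[0])
    order = sorted(((y, x) for y in range(h) for x in range(w)),
                   key=lambda p: morton_key(*p))
    prev, deltas = 0, []
    for y, x in order:
        deltas.append(img[y][x] - prev)
        prev = img[y][x]
    return deltas, order

def zorder_delta_decode(deltas, order, h, w):
    """Invert the delta coding along the same traversal (lossless)."""
    img = [[0] * w for _ in range(h)]
    prev = 0
    for d, (y, x) in zip(deltas, order):
        prev += d
        img[y][x] = prev
    return img
```

    The small deltas produced by a locality-preserving traversal are exactly what makes the subsequent entropy-coding stage effective.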

    Design of Multiplier for Medical Image Compression Using Urdhava Tiryakbhyam Sutra

    Compressing medical images is one of the challenging areas in the healthcare industry and calls for effective design of compression algorithms. Conventional compression algorithms used on medical images do not offer enhanced computational capability in terms of processing speed and are heavily dependent on hardware resources. This paper identifies the potential of Vedic mathematics, in the form of the Urdhava Tiryakbhyam sutra, for designing an efficient multiplier that can enhance the capabilities of an existing processor and thereby improve compression performance. The design of the proposed system is discussed with respect to five significant algorithms, and the proposed study was tested with heterogeneous samples of medical images, finding that the system offers approximately a 57% reduction in size without any significant loss of data.
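    The sutra's "vertically and crosswise" rule generates all partial-product columns independently, which is what makes it attractive for parallel hardware multipliers; a software illustration on decimal digits (a sketch of the arithmetic rule, not the paper's hardware design):

```python
def urdhva_multiply(a, b):
    """Multiply two non-negative integers column by column, as in the
    Urdhava Tiryakbhyam ("vertically and crosswise") rule."""
    xs = [int(d) for d in str(a)][::-1]   # least significant digit first
    ys = [int(d) for d in str(b)][::-1]
    cols = [0] * (len(xs) + len(ys))
    for i, xd in enumerate(xs):           # every product with i + j = k
        for j, yd in enumerate(ys):       # lands in column k; in hardware
            cols[i + j] += xd * yd        # these columns form in parallel
    carry, digits = 0, []
    for c in cols:                        # resolve carries in one pass
        carry, d = divmod(c + carry, 10)
        digits.append(d)
    return int("".join(map(str, digits[::-1])))
```

    In a hardware realization the column products are computed concurrently and only the final carry chain is sequential, which is the source of the speed-up the paper exploits.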

    Selective Compression of Medical Images via Intelligent Segmentation and 3D-SPIHT Coding

    By Bohan Fan, The University of Wisconsin-Milwaukee, 2018, under the supervision of Professor Zeyun Yu. With increasingly high-resolution 3D volumetric medical images being widely used in clinical patient treatment, efficient image compression techniques are in great demand due to the costs of storage and transmission time. While various algorithms are available, the conflict between a high compression rate and degraded image quality can partially be resolved with the region of interest (ROI) coding technique. Instead of compressing the entire image uniformly, we can segment it into a critical diagnosis zone (the ROI) and a background zone, and apply lossless compression or a low compression rate to the former and a high compression rate to the latter, without losing much clinically important information. In this thesis, we explore a medical image transmission process that uses a deep learning network, 3D-Unet, to segment the region of interest in volumetric images and the 3D-SPIHT algorithm to encode the images for compression, which can potentially be used in medical data sharing scenarios. In our experiments, we train a 3D-Unet on a dataset of spine images with ground-truth labels and use the trained model to extract the vertebral bodies of the test data. The segmented vertebral regions are dilated to generate the region of interest, which is encoded by the 3D-SPIHT algorithm at a low compression ratio while the rest of the image (the background) is coded at a high compression ratio, achieving an excellent balance between image quality in the region of interest and high compression elsewhere.
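    The ROI workflow can be sketched in miniature: dilate a segmentation mask, then keep ROI pixels exact while coarsely quantizing the background. 3D-Unet and 3D-SPIHT themselves are not reproduced; the 2D setting, the dilation radius, and the background step are illustrative stand-ins.

```python
def dilate(mask, radius=1):
    """Grow a binary 2D mask by a square structuring element, mimicking
    the dilation applied to the segmented vertebral regions."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                mask[yy][xx]
                for yy in range(max(0, y - radius), min(h, y + radius + 1))
                for xx in range(max(0, x - radius), min(w, x + radius + 1))))
    return out

def selective_compress(img, mask, bg_step=16):
    """Lossless inside the (dilated) ROI, coarse quantization outside;
    the quantizer stands in for the high-ratio 3D-SPIHT background pass."""
    roi = dilate(mask)
    return [[px if roi[y][x] else (px // bg_step) * bg_step
             for x, px in enumerate(row)]
            for y, row in enumerate(img)]
```

    Dilating the mask before encoding is a safety margin: it keeps clinically relevant pixels near the segmentation boundary inside the lossless zone.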

    Point cloud data compression

    The rapid growth in the popularity of Augmented Reality (AR), Virtual Reality (VR), and Mixed Reality (MR) experiences has resulted in an exponential surge of three-dimensional data. Point clouds have emerged as a common representation for capturing and visualizing three-dimensional data in these environments, and a substantial research effort has consequently been dedicated to developing efficient compression algorithms for point cloud data. This Master's thesis investigates the current state-of-the-art lossless point cloud geometry compression techniques, explores some of them in more detail, proposes improvements and/or extensions to enhance them, and provides directions for future work on this topic.
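    Many of the lossless geometry compressors such a survey covers are built on octree occupancy coding; a minimal recursive encoder over integer coordinates, assuming a cube of side 2^k and not modelling any specific codec, looks like:

```python
def encode_octree(points, origin=(0, 0, 0), size=8):
    """Emit one 8-bit occupancy byte per internal node, depth first.
    Each byte says which of the 8 child octants contain points."""
    if size == 1 or not points:
        return []
    half = size // 2
    ox, oy, oz = origin
    children = [[] for _ in range(8)]
    for x, y, z in points:
        idx = (((x - ox) >= half) << 2) | (((y - oy) >= half) << 1) \
              | ((z - oz) >= half)
        children[idx].append((x, y, z))
    stream = [sum(1 << i for i, c in enumerate(children) if c)]
    for i, c in enumerate(children):
        if c:
            child_origin = (ox + half * ((i >> 2) & 1),
                            oy + half * ((i >> 1) & 1),
                            oz + half * (i & 1))
            stream += encode_octree(c, child_origin, half)
    return stream
```

    A decoder replays the same subdivision from the occupancy bytes, so geometry is recovered exactly; real codecs then arithmetic-code the occupancy bytes with learned or handcrafted context models.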