23,025 research outputs found

    A TWO COMPONENT MEDICAL IMAGE COMPRESSION TECHNIQUES FOR DICOM IMAGES

    To meet the demand for high-speed image transmission, efficient image storage, and remote treatment, an efficient image compression technique is essential. Wavelet theory has great potential in medical image compression, yet most commercial medical image viewers do not provide scalability in image compression. This paper discusses a medical application whose core module is a viewer for Digital Imaging and Communications in Medicine (DICOM) images. Progressive transmission of medical images over the Internet has emerged as a promising protocol for teleradiology applications. The major issue in teleradiology is the difficulty of transmitting large volumes of medical data over relatively low bandwidth. Recent image compression techniques have improved viability by reducing the bandwidth requirement and enabling cost-effective delivery of medical images for primary diagnosis. This paper presents an effective algorithm to compress and reconstruct DICOM images. DICOM is a standard for handling, storing, printing, and transmitting information in medical imaging; these medical images are volumetric, consisting of a sequence of slices through a given part of the body. The DICOM image is first decomposed by the Haar wavelet decomposition method, and the wavelet coefficients are encoded using the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Separately, a Discrete Cosine Transform (DCT) is performed on the images and the coefficients are JPEG coded. The quality of the image compressed by each method is compared, and the method exhibiting the highest Peak Signal-to-Noise Ratio (PSNR) is retained for that image. The performance of this two-component medical image compression technique is then evaluated.
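The pipeline described above (Haar decomposition of a slice, coefficient coding, and PSNR used to pick the better branch) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the SPIHT and JPEG coding stages are omitted, and the lossy step is approximated by simply zeroing the detail subbands.

```python
import numpy as np

def haar2d(img):
    """One level of the 2-D Haar decomposition (average/difference form)."""
    img = img.astype(float)
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row averages
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row differences
    rows = np.hstack([lo, hi])
    lo = (rows[0::2, :] + rows[1::2, :]) / 2.0  # column averages
    hi = (rows[0::2, :] - rows[1::2, :]) / 2.0  # column differences
    return np.vstack([lo, hi])

def ihaar2d(c):
    """Exact inverse of haar2d: even = avg + diff, odd = avg - diff."""
    h, w = c.shape
    rows = np.empty((h, w))
    rows[0::2, :] = c[:h // 2, :] + c[h // 2:, :]
    rows[1::2, :] = c[:h // 2, :] - c[h // 2:, :]
    out = np.empty((h, w))
    out[:, 0::2] = rows[:, :w // 2] + rows[:, w // 2:]
    out[:, 1::2] = rows[:, :w // 2] - rows[:, w // 2:]
    return out

def psnr(orig, recon, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; infinite for a perfect match."""
    mse = np.mean((np.asarray(orig, float) - np.asarray(recon, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Lossy demo on a synthetic 8x8 ramp: drop all three detail subbands.
img = np.arange(64, dtype=float).reshape(8, 8)
c = haar2d(img)
c[4:, :] = 0.0   # vertical/diagonal detail subbands
c[:4, 4:] = 0.0  # horizontal detail subband
approx = ihaar2d(c)
```

With no coefficients dropped the transform is perfectly invertible; zeroing the details gives a coarse approximation whose quality the PSNR measures.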

    Progressive Transmission Techniques in Medical Imaging

    Abstract: Progressive transmission of an image permits gradual improvement in quality: a coarse version is delivered first and refined as more data arrive, eventually restoring the full quality of the image. This is useful for medical image transmission, where image quality must be preserved, and for picture archiving and communication systems (PACS). It can also achieve a high effective compression ratio by eliminating the need to transmit the remaining portion of an image when the transmission is interrupted. The technique is particularly useful where channel bandwidth is limited and the amount of data is large. To send image data progressively, the data should be organized hierarchically in order of importance, for example from the global characteristics of an image down to local detail. In this paper, the various methods used in the progressive transmission of medical images are analyzed, and a Haar-wavelet-based image transmission scheme is presented that uses the discrete wavelet transform to convert a digital image from the spatial domain into the frequency domain. The concurrent computing used here significantly reduces both the computation overhead and the transmission time.
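The hierarchical ordering described above (global characteristics first, local detail later) can be sketched with a simple image pyramid. This is an illustrative resolution pyramid under an assumed 2x2-averaging downsampler, not the paper's Haar/DWT scheme: the coarse image is "transmitted" first and each detail layer refines it, with the final stage reconstructing the original exactly.

```python
import numpy as np

def build_pyramid(img, levels):
    """Decompose img into a coarse approximation plus per-level detail arrays.
    details[0] is the finest (full-size) layer, details[-1] the coarsest."""
    details = []
    cur = img.astype(float)
    for _ in range(levels):
        # 2x2 block averages form the next coarser level
        approx = (cur[0::2, 0::2] + cur[0::2, 1::2] +
                  cur[1::2, 0::2] + cur[1::2, 1::2]) / 4.0
        pred = np.kron(approx, np.ones((2, 2)))  # pixel-replication upsample
        details.append(cur - pred)               # what the coarse level misses
        cur = approx
    return cur, details

def progressive_reconstruct(coarse, details):
    """Yield successively refined images, as a receiver would see them:
    coarsest detail layer first, finest last."""
    cur = coarse
    for d in reversed(details):
        cur = np.kron(cur, np.ones((2, 2))) + d
        yield cur

# Demo: an 8x8 image sent as a 2x2 thumbnail plus two refinement layers.
img = np.arange(64, dtype=float).reshape(8, 8)
coarse, details = build_pyramid(img, levels=2)
recons = list(progressive_reconstruct(coarse, details))
```

Interrupting after any refinement stage still leaves a usable lower-resolution image, which is the property the abstract exploits.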

    A Progressive Universal Noiseless Coder

    The authors combine pruned tree-structured vector quantization (pruned TSVQ) with Itoh's (1987) universal noiseless coder. By combining pruned TSVQ with universal noiseless coding, they benefit from the “successive approximation” capabilities of TSVQ, thereby allowing progressive transmission of images, while retaining the ability to noiselessly encode images of unknown statistics in a provably asymptotically optimal fashion. Noiseless compression results are comparable to Ziv-Lempel and arithmetic coding for both images and finely quantized Gaussian sources
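The "successive approximation" property of tree-structured VQ can be illustrated with a toy binary codeword tree: each transmitted bit of the path refines the reconstruction, so any prefix of the bit string decodes to a coarser codeword. The tree below is hypothetical hand-picked data, and the universal noiseless-coding stage of the paper is omitted entirely.

```python
import numpy as np

class TSVQNode:
    """Node of a tree-structured vector quantizer: a codeword plus children."""
    def __init__(self, codeword, children=None):
        self.codeword = np.asarray(codeword, dtype=float)
        self.children = children or []

def encode(root, vec, max_bits):
    """Descend the tree greedily toward the nearest child codeword,
    emitting one bit per level; stopping early gives a coarser code."""
    vec = np.asarray(vec, dtype=float)
    bits, node = [], root
    while node.children and len(bits) < max_bits:
        dists = [np.linalg.norm(vec - c.codeword) for c in node.children]
        b = int(np.argmin(dists))
        bits.append(b)
        node = node.children[b]
    return bits

def decode(root, bits):
    """Any prefix of the bit string decodes to some codeword in the tree."""
    node = root
    for b in bits:
        node = node.children[b]
    return node.codeword

# Hypothetical depth-2 tree over 2-D vectors in [0, 1]^2.
tree = TSVQNode([0.5, 0.5], [
    TSVQNode([0.25, 0.25], [TSVQNode([0.1, 0.1]), TSVQNode([0.4, 0.4])]),
    TSVQNode([0.75, 0.75], [TSVQNode([0.6, 0.6]), TSVQNode([0.9, 0.9])]),
])
target = np.array([0.85, 0.95])
bits = encode(tree, target, max_bits=2)
```

Decoding `bits[:1]` gives the one-bit approximation; decoding the full path gives a strictly closer codeword, which is what makes the scheme progressive.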

    ROI coding of volumetric medical images with application to visualisation


    High-performance compression of visual information - A tutorial review - Part I: Still Pictures

    Digital images have become an important source of information in the modern world of communication systems. In their raw form, digital images require a tremendous amount of memory. Many research efforts have been devoted to the problem of image compression in the last two decades. Two different compression categories must be distinguished: lossless and lossy. Lossless compression is achieved if no distortion is introduced in the coded image. Applications requiring this type of compression include medical imaging and satellite photography. For applications such as video telephony or multimedia applications, some loss of information is usually tolerated in exchange for a high compression ratio. In this two-part paper, the major building blocks of image coding schemes are overviewed. Part I covers still image coding, and Part II covers motion picture sequences. In this first part, still image coding schemes are classified into predictive, block transform, and multiresolution approaches. Predictive methods are suited to lossless and low-compression applications. Transform-based coding schemes achieve higher compression ratios for lossy compression but suffer from blocking artifacts at high compression ratios. Multiresolution approaches are suited for lossy as well as for lossless compression; at high lossy compression ratios, the typical artifact visible in the reconstructed images is the ringing effect. New applications in a multimedia environment drove the need for new functionalities in image coding schemes. For that purpose, second-generation coding techniques segment the image into semantically meaningful parts, and parts of these methods have been adapted to work on arbitrarily shaped regions. To add further functionality, such as progressive transmission of the information, specific quantization algorithms must be defined. A final step in the compression scheme is the codeword assignment. Finally, coding results are presented that compare state-of-the-art techniques for lossy and lossless compression. The different artifacts of each technique are highlighted and discussed, and the possibility of progressive transmission is illustrated.
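The block-transform category discussed above can be illustrated with a minimal 8x8 DCT coder sketch: each block is transformed with an orthonormal DCT-II matrix, high-frequency coefficients are discarded (a crude stand-in for quantization; real coders use zig-zag ordering and entropy coding), and the block is inverse-transformed. Truncating independently per block is exactly what produces blocking artifacts at block boundaries.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix: row k holds frequency-k basis vector."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0, :] *= 1 / np.sqrt(2)          # DC row scaling
    return M * np.sqrt(2.0 / n)        # orthonormal: M @ M.T == I

def block_dct_roundtrip(img, keep):
    """Per 8x8 block: forward 2-D DCT, keep only the top-left keep x keep
    (lowest-frequency) coefficients, inverse transform."""
    D = dct_matrix(8)
    out = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            blk = img[i:i + 8, j:j + 8].astype(float)
            C = D @ blk @ D.T                      # forward 2-D DCT
            mask = np.zeros((8, 8))
            mask[:keep, :keep] = 1                 # discard high frequencies
            out[i:i + 8, j:j + 8] = D.T @ (C * mask) @ D  # inverse
    return out

img = np.arange(256, dtype=float).reshape(16, 16)
```

Keeping all 8x8 coefficients reconstructs the image exactly (the transform is orthonormal); a small `keep` makes the coder lossy.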

    LOCMIC: LOw Complexity Multi-resolution Image Compression

    Image compression is a well-established and extensively researched field. Interest in it has been driven by rapid advances in imaging techniques and the many applications that use high-resolution images (e.g. medical, astronomical, and Internet applications). Image compression algorithms should not only give state-of-the-art performance; they should also provide features and functionalities such as progressive transmission. Often, a rough approximation (thumbnail) of an image is sufficient for the user to decide whether to continue the transmission or to abort it, which helps reduce time and bandwidth. This has motivated the development of multi-resolution image compression schemes. Existing multi-resolution schemes (e.g., the Multi-Level Progressive method) show high computational efficiency but generally lack compression performance. In this thesis, a LOw Complexity Multi-resolution Image Compression (LOCMIC) scheme based on the Hierarchical INTerpolation (HINT) framework is presented. Moreover, a novel integration of Just Noticeable Distortion (JND) perceptual coding with the HINT framework is proposed to achieve a visually lossless multi-resolution scheme. In addition, various prediction formulas, a context-based prediction correction model, and a multi-level Golomb parameter adaptation approach are investigated. The proposed LOCMIC (both the lossless and the visually lossless variants) improves compression performance. The lossless LOCMIC achieves a bit rate about 3% lower than LOCO-I, about 1% lower than JPEG 2000, 3% lower than SPIHT, and 2% lower than CALIC, while the perceptual LOCMIC achieves a bit rate about 4.7% lower than near-lossless JPEG-LS (at NEAR=2). Moreover, the decorrelation efficiency of LOCMIC in terms of entropy is 2.8% and 4.5% better than the MED predictor and conventional HINT, respectively.
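The HINT framework on which LOCMIC builds can be sketched in one dimension: keep every other sample as the coarse level, predict each remaining sample by interpolating its two kept neighbours, and store only the (typically small) integer residuals. This is a one-level illustrative sketch assuming even-length integer signals, not the thesis's multi-level scheme with context modelling and Golomb coding.

```python
import numpy as np

def hint_encode(x):
    """One level of hierarchical interpolation (HINT): keep even samples,
    predict each odd sample from its two even neighbours, store residuals."""
    x = np.asarray(x, dtype=int)
    coarse = x[0::2]
    odd = x[1::2]
    left = coarse[:len(odd)]
    # right neighbour of the last odd sample is clamped to the edge
    right = np.append(coarse[1:], coarse[-1])[:len(odd)]
    pred = (left + right) // 2          # integer interpolation
    return coarse, odd - pred           # residuals are what gets entropy-coded

def hint_decode(coarse, resid):
    """Mirror of hint_encode: recompute predictions, add residuals back."""
    left = coarse[:len(resid)]
    right = np.append(coarse[1:], coarse[-1])[:len(resid)]
    odd = (left + right) // 2 + resid
    out = np.empty(len(coarse) + len(resid), dtype=int)
    out[0::2] = coarse
    out[1::2] = odd
    return out

x = np.array([10, 12, 13, 15, 20, 18, 17, 16])
coarse, resid = hint_encode(x)
```

Because encoder and decoder compute identical integer predictions, the round trip is exactly lossless, while the residuals stay much smaller than the raw samples.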