
    Efficient Encoding of Wireless Capsule Endoscopy Images Using Direct Compression of Colour Filter Array Images

    Since its invention in 2001, wireless capsule endoscopy (WCE) has played an important role in the endoscopic examination of the gastrointestinal tract. During this period, WCE has undergone tremendous technological advances, making it the first-line modality for small-bowel diseases ranging from bleeding to cancer. Current research efforts focus on extending WCE with functionality such as drug delivery, biopsy, and active locomotion. Two critical prerequisites for integrating these functionalities into WCE are image quality enhancement and power consumption reduction. An efficient image compression solution is required to retain the highest image quality while reducing transmission power. The problem is made more challenging by the fact that image sensors in WCE capture images in the Bayer colour filter array (CFA) format, for which standard compression engines provide inferior compression performance. The focus of this thesis is to design an optimized image compression pipeline to encode capsule endoscopic (CE) images efficiently in CFA format. To this end, this thesis proposes two image compression schemes. First, a lossless image compression algorithm is proposed, consisting of an optimum reversible colour transformation, a low-complexity prediction model, a corner clipping mechanism and a single-context adaptive Golomb-Rice entropy encoder. Deriving the colour transformation that provides the best performance for a given prediction model is treated as an optimization problem. The low-complexity prediction model works in raster-order fashion and requires no buffer memory. The colour transformation yields lower inter-colour correlation and allows efficient independent encoding of the colour components. The second compression scheme is a lossy compression algorithm with an integer discrete cosine transform at its core. Using statistics obtained from a large dataset of CE images, an optimum colour transformation is derived using principal component analysis (PCA). The transformed coefficients are quantized using an optimized quantization table designed to discard medically irrelevant information. A fast demosaicking algorithm is developed to reconstruct the colour image from the lossy CFA image in the decoder. Extensive experiments and comparisons with state-of-the-art lossless image compression methods establish the superiority of the proposed methods as simple and efficient image compression algorithms. The lossless algorithm can transmit images without loss within the available bandwidth, while performance evaluation of the lossy algorithm indicates that it can deliver high-quality images at low transmission power and low computational cost.
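
    As a concrete illustration of the entropy-coding stage named above, the sketch below maps signed prediction residuals to non-negative integers and Golomb-Rice codes them in raster order. It assumes a simple left-neighbour DPCM predictor and a fixed Rice parameter; the thesis's actual predictor, context model and parameter adaptation differ.

```python
# Minimal sketch (not the thesis implementation): Golomb-Rice coding of
# DPCM prediction residuals. Predictor, residual mapping and the fixed
# Rice parameter k are illustrative assumptions.

def map_residual(e: int) -> int:
    """Map a signed prediction error to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(value: int, k: int) -> str:
    """Unary-coded quotient followed by a k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    remainder = format(r, f"0{k}b") if k > 0 else ""
    return "1" * q + "0" + remainder

def encode_scanline(samples, k: int = 2) -> str:
    """Predict each sample from its left neighbour, then Rice-code the mapped residual."""
    bits, prev = [], 0
    for s in samples:
        bits.append(rice_encode(map_residual(s - prev), k))
        prev = s
    return "".join(bits)

print(encode_scanline([52, 55, 61, 59, 60]))
```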

    Image Compression Techniques: A Survey in Lossless and Lossy algorithms

    The bandwidth of communication networks has increased continuously as a result of technological advances. However, the introduction of new services and the expansion of existing ones have resulted in even higher demand for bandwidth. This explains the many efforts currently being invested in the area of data compression. The primary goal of these works is to develop techniques for coding information sources such as speech, images and video so as to reduce the number of bits required to represent a source without significantly degrading its quality. With the large increase in the generation of digital image data, there has been a correspondingly large increase in research activity in the field of image compression. The goal is to represent an image in the fewest bits possible without losing the essential information content. Images carry three main types of information: redundant, irrelevant, and useful. Redundant information is the deterministic part of the information, which can be reproduced without loss from other information contained in the image. Irrelevant information is the part of the information whose details lie beyond the limit of perceptual significance (i.e., psychovisual redundancy). Useful information, on the other hand, is the part that is neither redundant nor irrelevant. Humans usually observe decompressed images; therefore, image fidelity is subject to the capabilities and limitations of the Human Visual System. This paper provides a survey of various image compression techniques, their limitations and compression rates, and highlights current research in medical image compression.
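
    To make the notion of redundancy a little more tangible, the following sketch (my illustration, not from the survey) estimates the zeroth-order entropy of a greyscale image, a rough lower bound in bits per pixel for a memoryless lossless coder; the gap between the raw 8 bpp and this figure is one simple measure of redundancy. The file name is hypothetical.

```python
# Illustrative sketch: zeroth-order entropy of a greyscale image as a rough
# lower bound on memoryless lossless coding. "example.png" is a placeholder.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("example.png").convert("L"))
counts = np.bincount(img.ravel(), minlength=256)
p = counts[counts > 0] / img.size
entropy_bpp = -np.sum(p * np.log2(p))
print(f"zeroth-order entropy: {entropy_bpp:.2f} bits/pixel (vs 8 bpp raw)")
```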

    Digital watermarking: applicability for developing trust in medical imaging workflows, state of the art review

    Medical images can be intentionally or unintentionally manipulated both within the secure medical system environment and outside it, as images are viewed, extracted and transmitted. Many organisations have invested heavily in Picture Archiving and Communication Systems (PACS), which are intended to facilitate data security. However, it is common for images and records to be extracted from these systems for a wide range of accepted practices, such as an external second opinion, transmission to another care provider, or a patient data request. Therefore, confirming trust within medical imaging workflows has become essential. Digital watermarking has been recognised as a promising approach for ensuring the authenticity and integrity of medical images. Authenticity refers to the ability to identify the origin of the information and prove that the data relates to the right patient. Integrity means the capacity to ensure that the information has not been altered without authorisation. This paper presents a survey of medical image watermarking and offers a clear overview for interested researchers by analysing the robustness and limitations of existing approaches. This includes studying the security levels of medical images within PACS systems, clarifying the requirements of medical image watermarking, and defining the purposes of watermarking approaches when applied to medical images.

    A low complexity image compression algorithm for Bayer color filter array

    Digital images in their raw form require an excessive amount of storage capacity. Image compression is a process of reducing the cost of storing and transmitting image data: the compression algorithm reduces the file size so that it requires less storage or transmission bandwidth. This work presents a new color transformation and compression algorithm for Bayer color filter array (CFA) images. In a full color image, each pixel contains R, G, and B components. A CFA image contains single-channel information at each pixel position, so demosaicking is required to construct a full color image. For each pixel, demosaicking reconstructs the two missing color components from information in neighbouring pixels; after demosaicking, each pixel contains R, G, and B information and a full color image is obtained. Conventional CFA compression occurs after demosaicking. However, the Bayer CFA image can be compressed before demosaicking, which is called the compression-first method, and the algorithm proposed in this research follows this compression-first, or direct compression, approach. The compression-first method applies the compression algorithm directly to the CFA data and shifts demosaicking to the other end of the transmission and storage process. Its advantage is that it requires one third of the per-pixel transmission bandwidth of conventional compression. Direct compression of CFA data, however, gives rise to spatial redundancy, artifacts, and false high frequencies. The process requires a color transformation with less correlation among the color components than the Bayer RGB color space. This work analyzes the correlation coefficient, standard deviation, entropy, and intensity range of the Bayer RGB color components. The analysis yields two efficient color transformations; the proposed color components show lower correlation coefficients than the Bayer RGB color components. The color transformations reduce both the spatial and spectral redundancies of the Bayer CFA image. After color transformation, the components are independently encoded using differential pulse-code modulation (DPCM) in raster-order fashion. The DPCM residual error is mapped to a positive integer for the adaptive Golomb-Rice code, and the compression algorithm uses both adaptive Golomb-Rice and unary coding to generate the bit stream. Extensive simulation analysis is performed on both simulated CFA and real CFA datasets, and is extended to wireless capsule endoscopic (WCE) images; the compression algorithm is also realized with a simulated WCE CFA dataset. The results show that the proposed algorithm requires fewer bits per pixel than conventional CFA compression, and it also outperforms recent CFA compression algorithms on both real and simulated CFA datasets.
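
    A minimal sketch of the kind of decorrelation step described above (my illustration; the transformations derived in this work differ): an RGGB Bayer mosaic is split into its four sub-channels, and the red and blue planes are coded as differences from the co-located green planes, reducing inter-colour correlation before each plane is encoded independently.

```python
# Illustrative decorrelation of an RGGB Bayer mosaic, not the paper's transform.
import numpy as np

def bayer_split(cfa: np.ndarray):
    """Return the R, G1, G2, B sub-sampled planes of an RGGB mosaic."""
    return cfa[0::2, 0::2], cfa[0::2, 1::2], cfa[1::2, 0::2], cfa[1::2, 1::2]

def decorrelate(cfa: np.ndarray):
    """Keep the green planes; code R and B as differences from co-located green."""
    r, g1, g2, b = (p.astype(np.int32) for p in bayer_split(cfa))
    return g1, g2, r - g1, b - g2   # difference planes have lower entropy on natural images

cfa = np.random.randint(0, 256, (8, 8), dtype=np.uint8)  # stand-in mosaic
planes = decorrelate(cfa)
```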

    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    Accelerated information processing, communication, and storage are major requirements of the big-data era. With the extensive rise in data availability, easy information acquisition, and growing data rates, a critical challenge emerges in handling data efficiently. Even with advanced hardware developments and the availability of multiple Graphics Processing Units (GPUs), there is still strong demand to utilise these technologies effectively. Healthcare systems are one of the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with ever-increasing storage requirements. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnostic accuracy, it is strongly encouraged to guarantee exact reconstruction with no loss in quality, which is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks with state-of-the-art results, including data compression, this opens tremendous opportunities for contributions. While considerable efforts have been made to address lossy performance using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.
    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Using such 3D local sampling information efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed neural-network-based data predictor is trained to minimise the differences with the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.
    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16 bit-depth). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood while utilising samples taken from various scanning settings. We evaluate our proposed MedZip models on compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities losslessly, compared to other state-of-the-art lossless compression standards.
    This work also investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for compressing 3D medical images (16 bit-depth) losslessly. The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments was also proposed, allowing models to run in parallel without a significant drop in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).
    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
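
    The sketch below illustrates the many-to-one prediction paradigm described above: an LSTM that predicts a target voxel from a sequence of its spatially neighbouring voxels, whose residuals would then be entropy coded (e.g. with arithmetic coding) for lossless reconstruction. PyTorch, the layer sizes and the neighbourhood length are my assumptions, not the thesis's architecture.

```python
# Illustrative many-to-one voxel predictor; details differ from the thesis.
import torch
import torch.nn as nn

class VoxelPredictor(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, neighbours: torch.Tensor) -> torch.Tensor:
        # neighbours: (batch, sequence_length, 1) normalised voxel intensities
        out, _ = self.lstm(neighbours)
        return self.head(out[:, -1, :])          # predicted target voxel

model = VoxelPredictor()
context = torch.rand(8, 32, 1)                   # 8 target voxels, 32-sample neighbourhoods
prediction = model(context)                      # shape (8, 1)
loss = nn.functional.mse_loss(prediction, torch.rand(8, 1))
```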

    3D Reconstruction of Small Solar System Bodies using Rendered and Compressed Images

    Synthetic image generation for Small Solar System Bodies, their three-dimensional reconstruction, and the influence of compression are becoming important study topics because of the advent of small spacecraft in deep space missions. Most of these missions are fly-by scenarios, for example the Comet Interceptor mission. Due to the limited data budgets of small satellite missions, maximising scientific return requires investigating the effects of lossy compression. A preliminary simulation pipeline had been developed that uses physics-based rendering in combination with procedural terrain generation to overcome the limitations of currently used image rendering methods such as the Hapke model. The rendered Small Solar System Body images are combined with a star background and photometrically calibrated to represent realistic imagery. Subsequently, a Structure-from-Motion pipeline reconstructs three-dimensional models from the rendered images. In this work, the preliminary simulation pipeline was developed further into the Space Imaging Simulator for Proximity Operations software package, and a compression package was added. The compression package was used to investigate the effects of lossy compression on the reconstructed models and the data reduction achievable with lossy compression relative to lossless compression. Several scenarios with fly-by distances ranging from 50 km to 400 km and body sizes of 1 km and 10 km were simulated and compressed losslessly with PNG and at several quality levels of lossy compression with JPEG 2000. It was found that low compression ratios introduce artefacts resembling random noise, while high compression ratios remove surface features. The random-noise artefacts introduced by low compression ratios frequently increased the number of vertices and faces of the reconstructed three-dimensional model.
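
    A hedged sketch of the kind of comparison described above (not the study's pipeline): saving a rendered image losslessly as PNG and at a few JPEG 2000 compression ratios, then comparing file sizes. Pillow's JPEG 2000 support (which requires an OpenJPEG-enabled build) and the input file name are assumptions.

```python
# Illustrative lossless PNG vs lossy JPEG 2000 size comparison.
import os
from PIL import Image

img = Image.open("rendered_body.png").convert("RGB")   # hypothetical rendered image
img.save("lossless.png", optimize=True)                # lossless baseline
for ratio in (10, 40, 160):                            # target compression ratios
    img.save(f"lossy_{ratio}.jp2", quality_mode="rates", quality_layers=[ratio])
    print(ratio, os.path.getsize(f"lossy_{ratio}.jp2"))
print("png", os.path.getsize("lossless.png"))
```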

    Digital watermarking in medical images

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University, 05/12/2005. This thesis addresses the authenticity and integrity of medical images using watermarking. Hospital Information Systems (HIS), Radiology Information Systems (RIS) and Picture Archiving and Communication Systems (PACS) now form the information infrastructure of today's healthcare, providing new ways to store, access and distribute medical data that also involve some security risk. Watermarking can be seen as an additional tool for security measures. As the medical tradition is very strict with the quality of biomedical images, the watermarking method must be reversible or, if not, a Region of Interest (ROI) needs to be defined and left intact. Watermarking should also serve as an integrity control and should be able to authenticate the medical image. Three watermarking techniques are proposed. First, Strict Authentication Watermarking (SAW) embeds the digital signature of the image in the ROI, and the image can be reverted back to its original values bit by bit if required. Second, Strict Authentication Watermarking with JPEG Compression (SAW-JPEG) uses the same principle as SAW but is able to survive some degree of JPEG compression. Third, Authentication Watermarking with Tamper Detection and Recovery (AW-TDR) is able to localise tampering whilst simultaneously reconstructing the original image.
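
    As a rough illustration of the strict-authentication idea (my sketch, not the thesis's SAW scheme): a cryptographic hash of the image, computed with the embedding bits cleared, is written into the least significant bits of a fixed region so that any later alteration can be detected by recomputing the hash. The region size and hash choice are arbitrary assumptions.

```python
# Illustrative hash-in-LSB authentication watermark; not the thesis's method.
import hashlib
import numpy as np

REGION = (slice(0, 16), slice(0, 16))    # 16 x 16 pixels = 256 embedding slots

def embed_hash(img: np.ndarray) -> np.ndarray:
    """Hash the image with the region's LSBs zeroed, then write the hash bits into those LSBs."""
    marked = img.copy()
    marked[REGION] &= 0xFE                                    # clear LSBs of the region
    digest = hashlib.sha256(marked.tobytes()).digest()        # 256-bit signature
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    marked[REGION] |= bits.reshape(16, 16)
    return marked

def verify(img: np.ndarray) -> bool:
    """Recompute the hash over the cleared image and compare with the embedded bits."""
    extracted = (img[REGION] & 1).ravel()
    cleared = img.copy()
    cleared[REGION] &= 0xFE
    expected = np.unpackbits(np.frombuffer(hashlib.sha256(cleared.tobytes()).digest(), dtype=np.uint8))
    return bool(np.array_equal(extracted, expected))

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
assert verify(embed_hash(image))         # any later pixel change breaks verification
```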

    Image coding for monochrome and colour images.

    This work is an investigation of different algorithms to implement a lossy compression scheme, with special emphasis on quantization techniques. A CODEC (coder/decoder) based on a scheme proposed for standardization by the group known as JPEG (Joint Photographic Experts Group) was developed. Finally, a new decoding approach was developed, based on modifying the concept of transition tables used in compilers to break a binary string into variable-length codes. The JPEG algorithm works in sequential mode by dividing the image into small blocks of 8 x 8 pixels. Each block is compressed separately by processing it through an 8 x 8 Discrete Cosine Transform, quantization, run-length and Huffman coding. The two-dimensional DCT was implemented by expanding a fast 1-D DCT into a 2-D DCT using the row-column method. Quantization is carried out by dividing the transformed block by the JPEG scaling matrix and rounding the results to the nearest integer; this was found to work well for a large number of images. Four static Huffman code tables are used to convert the quantized DCT coefficients into variable-length codes for both monochrome and colour images. The algorithm is capable of obtaining varying compression ratios by simply changing the scaling factor of the JPEG quantization matrix. The bit rate achieved was in the range of 1 bit/pixel for images indistinguishable from the original. Higher compression ratios can be obtained at the cost of lower image quality. Dept. of Electrical and Computer Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis1993 .S465. Source: Masters Abstracts International, Volume: 32-02, page: 0691. Adviser: M. A. Sid-Ahmed. Thesis (M.A.Sc.)--University of Windsor (Canada), 1993.
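
    The transform and quantization steps described above follow standard JPEG practice, sketched below for a single 8 x 8 block: a row-column 2-D DCT (two passes of a 1-D DCT) divided element-wise by a scaled quantization matrix and rounded. The code uses the standard JPEG luminance table; it is an illustration of the technique, not the thesis's implementation.

```python
# Illustrative 8x8 DCT + quantization in the JPEG style.
import numpy as np
from scipy.fftpack import dct

# Standard JPEG luminance quantization matrix (Annex K).
Q = np.array([
    [16, 11, 10, 16, 24, 40, 51, 61],
    [12, 12, 14, 19, 26, 58, 60, 55],
    [14, 13, 16, 24, 40, 57, 69, 56],
    [14, 17, 22, 29, 51, 87, 80, 62],
    [18, 22, 37, 56, 68, 109, 103, 77],
    [24, 35, 55, 64, 81, 104, 113, 92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103, 99]])

def dct2(block: np.ndarray) -> np.ndarray:
    """Row-column 2-D DCT-II built from two 1-D DCT passes."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def quantize(block: np.ndarray, scale: float = 1.0) -> np.ndarray:
    """Level-shift, transform, divide by the scaled matrix, round to integers."""
    return np.round(dct2(block - 128) / (Q * scale)).astype(np.int32)

block = np.random.randint(0, 256, (8, 8)).astype(np.float64)  # stand-in pixel block
coeffs = quantize(block, scale=1.0)   # larger scale -> higher compression, lower quality
```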

    Open data and digital morphology

    Over the past two decades, the development of methods for visualizing and analysing specimens digitally, in three and even four dimensions, has transformed the study of living and fossil organisms. However, the initial promise that the widespread application of such methods would facilitate access to the underlying digital data has not been fully realised. The underlying datasets for many published studies are not readily or freely available, introducing a barrier to verification, reproducibility and the reuse of data. There is currently no agreement or policy on the amount and type of data that should be made available alongside studies that use, and in some cases are wholly reliant on, digital morphology. Here, we propose a set of recommendations for minimum standards and additional best practice for 3D digital data publication, and review the issues around data storage, management and accessibility.