7 research outputs found

    JPEG2000 ROI coding through component priority for digital mammography

    Get PDF
    Region Of Interest (ROI) coding is a prominent feature of some image coding systems, aimed at prioritizing specific areas of the image through the construction of a codestream that, decoded at increasing bit-rates, recovers the ROI first and at higher quality than the rest of the image. JPEG2000 is a wavelet-based coding system supported by the Digital Imaging and Communications in Medicine (DICOM) standard. Among other features, JPEG2000 provides lossy-to-lossless compression and ROI coding, which are especially relevant to the medical community. However, because the ROI coding methods supported by JPEG2000 that guarantee lossless coding are not designed to prioritize ROIs with high accuracy, they have not been adopted by the medical community.

    This paper introduces a ROI coding method that can prioritize multiple ROIs at different priorities while guaranteeing lossy-to-lossless coding. The proposed ROI Coding Through Component Prioritization (ROITCOP) method combines rate-distortion optimization techniques with a simple yet effective ROI allocation strategy that exploits the multi-component support of the JPEG2000 codestream. The main insight in ROITCOP is the allocation of each ROI to its own component. Experimental results indicate that this allocation strategy does not penalize coding performance whilst achieving an unprecedented degree of accuracy in delimiting ROIs.

    The proposed ROITCOP method maintains JPEG2000 compliance, thus easing its use in medical centers to share images. This paper analyzes in detail the application of ROITCOP to mammograms, where the ROIs are identified by computer-aided diagnosis. Extensive experimental tests against various ROI coding methods suggest that ROITCOP achieves enhanced coding performance.
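
    As a rough illustration of the allocation idea named in the abstract -- each ROI gets its own codestream component so that a multi-component JPEG2000 encoder can prioritize it independently -- here is a minimal NumPy sketch. The function name and the stack layout are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def allocate_rois_to_components(image, roi_masks):
    """Illustrative sketch of the ROITCOP idea: each ROI is placed in its
    own component of a multi-component image so that a standard JPEG2000
    encoder can prioritize components independently.
    `image` is a 2-D array; `roi_masks` is a list of boolean arrays of the
    same shape, one per ROI. Names and layout are assumptions."""
    background = image.copy()
    components = []
    for mask in roi_masks:
        comp = np.zeros_like(image)
        comp[mask] = image[mask]       # ROI pixels go to their own component
        background[mask] = 0           # and are removed from the background
        components.append(comp)
    components.append(background)      # lowest-priority component last
    return np.stack(components)        # shape: (n_rois + 1, H, W)

# Example: two synthetic ROIs in a 512x512 mammogram-like image
img = np.random.randint(0, 4096, (512, 512), dtype=np.uint16)
m1 = np.zeros((512, 512), bool); m1[100:160, 200:260] = True
m2 = np.zeros((512, 512), bool); m2[300:380, 100:180] = True
stack = allocate_rois_to_components(img, [m1, m2])
print(stack.shape)  # (3, 512, 512)
```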

    Development of Novel Image Compression Algorithms for Portable Multimedia Applications

    Get PDF
    Portable multimedia devices such as digital cameras, mobile devices, personal digital assistants (PDAs), etc. have limited memory, battery life and processing power. Real-time processing and transmission using these devices requires image compression algorithms that compress efficiently with reduced complexity. Due to limited resources, it is not always possible to implement the best algorithms inside these devices. In uncompressed form, image data occupies an unreasonably large space; however, it contains a significant amount of statistical and visual redundancy, so the storage space used can be efficiently reduced by compression. In this thesis, some novel low-complexity and embedded image compression algorithms are developed that are especially suitable for low bit-rate image compression on these devices.

    Despite the rapid progress in Internet and multimedia technology, demand for data storage and data transmission bandwidth continues to outstrip the capabilities of available technology. Browsing images from image data sets over the Internet with these devices requires fast encoding and decoding speed with good rate-distortion performance. With the progressive picture build-up of wavelet-based coded images, recent multimedia applications demand good-quality images at the early stages of transmission. This is particularly important if the image is browsed over wireless links, where limited channel capacity, storage and computation are the deciding parameters. Unfortunately, the performance of the JPEG codec degrades at low bit rates because of the underlying block-based DCT transform. Although wavelet-based codecs provide substantial improvements in progressive picture quality at lower bit rates, these coders do not fully exploit the coding performance at lower bit rates. It is evident from the statistics of transformed images that very few significant coefficients have magnitude higher than the early thresholds. Wavelet-based codecs code a zero for each insignificant subband as they move from the coarsest to the finest subbands. It is also demonstrated that there can be six to seven bit-plane passes in which wavelet coders encode many zeros, as many subbands are likely to be insignificant with respect to the early thresholds. Bits indicating the insignificance of a coefficient or subband are required, but they do not code information that reduces the distortion of the reconstructed image. This leads to zero distortion reduction for a non-zero increase in bit-rate. Another problem associated with wavelet-based coders such as Set Partitioning In Hierarchical Trees (SPIHT), Set Partitioning Embedded bloCK (SPECK) and Wavelet Block-Tree Coding (WBTC) is their use of auxiliary lists. The size of the list data structures increases exponentially as more and more elements are added, removed or moved in each bit-plane pass. This increases the dynamic memory requirement of the codec, which is an inefficient feature for hardware implementations. Later, listless variants of SPIHT and SPECK, e.g. No-List SPIHT (NLS) and Listless SPECK (LSK) respectively, were developed. However, these algorithms have rate-distortion performance similar to the list-based coders. An improved LSK (ILSK) algorithm is proposed in this dissertation that improves the low bit-rate performance of LSK by encoding far fewer symbols (i.e. zeros) for the many insignificant subbands.
    Further, ILSK is combined with a block-based transform known as the discrete Tchebichef transform (DTT). The proposed new coder is named Hierarchical Listless DTT (HLDTT). DTT is chosen because its energy compaction property is similar to that of the discrete cosine transform (DCT). It is demonstrated that images decoded using HLDTT have better visual quality (i.e., mean structural similarity) than images decoded using DCT-based embedded coders at most bit rates. The ILSK algorithm is also combined with a lifting-based wavelet transform to show its superiority over JPEG2000 at lower rates in terms of peak signal-to-noise ratio (PSNR). A fully scalable and random-access-decodable listless algorithm, based on lifting-based ILSK, is also developed. The proposed algorithm, named Scalable Listless Embedded Block Partitioning (S-LEBP), generates a bit stream that offers increasing signal-to-noise ratio and spatial resolution. These are very useful features for transmitting images over a heterogeneous network that optimally serves each user according to available bandwidth and computing needs. Random-access decoding is a very useful feature for extracting/manipulating a certain area of an image with minimal decoding work. The idea used in ILSK is also extended to encode and decode color images. The proposed algorithm for coding color images is named the Color Listless Embedded Block Partitioning (CLEBP) algorithm. The coding efficiency of CLEBP is compared with Color SPIHT (CSPIHT) and the color variant of the WBTC algorithm. The simulation results show that CLEBP exhibits a significant PSNR improvement over the latter two algorithms on various types of images. Although many modifications to NLS and LSK have been made, a listless modification of the WBTC algorithm has not been reported in the literature. Therefore, a listless variant of WBTC (named LBTC) is proposed. LBTC not only reduces the memory requirement by 88-89% but also increases the encoding and decoding speed, while preserving the rate-distortion performance. Further, the combination of DCT with LBTC (named DCT-LBT) and of DTT with LBTC (named Hierarchical Listless DTT, HLBT-DTT) are compared with some state-of-the-art DCT-based embedded coders. It is also shown that the proposed DCT-LBT and HLBT-DTT achieve significant PSNR improvements over almost all the embedded coders at most bit rates. In some multimedia applications, e.g., digital cameras, camcorders, etc., the images always need to have a fixed, pre-determined high quality, and the extra effort required for quality scalability is wasted. Therefore, non-embedded algorithms are best suited for these applications. The proposed algorithms can be made non-embedded by encoding a fixed set of bit planes at a time. In addition, a sparse orthogonal transform matrix is proposed, which can be integrated into a JPEG baseline coder. The proposed matrix promises a substantial reduction in hardware complexity with a marginal loss of image quality over a considerable range of bit rates compared with block-based DCT or integer DCT.
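
    The bit-plane behaviour described here -- many early passes in which nearly all coefficients are insignificant, so a coder spends bits without reducing distortion -- can be made concrete with a small sketch. This is a generic significance-pass illustration in Python/NumPy, not ILSK or any specific published codec:

```python
import numpy as np

def bitplane_significance_passes(coeffs, num_planes=None):
    """Minimal sketch of bit-plane coding: coefficients are tested against
    decreasing power-of-two thresholds. At early (large) thresholds almost
    everything is insignificant, which is where listless/block coders try
    to avoid emitting many zeros."""
    mags = np.abs(coeffs)
    n = int(np.floor(np.log2(mags.max()))) if mags.max() > 0 else 0
    if num_planes is None:
        num_planes = n + 1
    significant = np.zeros(coeffs.shape, dtype=bool)
    for plane in range(n, n - num_planes, -1):
        threshold = 2 ** plane
        newly = (mags >= threshold) & ~significant   # significance test
        significant |= newly
        print(f"plane {plane}: threshold={threshold}, "
              f"newly significant={newly.sum()}, "
              f"still insignificant={(~significant).sum()}")

# Example on a toy 'transformed' block: most energy in a few coefficients
block = np.zeros((8, 8)); block[0, 0] = 500; block[0, 1] = 130; block[1, 0] = 60
bitplane_significance_passes(block)
```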

    Remote Sensing Data Compression

    Get PDF
    A huge amount of data is acquired nowadays by different remote sensing systems installed on satellites, aircraft, and UAVs. The acquired data then have to be transferred to image processing centres, stored and/or delivered to customers. In restricted scenarios, data compression is strongly desired or necessary. A wide diversity of coding methods can be used, depending on the requirements and their priority. In addition, the types and properties of images differ a lot, so practical implementation aspects have to be taken into account. The Special Issue paper collection on which this book is based touches on all of the aforementioned items to some degree, giving the reader an opportunity to learn about recent developments and research directions in the field of image compression. In particular, lossless and near-lossless compression of multi- and hyperspectral images remains a current topic, since such images constitute data arrays of extremely large size with rich information that can be retrieved from them for various applications. Another important aspect is the impact of lossy compression on image classification and segmentation, where a reasonable compromise between the characteristics of compression and the final tasks of data processing has to be achieved. The problems of data transmission from UAV-based acquisition platforms, as well as the use of FPGAs and neural networks, have become very important. Finally, attempts to apply compressive sensing approaches in remote sensing image processing with positive outcomes are observed. We hope that readers will find our book useful and interesting.

    Contributions for post processing of wavelet transform with SPIHT ROI coding and application in the transmission of images

    Get PDF
    Advisor: Yuzo Iano. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
    Abstract: Lossy image compression is an area of great importance nowadays. This is due to the fact that compression techniques allow an image to be represented efficiently, thereby reducing the space required for storage or a subsequent transmission of the image through a communications channel. In particular, the SPIHT (Set Partitioning In Hierarchical Trees) algorithm, widely used in image compression, is simple to implement and can be used in applications where low complexity is required. This study proposes an image compression scheme using a customized storage of the DWT (Discrete Wavelet Transform), flexible ROI (Region Of Interest) coding, and image compression using the SPIHT algorithm. The application consists in the transmission of the corresponding data using turbo coding. The customized DWT storage aims to make better use of memory, reducing the amount required by the SPIHT algorithm. Generic ROI coding is applied at a high level of the DWT decomposition; at this point, the SPIHT algorithm serves to emphasize the regions of interest and transmit them with priority. To keep the processing cost low, the data to be transmitted are encoded with a turbo convolutional scheme, since this scheme is simple to implement on the encoding side. The simulation is implemented in separate, reusable modules for this research. The simulations and analysis show that the proposed scheme decreases the amount of memory used as well as the computational cost for sending images in applications such as satellite image transmission, broadcasting and other media. Doctorate in Telecommunications and Telematics; Doctor of Electrical Engineering.
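
    The ROI step sketched in the abstract -- emphasizing regions of interest at a high DWT level so that SPIHT transmits them first -- can be illustrated as coefficient scaling. A minimal sketch using PyWavelets; the uniform shift and the mask resampling are assumptions, not the thesis' exact scheme:

```python
import numpy as np
import pywt  # PyWavelets

def downsample_mask(mask, shape):
    """Nearest-neighbour resampling of a boolean ROI mask to a subband size."""
    ys = np.linspace(0, mask.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, shape[1]).astype(int)
    return mask[np.ix_(ys, xs)]

def emphasize_roi(image, roi_mask, levels=3, shift=4):
    """Scale up (left-shift) wavelet coefficients inside the ROI so that a
    bit-plane coder such as SPIHT reaches them at earlier, larger
    thresholds and hence transmits them first. Illustrative only."""
    coeffs = pywt.wavedec2(image.astype(float), 'bior4.4', level=levels)
    cA = coeffs[0]
    cA[downsample_mask(roi_mask, cA.shape)] *= 2 ** shift
    scaled = [cA]
    for (cH, cV, cD) in coeffs[1:]:
        m = downsample_mask(roi_mask, cH.shape)
        scaled.append(tuple(band * np.where(m, 2 ** shift, 1.0)
                            for band in (cH, cV, cD)))
    return scaled  # ready to hand to a SPIHT-style bit-plane coder

img = np.random.rand(256, 256)
roi = np.zeros((256, 256), bool); roi[64:128, 64:128] = True
coeffs = emphasize_roi(img, roi)
```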

    On the design of fast and efficient wavelet image coders with reduced memory usage

    Full text link
    Image compression is of great importance in multimedia systems and applications because it drastically reduces bandwidth requirements for transmission and memory requirements for storage. Although earlier standards for image compression were based on the Discrete Cosine Transform (DCT), a more recently developed mathematical technique, the Discrete Wavelet Transform (DWT), has been found to be more efficient for image coding. Despite improvements in compression efficiency, wavelet image coders significantly increase memory usage and complexity compared with DCT-based coders. A major reason for the high memory requirements is that the usual algorithm to compute the wavelet transform requires the entire image to be in memory. Although some proposals reduce memory usage, they present problems that hinder their implementation. In addition, some wavelet image coders, like SPIHT (which has become a benchmark for wavelet coding), always need to hold the entire image in memory. Regarding the complexity of the coders, SPIHT can be considered quite complex because it performs bit-plane coding with multiple image scans. The wavelet-based JPEG 2000 standard is more complex still because it improves coding efficiency through time-consuming methods, such as an iterative optimization algorithm based on the Lagrange multiplier method, and high-order context modeling. In this thesis, we aim to reduce memory usage and complexity in wavelet-based image coding while preserving compression efficiency. To this end, a run-length encoder and a tree-based wavelet encoder are proposed. In addition, a new algorithm to efficiently compute the wavelet transform is presented. This algorithm achieves low memory consumption using line-by-line processing, and it employs recursion to automatically set the order in which the wavelet transform is computed, solving some synchronization problems that have not been tackled by previous proposals.
    Oliver Gil, J. S. (2006). On the design of fast and efficient wavelet image coders with reduced memory usage [unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/1826
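
    The line-by-line transform mentioned here can be illustrated by its per-line building block. Below is a minimal integer 5/3 (LeGall) lifting step over a single row, a standard formulation rather than the thesis' actual algorithm; a line-based 2-D transform would apply it per row while buffering only a few filtered rows for the column pass:

```python
import numpy as np

def dwt53_line(x):
    """One forward pass of the integer LeGall 5/3 lifting transform over a
    single image line, with symmetric boundary extension. Returns the
    low-pass (smooth) and high-pass (detail) halves of the line."""
    x = np.asarray(x, dtype=np.int64)
    n = len(x)
    def px(i):                       # symmetric boundary extension
        if i < 0:  return x[-i]
        if i >= n: return x[2 * n - 2 - i]
        return x[i]
    # Predict step: detail = odd - floor((left even + right even) / 2)
    d = np.array([px(2*i + 1) - ((px(2*i) + px(2*i + 2)) >> 1)
                  for i in range(n // 2)])
    def pd(i):                       # clamped detail lookup at boundaries
        return d[min(max(i, 0), len(d) - 1)]
    # Update step: smooth = even + floor((left detail + right detail + 2)/4)
    s = np.array([px(2*i) + ((pd(i - 1) + pd(i) + 2) >> 2)
                  for i in range(n // 2)])
    return s, d

lo, hi = dwt53_line(np.arange(16))
print(lo, hi)  # a linear ramp yields near-zero detail coefficients
```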

    Rate scalable image compression in the wavelet domain

    Get PDF
    This thesis explores image compression in the wavelet transform domain, considering progressive compression based on bit-plane coding. The first part of the thesis investigates scalar quantisation techniques for multidimensional images such as colour and multispectral images. Embedded coders such as SPIHT and SPECK are known to be very simple and efficient algorithms for compression in the wavelet domain. However, these algorithms require the use of lists to keep track of the partitioning process, and such lists involve a high memory requirement during the encoding process. A listless approach has been proposed for multispectral image compression in order to reduce the working memory required. The earlier listless coders are extended into a three-dimensional coder so that redundancy in the spectral domain can be exploited. The listless implementation requires a fixed memory of 4 bits per pixel to represent the state of each transformed coefficient. The state is updated during coding based on tests of significance. Spectral redundancies are exploited to improve the performance of the coder by modifying its scanning rules and the initial marker/state. For colour images, this is done by conducting a joint significance test for the chrominance planes. In this way, the similarities between the chrominance planes can be exploited during the coding process. Fixed-memory listless methods that exploit spectral redundancies enable efficient coding while maintaining rate scalability and progressive transmission.
    The second part of the thesis addresses image compression using directional filters in the wavelet domain. A directional filter is expected to improve the retention of edge and curve information during compression. Current implementations of hybrid wavelet and directional (HWD) filters improve the contour representation of compressed images, but suffer from the pseudo-Gibbs phenomenon in the smooth regions of the images. A different approach to directional filters in the wavelet transform is proposed to remove such artifacts while maintaining the ability to preserve contours and texture. Implementation with grayscale images shows improvements in terms of distortion rates and structural similarity, especially in images with contours. The proposed transform manages to preserve the directional capability without pseudo-Gibbs artifacts and at the same time reduces the complexity of the wavelet transform with directional filters. Further investigation with colour images shows that the transform is able to preserve texture and curves.
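
    The fixed 4-bits-per-pixel state map is the heart of the listless approach described above: instead of moving coordinates between lists, each coefficient's marker is updated in place at every significance pass. A minimal sketch, with marker names loosely modeled on NLS/LSK (the exact codes are assumptions):

```python
import numpy as np

# Illustrative marker codes (4 bits suffice for the full marker set in
# listless coders; only three states are shown here).
MIP, MNP, MSP = 0, 1, 2   # insignificant / newly significant / significant

def listless_pass(coeffs, state, plane):
    """One significance pass at threshold 2**plane, updating the per-pixel
    state map in place instead of moving entries between lists."""
    t = 1 << plane
    mags = np.abs(coeffs)
    state[state == MNP] = MSP                 # last pass's newcomers mature
    newly = (mags >= t) & (state == MIP)      # test of significance
    state[newly] = MNP                        # mark without any list traffic
    return newly.sum()

coeffs = np.random.randint(-300, 300, (8, 8))
state = np.full(coeffs.shape, MIP, dtype=np.uint8)  # fixed memory per pixel
for plane in range(8, -1, -1):
    print(plane, listless_pass(coeffs, state, plane))
```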

    Imaging as a tool to study leaf development in Arabidopsis thaliana

    Get PDF
    In contrast to humans and animals, the body plan of a plant is not completely defined during the embryonic stages. Organ formation continues throughout plant development, and this iterative and modular process is continuously controlled by environmental cues such as light, gravity, temperature, humidity and chemicals. In most plant species, the above-ground plant body is dominated by leaves, the organs specialized in photosynthesis. This process converts carbon dioxide into organic components using energy from sunlight, making leaves the energy production site and the growth engine of plants. In addition, in many cases the majority of a plant's biomass consists of leaves, also making them important organs for the production of food, feed and bio-energy. The final leaf size is determined by the total number of cells and the average cell size, which result from cell division and cell expansion, respectively. During leaf development of dicotyledonous species, a cell proliferation phase, characterized by actively dividing cells, is followed by a cell expansion phase, characterized by cell growth and differentiation. After expansion, cells mature and the final leaf size is reached. At the proliferation-to-expansion phase transition, cell division ceases along a longitudinal gradient from leaf tip to base. In this thesis, we set out to gain further insight into these developmental processes affecting leaf size, assisted by imaging technology and automated image analysis. For these studies we used the model species Arabidopsis thaliana, focusing primarily on the epidermis of the developing leaves, as divisions there are strictly anticlinal. Moreover, this layer is thought to be the main tissue layer controlling leaf growth. As a first step, we developed different image analysis tools to allow a better and more efficient analysis of the leaf developmental process. First, we developed an online framework, designated Leaf Image Analysis Interface (LIMANI), in which venation patterns are automatically segmented and measured on dark-field images. Image segmentation may be manually corrected through an interactive interface, allowing supervision and rectification steps in the automated image analysis pipeline and ensuring high-fidelity analysis. We subsequently used this framework to study vascular differentiation during leaf development and to analyze the venation pattern in transgenic lines with contrasting cellular and leaf size traits. A major conclusion from this work was that, as vascular differentiation occurs relatively late in development, the influence of a fully functional and differentiated venation pattern on final leaf size is rather limited. Furthermore, we describe a proof-of-concept to automate the kinematic analysis of leaf growth based on DIC images, using a sophisticated image processing chain and a data analysis pipeline. Next, we developed imaging scripts to extract complete seedlings grown on soil and on Petri dishes, and integrated these into three phenotyping platforms that monitor plant growth. Finally, we investigated the potential of emerging imaging technologies, particularly X-ray computed tomography, for future applications in plant growth analysis.
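
    The venation segmentation step that LIMANI automates can be illustrated with a short sketch. This is a minimal, assumed pipeline using scikit-image (thresholding, speckle removal, skeletonization); the function name, parameters and length proxy are illustrative, not the published LIMANI implementation:

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize, remove_small_objects

def measure_venation(dark_field):
    """Threshold a dark-field leaf image (veins appear bright), remove
    small speckles, skeletonize, and use the skeleton pixel count as a
    proxy for total vein length."""
    veins = dark_field > threshold_otsu(dark_field)
    veins = remove_small_objects(veins, min_size=50)   # drop speckle noise
    skeleton = skeletonize(veins)
    return skeleton, int(skeleton.sum())               # length proxy in px

# Example on a synthetic image with one bright 'vein' over noise
img = 0.05 * np.random.rand(200, 200)
img[100, 20:180] = 1.0
skel, length = measure_venation(img)
print(length)
```
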
    The newly developed kinematic analysis tools allowed us to show that the transcription factors SHORT-ROOT (SHR) and SCARECROW (SCR), besides their specific roles in cortex/endodermis differentiation and stem cell maintenance in the root, primarily function as general regulators of cell proliferation in leaves. The analysis of leaf growth revealed how these proteins affect the cellular growth dynamics and formed the basis for unravelling the molecular mechanism controlling this. It turned out that they promote leaf growth mainly by down-regulating cell cycle inhibitors that are known to restrain the activity of the transcription factor E2Fa, which stimulates S-phase progression. Although the dynamics of cell division and cell expansion can be analyzed rigorously by leaf growth kinematics, cell cycle duration, cell expansion, and their interaction at the level of individual cells remain poorly understood, not only because of technical obstacles to studying these phenomena, but also because the processes are intimately intertwined, as shown by the fact that reduced cell proliferation is often compensated by an increase in cell size and vice versa. A mathematical model fitted to detailed cellular measurements, retrieved by automated image analysis of microscopic drawings of the leaf epidermis, revealed that the average cell cycle duration remains constant throughout leaf development. Surprisingly, no evidence for a maximum cell size threshold for cell division of pavement cells was found in this analysis. We could estimate the division and expansion parameters of the pavement and guard cell populations within the growing leaf separately, and the model predicted that neighboring cells of different sizes within the epidermis expand at distinctly different relative rates, which we could finally verify by direct observations using live imaging. The mathematical model helped us to gain a better and more detailed insight into the processes that define leaf growth. However, the transition from cell proliferation to cell expansion remained a developmental time point that was not characterized in detail. Differences in the timing of this transition strongly affect the number of cells formed and therefore potentially also serve as a control point determining mature leaf size. Several genes have been identified that alter leaf size by affecting the transition from primary to secondary morphogenesis. We characterized the progression of the transition at the morphological and molecular level, using transcriptome analysis and imaging algorithms to visualize and quantify the size and shape of pavement cells along the proximal-distal axis of the leaf during the transition. Both analyses showed that the transition from cell proliferation to expansion is established and abolished abruptly. Furthermore, the establishment of the cell cycle arrest front occurs simultaneously with the onset of photomorphogenesis. We provide evidence that retrograde signaling from chloroplasts can affect the onset of the transition, revealing a previously unknown level of regulatory complexity during the transition from primary to secondary morphogenesis.
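
    The kinematic quantities underlying these analyses follow from textbook formulas rather than anything specific to this thesis; a small sketch, with hypothetical sample data, shows how a roughly constant cell cycle duration falls out of cell counts over time (division rate r = d(ln N)/dt, cycle duration T = ln 2 / r):

```python
import numpy as np

def kinematic_rates(times_h, cell_numbers, leaf_areas_mm2):
    """Standard kinematic quantities for leaf growth analysis:
    - average cell division rate  r = d(ln N)/dt
    - average cell cycle duration T = ln(2) / r
    - relative leaf expansion rate RER = d(ln A)/dt."""
    t = np.asarray(times_h, dtype=float)
    r = np.gradient(np.log(cell_numbers), t)      # divisions per cell per hour
    cycle_h = np.log(2) / r                       # hours per cell cycle
    rer = np.gradient(np.log(leaf_areas_mm2), t)  # relative expansion rate
    return r, cycle_h, rer

# Hypothetical counts for a proliferating leaf sampled daily
t = [0, 24, 48, 72]
N = [1000, 2200, 4800, 10500]
A = [0.10, 0.25, 0.60, 1.40]
r, cycle, rer = kinematic_rates(t, N, A)
print(np.round(cycle, 1))  # roughly constant cell cycle duration (~21 h)
```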