14 research outputs found

    Exclusive-or preprocessing and dictionary coding of continuous-tone images.

    Get PDF
    The field of lossless image compression studies ways to represent image data as compactly as possible while still allowing the image to be reproduced without any loss. One of the most effective strategies in lossless compression is to reduce entropy through decorrelation. This study focuses on using the exclusive-or logic operator in a decorrelation filter as the preprocessing phase of lossless compression of continuous-tone images. The exclusive-or operator is simply and reversibly applied to continuous-tone images to extract differences between neighboring pixels, and its implementation introduces no data expansion. Traditional as well as novel prediction methods are included for creating the inputs to the exclusive-or based decorrelation filter. The output of the filter is then encoded by a variation of the Lempel-Ziv-Welch dictionary coder. Dictionary coding is selected for the coding phase because it requires no storage of code tables or probabilities and because it is lower in complexity than popular alternatives such as Huffman or arithmetic coding. The first modification of the Lempel-Ziv-Welch coder is that image data can be read in a sequence that is linear, 2-dimensional, or an adaptive combination of both. The second modification is that the coder can maintain multiple, dynamically chosen dictionaries. Experiments indicate that the exclusive-or based decorrelation filter, combined with the modified Lempel-Ziv-Welch coder, provides compression comparable to algorithms that represent the current standard in lossless compression. The proposed algorithm falls below the Context-Based, Adaptive, Lossless Image Compression (CALIC) algorithm by 23%, below the Low Complexity Lossless Compression for Images (LOCO-I) algorithm by 19%, and below the Portable Network Graphics implementation of the Deflate algorithm by 7%, but exceeds the Zip implementation of the Deflate algorithm by 24%. The proposed algorithm uses the exclusive-or operator in the modeling phase and modified Lempel-Ziv-Welch dictionary coding in the coding phase to form a low-complexity, reversible, and dynamic method of lossless image compression.
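    The reversible neighbor-difference idea above is small enough to show directly. The following is a minimal sketch (ours, not the thesis code), assuming 8-bit pixels and the simplest predictor, the left neighbor; any causal predictor could be substituted.

```python
import numpy as np

def xor_filter(img: np.ndarray) -> np.ndarray:
    """Forward XOR decorrelation with a left-neighbor predictor.

    Each pixel is XORed with its left neighbor, so neighborhoods of
    similar pixels collapse to values with few set bits. The first
    column is kept verbatim, and because XOR maps bytes to bytes the
    filter introduces no data expansion.
    """
    out = img.copy()
    out[:, 1:] = np.bitwise_xor(img[:, 1:], img[:, :-1])
    return out

def xor_unfilter(res: np.ndarray) -> np.ndarray:
    """Inverse filter: XOR is its own inverse, but each left neighbor
    must be reconstructed first, so columns are restored left to right."""
    out = res.copy()
    for c in range(1, out.shape[1]):
        out[:, c] = np.bitwise_xor(out[:, c], out[:, c - 1])
    return out

# Round-trip check on a random 8-bit "image".
img = np.random.randint(0, 256, (4, 6), dtype=np.uint8)
assert np.array_equal(xor_unfilter(xor_filter(img)), img)
```

    The filtered output would then be fed to the modified dictionary coder described in the abstract.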

    Scalable video compression with optimized visual performance and random accessibility

    Full text link
    This thesis is concerned with maximizing the coding efficiency, random accessibility and visual performance of scalable compressed video. The unifying theme behind this work is the use of finely embedded localized coding structures, which govern the extent to which these goals may be jointly achieved. The first part focuses on scalable volumetric image compression. We investigate 3D transform and coding techniques which exploit inter-slice statistical redundancies without compromising slice accessibility. Our study shows that the motion-compensated temporal discrete wavelet transform (MC-TDWT) practically achieves an upper bound to the compression efficiency of slice transforms. From a video coding perspective, we find that most of the coding gain is attributed to offsetting the learning penalty in adaptive arithmetic coding through 3D code-block extension, rather than to inter-frame context modelling. The second aspect of this thesis examines random accessibility. Accessibility refers to the ease with which a region of interest is accessed (subband samples needed for reconstruction are retrieved) from a compressed video bitstream, subject to spatiotemporal code-block constraints. We investigate the fundamental implications of motion compensation for random access efficiency and the compression performance of scalable interactive video. We demonstrate that inclusion of motion compensation operators within the lifting steps of a temporal subband transform incurs a random access penalty which depends on the characteristics of the motion field. The final aspect of this thesis aims to minimize the perceptual impact of visible distortion in scalable reconstructed video. We present a visual optimization strategy based on distortion scaling which raises the distortion-length slope of perceptually significant samples. This alters the codestream embedding order during post-compression rate-distortion optimization, thus allowing visually sensitive sites to be encoded with higher fidelity at a given bit-rate. For visual sensitivity analysis, we propose a contrast perception model that incorporates an adaptive masking slope. This versatile feature provides a context which models perceptual significance. It enables scene structures that would otherwise suffer significant degradation to be preserved at lower bit-rates. The novelty in our approach derives from a set of "perceptual mappings" which account for quantization noise shaping effects induced by motion-compensated temporal synthesis. The proposed technique reduces wavelet compression artefacts and improves the perceptual quality of video.
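    The claim that motion compensation operators can sit inside the lifting steps of a temporal subband transform is easy to make concrete. Below is a simplified Haar-style temporal lifting pair, a sketch of the structure rather than the thesis' MC-TDWT: the `warp` callable stands in for the motion-compensation operator (identity by default), and lifting guarantees exact invertibility for any choice of `warp`.

```python
import numpy as np

def temporal_lift(f0, f1, warp=lambda x: x):
    """One Haar-style temporal lifting step on a frame pair.

    predict: subtract a (motion-compensated) reference from f1 to get
    the high-pass residual frame; update: feed half the residual back
    into f0 to get the low-pass frame.
    """
    h = f1 - warp(f0)        # high-pass (temporal residual)
    l = f0 + 0.5 * h         # low-pass (temporally smoothed)
    return l, h

def temporal_unlift(l, h, warp=lambda x: x):
    """Exact inverse: undo the steps in reverse order with the same warp."""
    f0 = l - 0.5 * h
    f1 = h + warp(f0)
    return f0, f1

# Perfect reconstruction holds for any warp operator.
f0, f1 = np.random.rand(8, 8), np.random.rand(8, 8)
g0, g1 = temporal_unlift(*temporal_lift(f0, f1))
assert np.allclose(f0, g0) and np.allclose(f1, g1)
```

    The random-access penalty discussed above arises because reconstructing a frame from `l` and `h` requires whichever samples `warp` touches, a cost that grows with the extent of the motion field.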

    LIDAR data classification and compression

    Get PDF
    Airborne Laser Detection and Ranging (LIDAR) data has a wide range of applications in agriculture, archaeology, biology, geology, meteorology, military and transportation. LIDAR acquisition consumes hundreds of gigabytes in a typical day, and the amount of data collected will continue to grow as sensors improve in resolution and functionality. LIDAR data classification and compression are therefore very important for managing, visualizing, analyzing and using this huge amount of data. Among existing LIDAR data classification schemes, supervised learning has been used and can obtain up to 96% accuracy. However, some of the features used are not readily available, and training data is also not always available in practice. In existing LIDAR data compression schemes, the compressed size can be 5%-23% of the original size, but may still be on the order of gigabytes, which is impractical for many applications. The objectives of this dissertation are (1) to develop LIDAR classification schemes that can classify airborne LIDAR data more accurately without some of the features or training data that existing work requires; and (2) to explore lossy compression schemes that can compress LIDAR data at a much higher compression rate than is currently available. We first investigate two independent ways to classify LIDAR data depending on the availability of training data: when training data is available, we use supervised machine learning techniques such as the support vector machine (SVM); when training data is not readily available, we develop an unsupervised classification method that can classify LIDAR data as well as supervised classification methods. Experimental results show that the accuracy of our classification is over 99%. We then present two new lossy LIDAR data compression methods and compare their performance. The first is wavelet based, while the second is geometry based. Our new geometry-based compression is a geometry- and statistics-driven LIDAR point-cloud compression method which combines application knowledge and scene content to enable fast transmission from the sensor platform while preserving the geometric properties of objects within a scene. The new algorithm is based on the idea of compression by classification. It exploits the simplicity of the unique height function as well as the local spatial coherence and linearity of aerial LIDAR data, and can automatically compress the data to the level of detail specified by the user. Either of the two developed classification methods can be used to automatically detect regions that are not locally linear, such as vegetation or trees. In those regions, local statistical descriptions, such as the mean and variance, are stored to efficiently represent the region and restore the geometry in the decompression phase. The new geometry-based compression schemes for building and ground data compress efficiently and significantly reduce file size, while retaining a good fit for scalable "zoom in" requirements. Experimental results show that, compared with existing lossy LIDAR compression work, our proposed approach achieves a two-orders-of-magnitude lower bit rate at the same quality, making it feasible for applications that were not practical before. The ability to store the information in a database and query it efficiently also becomes possible with the proposed highly efficient compression scheme.
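    The compression-by-classification idea can be sketched per grid cell: fit a plane to the heights, keep the plane if the cell is locally linear, otherwise keep only summary statistics. The code below is illustrative only; the threshold, cell handling and choice of statistics are our assumptions, not the dissertation's actual parameters.

```python
import numpy as np

def compress_cell(z: np.ndarray, planar_tol: float = 0.05):
    """Classify one cell of gridded LIDAR heights and reduce it.

    Fits z = a*x + b*y + c by least squares. If the residual spread is
    small the cell is treated as locally linear (ground, roof) and the
    three plane coefficients suffice; otherwise (vegetation, trees) only
    local statistics are stored and the geometry is resynthesized from
    them at decompression time.
    """
    ys, xs = np.mgrid[0:z.shape[0], 0:z.shape[1]]
    A = np.c_[xs.ravel(), ys.ravel(), np.ones(z.size)]
    coef, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    resid = z.ravel() - A @ coef
    if resid.std() < planar_tol:
        return ("planar", coef)                    # 3 numbers per cell
    return ("stats", (z.mean(), z.var()))          # 2 numbers per cell

# Usage: reduced = [compress_cell(tile) for tile in grid_tiles]
```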

    Distortion-constraint compression of three-dimensional CLSM images using image pyramid and vector quantization

    Get PDF
    Confocal microscopy imaging techniques, which allow optical sectioning, have been successfully exploited in biomedical studies. Biomedical scientists can benefit from more realistic visualization and much more accurate diagnosis by processing and analysing three-dimensional image data. The lack of efficient image compression standards makes such large volumetric image data slow to transfer over limited-bandwidth networks; it also imposes large storage requirements and high costs in archiving and maintenance. Conventional two-dimensional image coders do not take into account inter-frame correlations in three-dimensional image data. Standard multi-frame coders, such as video coders, although good at capturing motion information, are not designed for efficiently coding multiple frames that represent a stack of optical planes of a real object. Therefore a truly three-dimensional image compression approach should be investigated. Moreover, reconstructed image quality is a very important concern in compressing medical images, because it can be directly related to diagnostic accuracy. Most state-of-the-art methods are based on transform coding; for instance, JPEG is based on the discrete cosine transform (DCT) and JPEG2000 on the discrete wavelet transform (DWT). However, in DCT and DWT methods, control of the reconstructed image quality is inconvenient and involves considerable computational cost, since they are fundamentally rate-parameterized rather than distortion-parameterized methods. It is therefore very desirable to develop a transform-based, distortion-parameterized compression method that has high coding performance and can also conveniently and accurately control the final distortion according to a user-specified quality requirement. This thesis describes our work in developing a distortion-constraint three-dimensional image compression approach using vector quantization techniques combined with image pyramid structures. We expect our method to have: 1. High coding performance in compressing three-dimensional microscopic image data, compared to state-of-the-art three-dimensional image coders and other standardized two-dimensional image coders and video coders. 2. Distortion-control capability, a very desirable feature in medical image compression applications, superior to rate-parameterized methods in achieving a user-specified quality requirement. The result is a three-dimensional image compression method with outstanding compression performance, measured objectively, for volumetric microscopic images. The distortion-constraint feature, by which users can target an image quality rather than a compressed file size, offers more flexible control of the reconstructed image quality than its rate-constraint counterparts in medical image applications. Additionally, it effectively reduces the artifacts present in other approaches at low bit rates and also attenuates noise in the pre-compressed images. Furthermore, its advantages in progressive transmission and fast decoding make it suitable for bandwidth-limited telecommunications and web-based image browsing applications.
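    The difference between rate-parameterized and distortion-parameterized coding is worth making explicit. The sketch below (our illustration, with a uniform quantizer standing in for the thesis' pyramid-plus-vector-quantization stages) keeps adding refinement layers until a user-specified MSE is met, instead of stopping at a bit budget.

```python
import numpy as np

def encode_to_distortion(img, target_mse, quantize, step=64.0):
    """Distortion-parameterized progressive coding loop (illustrative).

    Each pass quantizes the current reconstruction error and adds it
    back as a refinement layer; the step size halves every pass, so the
    loop terminates once the residual MSE reaches the user's target.
    """
    recon = np.zeros_like(img, dtype=float)
    layers = []
    while np.mean((img - recon) ** 2) > target_mse:
        layer = quantize(img - recon, step)   # lossy stage (VQ in thesis)
        layers.append((step, layer))
        recon += layer
        step /= 2.0
    return layers, recon

# A flat uniform quantizer as a stand-in for vector quantization:
quant = lambda x, s: np.round(x / s) * s
img = np.random.rand(16, 16) * 255
layers, recon = encode_to_distortion(img, target_mse=1.0, quantize=quant)
assert np.mean((img - recon) ** 2) <= 1.0
```

    Because layers are generated coarse to fine, the same structure naturally supports the progressive transmission noted at the end of the abstract.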

    Lossless compression of satellite multispectral and hyperspectral images

    Get PDF
    In this thesis, new lossless compression techniques aimed at reducing the storage required by satellite images are presented. Two types of images are considered: multispectral and hyperspectral. For multispectral images, a nonlinear lossless compressor that exploits both intraband and interband correlations is developed. The compressor is based on a wavelet transform that maps integers into integers, applied to tiles of the image. Different models for the statistical dependencies of wavelet detail coefficients are proposed and analyzed. Wavelet coefficients belonging to the fine detail subbands are successfully modelled as an affine combination of neighboring coefficients and the coefficient at the same location in the previous band, as long as all these coefficients belong to the same class. This model is used to predict wavelet coefficients by means of already-coded coefficients. Lloyd-Max quantization is used to extract class information, which is used in the prediction and later as a conditioning context to encode prediction errors with an adaptive arithmetic coder. Since the band order affects the accuracy of predictions, a new mechanism is proposed for ordering the bands, based on the wavelet detail coefficients of the two finest levels. The results obtained outperform 2D lossless compressors such as PNG, JPEG-LS, SPIHT and JPEG2000, as well as 3D lossless compressors such as SLSQ-OPT, differential JPEG-LS, JPEG2000 for color images, and 3D-SPIHT. Our method has random access capability and can be applied to lossless compression of other kinds of volumetric data. For hyperspectral images, the state-of-the-art lossless compression algorithms LUT and LAIS-LUT exploit the high spectral correlations in these images and use lookup tables to perform predictions. However, there are cases where their predictions are not accurate. In this thesis a modification, also based on lookup tables, is proposed that gives these tables different degrees of confidence based on the local variations of the scaling factor. Our results are highly satisfactory and outperform both the LUT and LAIS-LUT methods. Two lossless compressors have thus been designed for two kinds of satellite images with different properties, namely different spectral resolution, spatial resolution, and bit depth, as well as different spectral and spatial correlations. In each case, the compressor exploits these properties to increase compression ratios.
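    The class-conditioned affine model lends itself to a compact sketch. Below, per-class affine predictors are least-squares fitted; the names and the context layout (causal neighbors plus the co-located coefficient of the previous band) are our assumptions about a generic implementation, not the thesis code.

```python
import numpy as np

def fit_class_predictors(ctx, target, classes, n_classes):
    """Fit one affine predictor per Lloyd-Max class.

    ctx:     (n, d) array; each row holds a coefficient's causal
             neighbors and the co-located previous-band coefficient.
    target:  (n,) detail coefficients to be predicted.
    classes: (n,) Lloyd-Max class labels in [0, n_classes).
    """
    A = np.c_[ctx, np.ones(len(ctx))]          # affine design matrix
    models = {}
    for k in range(n_classes):
        m = classes == k
        if m.sum() > A.shape[1]:               # enough samples to fit
            coef, *_ = np.linalg.lstsq(A[m], target[m], rcond=None)
            models[k] = coef
    return models

def predict(ctx, classes, models):
    """Predict each coefficient with its own class's affine model."""
    A = np.c_[ctx, np.ones(len(ctx))]
    pred = np.zeros(len(ctx))
    for k, coef in models.items():
        m = classes == k
        pred[m] = A[m] @ coef
    return pred
```

    The prediction errors `target - predict(...)` would then be driven into the adaptive arithmetic coder, conditioned on the same class labels.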

    Compresión sin pérdida de imágenes satelitales multiespectrales e hiperespectrales

    Get PDF
    This thesis presents new lossless compression techniques aimed at reducing the storage space required by satellite images. Two main types of images are addressed: multispectral and hyperspectral. For multispectral images, a nonlinear compressor was developed that exploits both the intraband and interband correlations present in the image. It is based on the integer-to-integer wavelet transform and is applied over non-overlapping blocks of the image. Different models for the statistical dependencies of the wavelet detail coefficients are proposed and analyzed. Coefficients in the fine detail subbands of the transform are modelled as an affine combination of neighboring coefficients and coefficients in adjacent bands, provided they belong to the same class. This model is used to generate predictions from coefficients that have already been coded. Class information is generated through Lloyd-Max quantization, which is also used in the prediction and as a conditioning context for coding the prediction errors with an adaptive arithmetic coder. Since band ordering also affects prediction accuracy, a new ordering mechanism is proposed based on the detail coefficients of the two finest levels of the wavelet transform. The results obtained outperform other 2D lossless compressors such as PNG, JPEG-LS, SPIHT and JPEG2000, as well as 3D compressors such as SLSQ-OPT, differential JPEG-LS, JPEG2000 for color images, and 3D-SPIHT. The proposed method provides random access to parts of the image and can be applied to the lossless compression of other volumetric data. For hyperspectral images, state-of-the-art lossless compression algorithms such as LUT and LAIS-LUT exploit the high spectral correlation of these images and use lookup tables to generate predictions. Even so, there are cases where the predictions are not good. In this thesis, a modification of these lookup algorithms is proposed that grants different levels of confidence to the lookup tables based on the local variations of the scaling factor. The results obtained are highly satisfactory and improve on those of LUT and LAIS-LUT. Two lossless compressors have thus been designed for two kinds of satellite images with different properties, namely different spectral, spatial and radiometric resolution, as well as different spectral and spatial correlations. In each case, the compressor exploits these properties to increase compression ratios.
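    The lookup-table prediction that LUT-style algorithms perform can be sketched in a few lines. This is a plain LUT predictor for one band (our simplification; the confidence weighting by the local scaling factor that the thesis adds on top is omitted).

```python
import numpy as np

def lut_predict_band(cur: np.ndarray, prev: np.ndarray) -> np.ndarray:
    """LUT-style interband prediction for one hyperspectral band.

    Scanning in raster order, a table maps each pixel value seen in the
    previous band to the current-band value that accompanied it most
    recently; that stored value is the prediction. When a previous-band
    value has not been seen yet, the co-located pixel is used instead.
    """
    lut = {}
    pred = np.empty(cur.size, dtype=cur.dtype)
    for i, (c, p) in enumerate(zip(cur.ravel(), prev.ravel())):
        pred[i] = lut.get(p, p)   # fallback: co-located previous-band pixel
        lut[p] = c                # record the pairing just observed
    return pred.reshape(cur.shape)
```

    The thesis' modification would, in effect, weight or gate such predictions according to how stable the local scaling factor between bands is.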

    Técnicas de compresión de imágenes hiperespectrales sobre hardware reconfigurable

    Get PDF
    Thesis of the Universidad Complutense de Madrid, Facultad de Informática, defended on 18-12-2020. Sensors are nowadays present in all aspects of human life. When possible, sensors are used remotely: this is less intrusive, avoids interference with the measuring process, and is more convenient for the scientist. One of the most recurrent concerns of the last decades has been the sustainability of the planet, and how the changes it is facing can be monitored. Remote sensing of the earth has seen an explosion in activity, with satellites now being launched on a weekly basis to perform remote analysis of the earth, and planes surveying vast areas for closer analysis...

    Development of Low Power Image Compression Techniques

    Get PDF
    The digital camera is the main medium for digital photography. The basic operations performed by a simple digital camera are to convert light energy to electrical energy, convert that signal to digital form, and apply a compression algorithm to reduce the memory required to store the image. The compression algorithm is invoked every time an image is captured and stored, which motivates the development of an efficient compression algorithm that gives the same result as existing algorithms with lower power consumption. As a result, a camera implementing the new algorithm can capture more images than before. 1) Discrete Cosine Transform (DCT) based JPEG is an accepted standard for lossy compression of still images. Quantisation is mainly responsible for the loss in image quality in the process of lossy compression. A new Energy Quantisation (EQ) method is proposed for speeding up the coding and decoding procedure while preserving image quality...
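    Since quantisation is identified as the lossy step, a minimal 8x8 DCT-plus-quantisation round trip makes the point concrete. This sketch uses a single flat step size instead of a JPEG quantisation table, and it shows the baseline being modified, not the proposed Energy Quantisation method itself.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II matrix, the transform behind JPEG's 8x8 blocks."""
    k, i = np.mgrid[0:n, 0:n]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

def jpeg_like_block(block: np.ndarray, q: float = 16.0):
    """Transform, quantise and reconstruct one 8x8 block.

    The rounding inside the quantiser is the only place information is
    lost; everything else (level shift, DCT, inverse DCT) is exact up
    to floating-point error.
    """
    D = dct_matrix()
    coeffs = D @ (block - 128.0) @ D.T       # level shift + 2-D DCT
    quant = np.round(coeffs / q)             # the lossy step
    recon = D.T @ (quant * q) @ D + 128.0    # dequantise + inverse DCT
    return quant, recon

block = np.random.randint(0, 256, (8, 8)).astype(float)
quant, recon = jpeg_like_block(block)        # recon approximates block
```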

    High ratio wavelet video compression through real-time rate-distortion estimation.

    Get PDF
    Thesis (M.Sc.Eng.), University of Natal, Durban, 2003. The success of the wavelet transform in the compression of still images has prompted an expanding effort to apply this transform to the compression of video. Most existing video compression methods incorporate techniques from still image compression, such techniques being abundant, well defined and successful. This dissertation commences with a thorough review and comparison of wavelet still image compression techniques, followed by an examination of wavelet video compression techniques. Currently the most effective video compression framework is DCT based, so a comparison between DCT and wavelet techniques is also given. Based on this review, the dissertation then presents a new, low-complexity wavelet video compression scheme. Noting from a complexity study that the generation of temporally decorrelated residual frames represents a significant computational burden, this scheme uses the simplest such technique: difference frames. In the case of local motion, these difference frames exhibit strong spatial clustering of significant coefficients. A simple spatial syntax is created by splitting the difference frame into tiles, and advantage of the spatial clustering is then taken by adaptive bit allocation between the tiles. This is the central idea of the method. In order to minimize the total distortion of the frame, the scheme uses the new ρ-domain rate-distortion estimation scheme with global numerical optimization to predict the optimal distribution of bits between tiles. Each tile is then independently wavelet transformed and compressed using the SPIHT technique. Throughout the design process computational efficiency was the design imperative, leading to a real-time, software-only video compression scheme. The scheme is finally compared to both the current video compression standards and the leading wavelet schemes from the literature in terms of computational complexity and visual quality. For local-motion scenes the proposed algorithm executes approximately an order of magnitude faster than these methods while producing output of similar quality. The algorithm's moderate memory and computational requirements make it suitable for implementation in mobile and embedded devices.
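    The central idea, adaptive bit allocation between difference-frame tiles, can be sketched as follows. For brevity this sketch allocates in proportion to tile energy; the thesis instead predicts each tile's rate-distortion behaviour with ρ-domain estimation (rate modelled as linear in the fraction of nonzero quantised coefficients) and optimises globally. Tile size and budget here are arbitrary.

```python
import numpy as np

def allocate_tile_bits(diff, tile=32, total_bits=200_000):
    """Split a bit budget across tiles of a difference frame.

    Tiles containing local motion have large residual energy and
    receive proportionally more bits; near-static tiles receive few.
    Each tile would then be wavelet transformed and SPIHT-coded to
    exactly its allocated budget.
    """
    h, w = diff.shape
    energy = {}
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            energy[(r, c)] = float(np.sum(diff[r:r + tile, c:c + tile] ** 2))
    total = sum(energy.values()) or 1.0
    return {rc: int(total_bits * e / total) for rc, e in energy.items()}

# Usage with two consecutive frames f0, f1 (uint8 arrays):
# budgets = allocate_tile_bits(f1.astype(float) - f0.astype(float))
```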