22 research outputs found

    Effective Video Encoding in Lossless and Near-lossless Modes


    Memory-efficient lossless video compression using temporal extended JPEG-LS and on-line compression

    Temporal predictors play a significant role in lossless video coders in terms of compression gain, but they come at the cost of a significant memory requirement, since at least one frame must be buffered for residue calculation. This work proposes an improvement to standard JPEG-LS-based lossless video coding that requires a very small amount of memory compared to the regular approach while keeping the computational complexity low. To obtain higher compression, a combination of spatial and temporal predictor models is used, with the appropriate mode selected adaptively on a per-pixel basis. Using only one reference frame, the context-based temporal coder performs its mode-selection and prediction-error calculations with already-reconstructed pixels, which eliminates the overhead of transmitting the coding mode to the decoder. The storage needed for the single reference frame is further reduced by applying on-line lossy compression to that frame; the relevant pixels are then obtained by partial on-the-fly decompression. The combination of temporally extended context-based prediction and on-line compression achieves a significant gain in compression ratio over standard frame-by-frame JPEG-LS video coding while keeping the memory requirement low, making it usable as a lightweight lossless video coder for embedded systems.
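
    To make the adaptive mode selection concrete, here is a minimal Python sketch (not the authors' implementation; all names are illustrative) of the core idea: a spatial JPEG-LS-style MED predictor and a temporal predictor are both scored on the already-reconstructed west neighbour, and the better one predicts the current pixel, so the decoder can repeat the same decision without a transmitted mode flag.

        import numpy as np

        def med_predict(a, b, c):
            # JPEG-LS median (MED) predictor from west (a), north (b), north-west (c).
            if c >= max(a, b):
                return min(a, b)
            if c <= min(a, b):
                return max(a, b)
            return a + b - c

        def predict_frame(cur, ref):
            # cur: current frame, ref: the single (reconstructed) reference frame.
            # Border pixels are skipped here; a full coder would code them spatially.
            h, w = cur.shape
            residual = np.zeros(cur.shape, dtype=np.int64)
            for y in range(1, h):
                for x in range(2, w):
                    a, b, c = int(cur[y, x-1]), int(cur[y-1, x]), int(cur[y-1, x-1])
                    spatial = med_predict(a, b, c)
                    temporal = int(ref[y, x])   # co-located pixel in the reference frame
                    # Score both predictors on the west neighbour, which encoder and
                    # decoder have already reconstructed, so no mode flag is needed.
                    err_s = abs(a - med_predict(int(cur[y, x-2]), c, int(cur[y-1, x-2])))
                    err_t = abs(a - int(ref[y, x-1]))
                    pred = spatial if err_s <= err_t else temporal
                    residual[y, x] = int(cur[y, x]) - pred   # sent to the entropy coder
            return residual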

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, termed the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit spectral correlations; the latter is learnt in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demands of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, the sparse coefficients are used to train support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods; the results show that this approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression; in particular, it reveals the effectiveness of sparse representation for exploiting spectral correlations in hyperspectral images. In addition, it shows that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
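
    The compression side rests on standard sparse coding: each pixel spectrum is approximated by a few atoms of a spectral dictionary, so only a few (index, coefficient) pairs need to be stored. The following sketch uses a generic Orthogonal Matching Pursuit over a random toy dictionary; it illustrates the sparse-coding step only, not the thesis's SRSD/MSSD dictionaries or their training procedure.

        import numpy as np

        def omp(D, y, k):
            # Orthogonal Matching Pursuit: approximate y with k atoms (columns) of D.
            residual = y.copy()
            idx = []
            for _ in range(k):
                j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
                idx.append(j)
                # Re-fit coefficients over all selected atoms (least squares).
                coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
                residual = y - D[:, idx] @ coef
            x = np.zeros(D.shape[1])
            x[idx] = coef
            return x

        rng = np.random.default_rng(0)
        D = rng.standard_normal((224, 256))        # toy dictionary: 224 bands, 256 atoms
        D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
        truth = np.zeros(256)
        truth[[3, 40, 199]] = [2.0, -1.5, 0.8]     # a 3-sparse spectrum
        spectrum = D @ truth + 0.01 * rng.standard_normal(224)
        code = omp(D, spectrum, k=3)               # 3 (index, value) pairs replace 224 samples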

    Lossless compression of satellite multispectral and hyperspectral images

    In this thesis, new lossless compression techniques aimed at reducing the storage required by satellite images are presented. Two types of images are considered: multispectral and hyperspectral. For multispectral images, a nonlinear lossless compressor that exploits both intraband and interband correlations is developed. The compressor is based on a wavelet transform that maps integers into integers, applied to non-overlapping tiles of the image. Different models for the statistical dependencies of wavelet detail coefficients are proposed and analyzed. Wavelet coefficients belonging to the fine detail subbands are successfully modelled as an affine combination of neighboring coefficients and the coefficient at the same location in the previous band, as long as all these coefficients belong to the same class. This model is used to predict wavelet coefficients by means of already coded coefficients. Lloyd-Max quantization is used to extract class information, which is used in the prediction and later as a conditioning context to encode prediction errors with an adaptive arithmetic coder. Since the band order affects the accuracy of predictions, a new mechanism for ordering the bands is proposed, based on the wavelet detail coefficients of the two finest levels. The results obtained outperform 2D lossless compressors such as PNG, JPEG-LS, SPIHT and JPEG2000, as well as 3D lossless compressors such as SLSQ-OPT, differential JPEG-LS, JPEG2000 for color images, and 3D-SPIHT. The method has random access capability and can be applied to lossless compression of other kinds of volumetric data. For hyperspectral images, the state-of-the-art lossless compression algorithms LUT and LAIS-LUT exploit the high spectral correlations in these images and use lookup tables to perform predictions. However, there are cases where their predictions are not accurate. In this thesis a modification, also based on lookup tables, is proposed that gives these tables different degrees of confidence based on the local variations of the scaling factor. The results are highly satisfactory and outperform both LUT and LAIS-LUT. In summary, two lossless compressors have been designed for two kinds of satellite images with different properties, namely different spectral resolution, spatial resolution, and bit depth, as well as different spectral and spatial correlations; in each case, the compressor exploits these properties to increase compression ratios. Affiliation: Acevedo, Daniel. Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Argentina.
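
    As a minimal sketch of the prediction model described above (illustrative only; the Lloyd-Max class conditioning and the arithmetic coder are omitted), the affine predictor for one class can be fitted by least squares over [west, north, north-west, previous-band] neighbour vectors:

        import numpy as np

        def fit_affine_predictor(neighbors, targets):
            # Least-squares affine fit: target ~ w . neighbors + b (b via appended 1s).
            # One such predictor would be fitted per quantizer class.
            X = np.hstack([neighbors, np.ones((neighbors.shape[0], 1))])
            w, *_ = np.linalg.lstsq(X, targets, rcond=None)
            return w

        def predict(w, neighbor_vec):
            return float(np.append(neighbor_vec, 1.0) @ w)

        # Toy data: rows hold [west, north, north-west, same-location-previous-band].
        rng = np.random.default_rng(1)
        nbrs = rng.standard_normal((500, 4))
        coeff = nbrs @ np.array([0.4, 0.4, -0.2, 0.7]) + 0.05 * rng.standard_normal(500)
        w = fit_affine_predictor(nbrs, coeff)
        residual = coeff[0] - predict(w, nbrs[0])   # residual goes to the arithmetic coder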


    An Investigation of Match for Lossless Video Compression

    A new lossless video compression technique, Match, is investigated. Match uses the similarity between the frames of a video, or the slices of medical images, to find a prediction for the current pixel. A portion of the previous frame, within some distance centered on the current location, is searched for a matching context, i.e. the pixels surrounding the current pixel; the best search distance for each dataset is found experimentally. The matching context is the neighborhood of w, nw, n, and ne, which stand for west, northwest, north, and northeast respectively: w is the pixel to the left of the current one, nw is the pixel to the left and up one row, n is the pixel directly above the current one, and ne is the pixel up one row and one column to the right. The pixel in the previous frame with the closest matching context becomes the prediction. From the prediction, the error is calculated, remapped, and encoded using adaptive arithmetic encoding. Match's resulting compression ratio is then compared to that of CALIC, where a larger compression ratio indicates a more efficient method. CALIC is a context-based adaptive lossless image compression technique regarded as one of the best lossless image compression techniques. Match was evaluated on twenty-two video datasets of varying resolutions as well as 65 CT scans and 17 MRI scans. Since common differences among videos are resolution and frame rate, Match was used to compress four videos of varying resolution to see how it is affected by resolution, and was also examined on one dataset with varying frame rate. There were times when Match outperformed CALIC; however, there were also times when CALIC outperformed Match, and others where the two methods gave nearly identical compression ratios. Therefore, as a preprocessing step, structural similarity and edge quality measurements were examined to predict which method, Match or CALIC, yields the best compression. Advisor: Khalid Sayood
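
    A minimal sketch of the search step as described above (remapping and arithmetic coding omitted; names are illustrative): for each candidate location in the previous frame within ±d of the current position, the L1 distance between the candidate's w/nw/n/ne context and the current context is computed, and the candidate pixel with the closest context becomes the prediction.

        import numpy as np

        def match_predict(cur, prev, y, x, d):
            # Requires y >= 1 and 1 <= x < width - 1 so the w/nw/n/ne context exists.
            ctx = np.array([cur[y, x-1], cur[y-1, x-1], cur[y-1, x], cur[y-1, x+1]],
                           dtype=np.int64)         # w, nw, n, ne around the current pixel
            h, width = prev.shape
            best_cost, best_pred = None, int(prev[y, x])
            for dy in range(-d, d + 1):            # search window of +/- d, chosen
                for dx in range(-d, d + 1):        # experimentally per dataset
                    cy, cx = y + dy, x + dx
                    if not (1 <= cy < h and 1 <= cx < width - 1):
                        continue
                    cand = np.array([prev[cy, cx-1], prev[cy-1, cx-1],
                                     prev[cy-1, cx], prev[cy-1, cx+1]], dtype=np.int64)
                    cost = int(np.abs(ctx - cand).sum())   # L1 context distance
                    if best_cost is None or cost < best_cost:
                        best_cost, best_pred = cost, int(prev[cy, cx])
            return best_pred

        rng = np.random.default_rng(2)
        prev = rng.integers(0, 256, (64, 64))
        cur = np.clip(prev + rng.integers(-2, 3, (64, 64)), 0, 255)
        pred = match_predict(cur, prev, y=10, x=10, d=4)   # then code cur[10,10] - pred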

    (Meta)data standards for digital archives

    The third work package of the BOM-Vl project (Bewaring en Ontsluiting van Multimediale data in Vlaanderen / Preservation and Disclosure of Multimedia Data in Flanders, 2008-2009) focuses on the technical problems of long-term preservation of digital heritage. The OAIS model, an ISO standard since 2002, serves here as the conceptual reference model offering guidelines for setting up a digital archive. On this basis, a first deliverable indicated which data representations and kinds of metadata must be taken into account to guarantee the preservation of digital material, and how possible data loss can be countered through thorough technical consideration. An extensive state-of-the-art overview covers the common storage formats for different kinds of audiovisual material. The most common standards in the library, broadcasting, cultural, and heritage sectors are then discussed, in particular metadata standards (descriptive, technical, administrative), thesauri or ontologies, and container formats. Finally, two representative practical examples are described: the development of the e-Depot at the Koninklijke Bibliotheek of the Netherlands, and the construction of a European multilingual search engine for cultural heritage research. This book is the written record of that deliverable and is intended as a reference work.

    Investigation of parallel programming on heterogeneous multiprocessors

    Multi-core processors have become commonplace in modern commodity computers. Computationally intensive applications, like video processing, that previously ran only on specialized hardware are now common on home computers. However, the demand for more computing power is ever-increasing, and with the introduction of high-definition video, more performance is desired. As an alternative to having multiple identical processor cores, heterogeneous multiprocessors have cores with different capabilities, which allows tasks to be processed on simple cores with specialized functionality. This simplicity promotes low power consumption, small die area, and low price. Dealing with heterogeneous cores, however, increases the complexity of writing programs for the architecture, both because the cores have different capabilities and because some heterogeneous architectures lack shared memory: without shared memory, accessing main memory requires explicit transfers to local memory. In this thesis, we consider two architectures, the STI Cell/B.E. and the Intel IXP2400, and evaluate parallelization strategies and performance for real-world problems. Our tests show promising throughput for some applications, and we propose a scheme for offloading computationally intensive parts of an existing application.
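
    The explicit-transfer constraint can be illustrated with a small Python analogy (real Cell/B.E. code would issue DMA transfers from C on the SPEs; the constant and function names here are made up): processing proceeds in chunks small enough to fit the local store, with an explicit copy in and an explicit copy out for each chunk.

        import numpy as np

        LOCAL_STORE_BYTES = 256 * 1024   # e.g. the 256 KiB local store of a Cell SPE

        def offload_scale(main_mem, chunk, gain):
            # Process an array in pieces that fit the small local store, with an
            # explicit 'get' (main -> local) and 'put' (local -> main) per piece,
            # as required when the core has no shared view of main memory.
            out = np.empty_like(main_mem)
            step = min(chunk, LOCAL_STORE_BYTES // main_mem.itemsize)
            for start in range(0, main_mem.size, step):
                local = main_mem[start:start + step].copy()   # explicit 'DMA get'
                local *= gain                                 # compute on local data
                out[start:start + step] = local               # explicit 'DMA put'
            return out

        data = np.arange(1_000_000, dtype=np.float32)
        result = offload_scale(data, chunk=65_536, gain=2.0)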

    Content-aware compression for big textual data analysis

    A substantial amount of information on the Internet is present in the form of text. The value of this semi-structured and unstructured data has been widely acknowledged, with consequent scientific and commercial exploitation. The ever-increasing data production, however, pushes data analytic platforms to their limits. This thesis proposes techniques for more efficient textual big data analysis suitable for the Hadoop analytic platform, exploring the direct processing of compressed textual data. The focus is on developing novel compression methods with a number of desirable properties to support text-based big data analysis in distributed environments. The novel contributions of this work include the following. Firstly, a Content-aware Partial Compression (CaPC) scheme is developed. CaPC makes a distinction between informational and functional content, and only the informational content is compressed; the compressed data thus remains transparent to existing software libraries, which often rely on functional content to work. Secondly, a context-free, bit-oriented compression scheme (Approximated Huffman Compression) based on the Huffman algorithm is developed, using a hybrid data structure that allows pattern searching in compressed data in linear time. Thirdly, several modern compression schemes are extended so that the compressed data can be safely split with respect to logical data records in distributed file systems. Furthermore, an innovative two-layer compression architecture is used, in which each compression layer is appropriate for the corresponding stage of data processing. Peripheral libraries are developed that seamlessly link the proposed compression schemes to existing analytic platforms and computational frameworks, and make the use of the compressed data transparent to developers. The compression schemes have been evaluated on a number of standard MapReduce analysis tasks using a collection of real-world datasets; in comparison with existing solutions, they show substantial improvement in performance and significant reduction in system resource requirements.
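
    To illustrate the CaPC idea only (this is not the thesis's actual encoding, and per-field zlib would expand such tiny fields in practice), the sketch below compresses the informational fields of a tab-separated record while leaving the functional delimiters untouched, so existing record- and field-splitting code still works on the compressed form:

        import base64
        import zlib

        def capc_pack(line):
            # Compress only the informational fields; tabs and the trailing newline
            # (functional content) are kept as-is so splitting code keeps working.
            fields = line.rstrip("\n").split("\t")
            packed = [base64.b85encode(zlib.compress(f.encode())).decode() for f in fields]
            return "\t".join(packed) + "\n"   # b85 output never contains tab or newline

        def capc_unpack(line):
            fields = line.rstrip("\n").split("\t")
            return "\t".join(zlib.decompress(base64.b85decode(f)).decode() for f in fields) + "\n"

        row = "2015-06-01\tGET /index.html\t200\n"
        assert capc_unpack(capc_pack(row)) == row
        assert capc_pack(row).count("\t") == row.count("\t")   # delimiters preserved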