    Locally Adaptive Resolution (LAR) codec

    The JPEG committee has initiated a study of potential technologies dedicated to future-generation image compression systems. The idea is to design a new image compression standard, named JPEG AIC (Advanced Image Coding), together with advanced evaluation methodologies closely matching the characteristics of the human visual system. JPEG AIC thus aims at defining a complete coding system able to address advanced functionalities such as lossy-to-lossless compression, scalability (spatial, temporal, depth, quality, complexity, component, granularity...), robustness, embeddability, and content description for image handling at the object level. The chosen compression method has to fit perceptual metrics defined by the JPEG community within the JPEG AIC project. In this context, we propose the Locally Adaptive Resolution (LAR) codec as a contribution to the corresponding call for technologies, aiming to fulfil all of the previous functionalities. This method is a coding solution that simultaneously offers a relevant representation of the image. This property is exploited through various complementary coding schemes in order to design a highly scalable encoder. The LAR method was initially introduced for lossy image coding. This efficient image compression solution relies on a content-based system driven by a specific quadtree representation, based on the assumption that an image can be represented as layers of basic information and local texture. Multiresolution versions of this codec have shown their efficiency, from low bit rates up to lossless compression. An original hierarchical self-extracting region representation has also been elaborated: a segmentation process is realized at both the coder and the decoder, leading to a free segmentation map. This latter can be further exploited for color region encoding and image handling at the region level. Moreover, the inherent structure of the LAR codec can be used for advanced functionalities such as content protection. In particular, dedicated Unequal Error Protection systems have been produced and tested for transmission over the Internet or wireless channels. Hierarchical selective encryption techniques have been adapted to our coding scheme. A data hiding system based on the LAR multiresolution description allows efficient content protection. Thanks to the modularity of our coding scheme, complexity can be adjusted to address various embedded systems. For example, a basic version of the LAR coder has been implemented on an FPGA platform while respecting real-time constraints. The pyramidal LAR solution and the hierarchical segmentation process have also been prototyped on heterogeneous DSP architectures. This chapter first introduces the scope of JPEG AIC and details the associated requirements. We then develop the technical features of the LAR system and show the originality of the proposed scheme, both in terms of functionalities and services. In particular, we show that the LAR coder remains efficient for natural images, medical images, and art images.
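
    The content-based quadtree partition at the heart of LAR can be illustrated in a few lines of code. The Python sketch below splits a block whenever its local dynamic range exceeds a threshold, so that flat areas stay as large blocks ("basic information") while active areas are refined ("local texture"); `thresh` and `min_size` are illustrative parameters, not values from the actual codec.

```python
import numpy as np

def quadtree_partition(img, y, x, size, min_size, thresh, blocks):
    """Recursively split a block while its local dynamic range is high.

    In the spirit of LAR, flat areas are kept as large blocks and
    active areas are refined down to small blocks. `thresh` and
    `min_size` are illustrative, not the codec's actual parameters.
    """
    block = img[y:y + size, x:x + size]
    if size > min_size and block.max() - block.min() > thresh:
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                quadtree_partition(img, y + dy, x + dx, half,
                                   min_size, thresh, blocks)
    else:
        blocks.append((y, x, size))

# Usage on a toy image whose side is a power of two; the list of leaf
# blocks then acts as a local resolution map of the image content.
img = np.random.randint(0, 256, (64, 64)).astype(np.int32)
blocks = []
quadtree_partition(img, 0, 0, 64, min_size=2, thresh=40, blocks=blocks)
print(len(blocks), "leaf blocks")
```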

    Context-Based Scalable Coding and Representation of High Resolution Art Pictures for Remote Data Access

    EROS is the largest database of high-resolution art pictures in the world. The TSAR project is designed to open it up in a secure, efficient, and user-friendly way, involving cryptography and watermarking as well as compression and region-level representation abilities. This paper more particularly addresses the last two points. The LAR codec is first presented as a suitable solution for picture encoding, with compression ranging from highly lossy to lossless. We then detail the concept of self-extracting region representation, which consists of performing a segmentation process at both the coder and the decoder from a highly compressed image, and later locally enhancing the image in a region of interest. The overall scheme provides an efficient, consistent solution for advanced data browsing.
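
    The self-extracting idea rests on determinism: coder and decoder run the same segmentation on the same decoded base image, so the region map itself never needs to be transmitted. Below is a minimal sketch of that property, using quantization plus connected components as a stand-in for the real hierarchical segmentation, which this sketch does not reproduce.

```python
import numpy as np
from scipy import ndimage

def self_extracting_segmentation(decoded_lowres, n_levels=8):
    """Deterministic segmentation both ends can run on the *decoded*
    low-resolution image. Any two parties holding the same decoded
    pixels obtain byte-identical region maps, so a region of interest
    can be designated by its label alone."""
    q = (decoded_lowres.astype(np.float64) / 256 * n_levels).astype(np.int32)
    labels = np.zeros_like(q)
    next_label = 0
    for level in range(n_levels):
        comp, n = ndimage.label(q == level)   # connected components per level
        labels[comp > 0] = comp[comp > 0] + next_label
        next_label += n
    return labels
```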

    WG1N5315 - Response to Call for AIC evaluation methodologies and compression technologies for medical images: LAR Codec

    This document presents the LAR image codec as the IETR response to the Call for AIC evaluation methodologies and compression technologies for medical images. The philosophy behind our coder is not to outperform JPEG2000 in compression; our goal is to propose an open-source, royalty-free alternative image coder with integrated services. While keeping compression performance in the same range as JPEG2000 but with lower complexity, our coder also provides services such as scalability, cryptography, data hiding, lossy-to-lossless compression, region of interest, and free region representation and coding.

    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel to this, recent developments in autostereoscopic display technology are now threatening to revolutionize the way in which consumers are used to enjoying traditional 2-D display based electronic media such as television, computers and movies. However, due to the two-fold bandwidth/storage space requirement of stereoscopic imaging, an essential requirement of a stereo imaging system is efficient data compression. In this thesis, seven wavelet-based stereo image compression algorithms are proposed to take advantage of the higher data compaction capability and better flexibility of wavelets. In the proposed CODEC I, block-based disparity estimation/compensation (DE/DC) is performed in the pixel domain. However, this results in an inefficiency when the DWT is applied to the whole predictive error image that results from the DE process, because of the artificial block boundaries between error blocks in the predictive error image. To overcome this problem, in the remaining proposed CODECs, DE/DC is performed in the wavelet domain. Due to the multiresolution nature of the wavelet domain, two methods of disparity estimation and compensation have been proposed. The first method performs DE/DC in each subband of the lowest/coarsest resolution level and then propagates the disparity vectors obtained to the corresponding subbands of higher/finer resolution. Note that DE is not performed in every subband, due to the high overhead bits that would be required for the coding of disparity vectors of all subbands. This method is used in CODEC II. In the second method, DE/DC is performed in the wavelet-block domain. This enables disparity estimation to be performed in all subbands simultaneously without increasing the overhead bits required for coding the disparity vectors. This method is used by CODEC III, and performing disparity estimation/compensation in all subbands results in a significant improvement in its performance. To further improve the performance of CODEC III, a pioneering wavelet-block search technique is implemented in CODEC IV. The pioneering wavelet-block search technique enables the right/predicted image to be reconstructed at the decoder end without the need to transmit the disparity vectors. In the proposed CODEC V, pioneering block search is performed in all subbands of the DWT decomposition, which results in an improvement of its performance. Further, CODECs IV and V are able to perform at very low bit rates (< 0.15 bpp). In CODEC VI and CODEC VII, Overlapped Block Disparity Compensation (OBDC) is used with and without the need to code disparity vectors. Our experimental results showed that no significant coding gains could be obtained for these CODECs over CODECs IV and V. All proposed CODECs in this thesis are wavelet-based stereo image coding algorithms that maximise the flexibility and benefits offered by wavelet transform technology when applied to stereo imaging. In addition, the use of a baseline-JPEG coding architecture would enable the easy adaptation of the proposed algorithms within systems originally built for DCT-based coding. This is an important feature during an era in which DCT-based technology is only slowly being phased out to give way to DWT-based compression technology.
In addition, this thesis proposes a stereo image coding algorithm that uses JPEG-2000 technology as the basic compression engine. The proposed CODEC, named RASTER, is a rate-scalable stereo image CODEC with a unique ability to preserve image quality at binocular depth boundaries, an important requirement in the design of a stereo image CODEC. The experimental results show that the proposed CODEC is able to achieve PSNR gains of up to 3.7 dB compared to directly transmitting the right frame using JPEG-2000.
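
    As a point of reference for the pixel-domain DE/DC step of CODEC I, the sketch below performs block-based horizontal disparity search under a sum-of-absolute-differences (SAD) criterion; block size, search range, matching criterion and the sign convention are illustrative assumptions rather than the thesis's exact settings.

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=32):
    """Block-based disparity estimation for a rectified stereo pair:
    for each block of the right image, search horizontally in the left
    image for the best SAD match. Assumes scene content appears
    shifted leftward in the right view (one common convention)."""
    h, w = right.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = right[by:by + block, bx:bx + block].astype(np.int64)
            best_sad, best_d = None, 0
            for d in range(0, min(max_disp, w - block - bx) + 1):
                cand = left[by:by + block, bx + d:bx + d + block].astype(np.int64)
                sad = int(np.abs(target - cand).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_d = sad, d
            disp[by // block, bx // block] = best_d
    return disp
```

    The predictive error image (each right block minus its matched left block) is what the early CODECs transform; moving this search into the wavelet or wavelet-block domain is what removes the artificial block boundaries discussed above.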

    Wavelet Based Image Coding Schemes : A Recent Survey

    A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity, due to their overlapping nature, which reduces the blocking artifacts that are a common phenomenon in JPEG compression, and their multiresolution character, which leads to superior energy compaction and high-quality reconstructed images. This paper provides a detailed survey of some of the popular wavelet coding techniques, such as Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, such as the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelets (CREW), Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
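
    The zerotree hypothesis underlying EZW and SPIHT is that an insignificant coefficient tends to have insignificant descendants across scales. The sketch below shows one EZW-style dominant pass; the scan order and the parent-child rule for the coarsest band are simplified relative to Shapiro's published algorithm, which also skips descendants of zerotree roots during the scan.

```python
import numpy as np

def descendants_insignificant(coeffs, i, j, T):
    """True if coefficient (i, j) and all its quadtree descendants in a
    square, pyramid-layout coefficient array are below threshold T."""
    n = coeffs.shape[0]
    stack = [(i, j)]
    while stack:
        y, x = stack.pop()
        if abs(coeffs[y, x]) >= T:
            return False
        # Simplified child rule: four children at doubled coordinates.
        if (y, x) != (0, 0) and y < n // 2 and x < n // 2:
            stack.extend([(2 * y, 2 * x), (2 * y, 2 * x + 1),
                          (2 * y + 1, 2 * x), (2 * y + 1, 2 * x + 1)])
    return True

def dominant_pass(coeffs, T):
    """Label each coefficient POS/NEG if significant at threshold T,
    ZTR (zerotree root) if it heads an all-insignificant subtree,
    else IZ (isolated zero)."""
    labels = {}
    n = coeffs.shape[0]
    for y in range(n):
        for x in range(n):
            c = coeffs[y, x]
            if abs(c) >= T:
                labels[(y, x)] = 'POS' if c > 0 else 'NEG'
            elif descendants_insignificant(coeffs, y, x, T):
                labels[(y, x)] = 'ZTR'
            else:
                labels[(y, x)] = 'IZ'
    return labels
```

    Halving the threshold between passes and refining already-significant coefficients in subordinate passes is what yields the embedded, rate-scalable bitstreams these coders are known for.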

    Secured and progressive transmission of compressed images on the Internet: application to telemedicine

    Within the framework of telemedicine, the sheer volume of images calls first for efficient lossless compression methods for storage purposes. Furthermore, a multiresolution scheme including Region of Interest (ROI) processing is an important feature for remote access to medical images. Securing sensitive data (e.g. metadata from DICOM images) constitutes one more expected functionality: indeed, the loss of IP packets could have serious effects on a given diagnosis. For this purpose, we present in this paper an original scalable image compression technique (the LAR method) used in association with a channel coding method based on the Mojette transform, so that a hierarchical priority encoding system is elaborated. This system provides a solution for the secure transmission of medical images through low-bandwidth networks such as the Internet.
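
    The Mojette transform used here for channel coding is a discrete Radon transform along rational directions (p, q). Below is a toy sketch of a single projection under one common bin-indexing convention; conventions vary in the literature, and the actual priority encoding system is considerably more elaborate.

```python
import numpy as np

def mojette_projection(img, p, q):
    """Mojette projection of a 2-D array along direction (p, q) with
    gcd(p, q) = 1, using the convention b = -q*k + p*l for the pixel
    at row k, column l."""
    K, L = img.shape
    offsets = np.array([[-q * k + p * l for l in range(L)] for k in range(K)])
    lo = offsets.min()
    bins = np.zeros(int(offsets.max() - lo) + 1)
    for k in range(K):
        for l in range(L):
            bins[offsets[k, l] - lo] += img[k, l]
    return bins

# A redundant set of projections, e.g. (1, 0), (0, 1) and (1, 1), can
# be spread over network packets so that losing one packet (one
# projection) still leaves the protected layer reconstructible.
```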

    Motion compensation and very low bit rate video coding

    Recently, activities of the International Telecommunication Union (ITU) and the International Organization for Standardization (ISO) have been leading to new standards for very low bit-rate video coding, such as H.263 and MPEG-4, after the successful application of the international standards H.261 and MPEG-1/2 to video coding above 64 kbps. However, at very low bit rates the classic block-matching-based DCT video coding scheme suffers seriously from blocking artifacts, which degrade the quality of reconstructed video frames considerably. To solve this problem, a new technique in which motion compensation is based on a dense motion field is presented in this dissertation, and four efficient new video coding algorithms based on this technique are proposed for very low bit rates. (1) After studying model-based video coding algorithms, we propose an optical-flow-based video coding algorithm with thresholding techniques. A statistical model is established for the distribution of intensity differences between two successive frames, and four thresholds are used to control the bit rate and the quality of reconstructed frames. It outperforms typical model-based techniques in terms of complexity and the quality of reconstructed frames. (2) We propose an efficient algorithm using DCT-coded optical flow. It is found that dense motion fields can be modeled by a first-order auto-regressive model and efficiently compressed with the DCT, hence achieving a very low bit rate and higher visual quality than H.263/TMN5. (3) We propose a region-based discrete wavelet transform video coding algorithm. This algorithm uses a dense motion field, and regions are segmented according to the significance of their content. The DWT is applied to residual images region by region, and bits are adaptively allocated to regions. It improves the visual quality and PSNR of significant regions while maintaining a low bit rate. (4) We propose a segmentation-based video coding algorithm for stereo sequences. A correlation-feedback algorithm with a Kalman filter is utilized to improve the accuracy of optical flow fields. Three criteria, associated with 3-D information, 2-D connectivity and motion vector fields, respectively, are defined for object segmentation. A chain code is utilized to code the shapes of the segmented objects. It can achieve very high compression ratios, up to several thousand.
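
    The thresholding idea in algorithm (1) can be sketched as follows: only motion-compensated residual pixels above a threshold are coded, and the threshold trades bit rate against quality. This sketch uses a single threshold where the dissertation's statistical model drives four.

```python
import numpy as np

def threshold_residual(prev_compensated, current, thresh):
    """Keep only residual pixels whose magnitude exceeds `thresh`.
    `prev_compensated` is the previous frame warped by the dense
    motion field (computed elsewhere); a single threshold stands in
    for the four-threshold rate/quality control of the dissertation."""
    residual = current.astype(np.int32) - prev_compensated.astype(np.int32)
    mask = np.abs(residual) > thresh
    coded = np.where(mask, residual, 0)
    rate_proxy = mask.mean()  # fraction of pixels actually coded
    return coded, rate_proxy

# Raising `thresh` lowers the fraction of coded pixels (a proxy for
# bit rate) at the cost of reconstruction quality; a statistical model
# of the residual distribution can pick thresholds to hit a target rate.
```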

    Image data compression based on a multiresolution signal model

    Image data compression is an important topic within the general field of image processing. It has practical applications varying from medical imagery to video telephones, and it has significant implications for image modelling theory. In this thesis a new class of linear signal models, linear interpolative multiresolution models, is presented and applied to the data compression of a range of natural images. The key property of these models is that whilst they are non-causal in the two spatial dimensions, they are causal in a third dimension, the scale dimension. This leads to computationally efficient predictors which form the basis of the data compression algorithms. Models of varying complexity are presented, ranging from a simple stationary form to one which models visually important features such as lines and edges in terms of scale and orientation. In addition to theoretical results such as related rate-distortion functions, the results of applying the compression algorithms to a variety of images are presented. These results compare favourably, particularly at high compression ratios, with many of the techniques described in the literature, both in terms of mean squared quantisation noise and, more meaningfully, in terms of perceived visual quality. In particular, the use of local orientation over various scales within the consistent spatial interpolative framework of the model significantly reduces perceptually important distortions such as the blocking artefacts often seen with high-compression coders. A new algorithm for fast computation of the orientation information required by the adaptive coder is presented, resulting in an overall computational complexity for the coder that is broadly comparable to that of the simpler non-adaptive coder. The thesis concludes with a discussion of some of the important issues raised by the work.
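
    The scale-causality described above can be sketched in a few lines: each pyramid level is predicted from the coarser level below it, so prediction never depends on same-scale neighbours. The 2x2-mean pyramid and nearest-neighbour interpolator here are illustrative stand-ins for the thesis's models, and the input sides are assumed divisible by 2^(levels-1).

```python
import numpy as np

def build_pyramid(img, levels):
    """Coarse-to-fine representation: each level is a 2x2 mean of the
    previous one (a stand-in for the thesis's multiresolution model)."""
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        a = pyr[-1]
        pyr.append((a[::2, ::2] + a[1::2, ::2]
                    + a[::2, 1::2] + a[1::2, 1::2]) / 4)
    return pyr[::-1]  # coarsest first: prediction is causal in scale

def interscale_residuals(pyr):
    """Predict each level from the next-coarser one by nearest-
    neighbour upsampling and keep only the residuals; bilinear or
    oriented interpolators would sharpen the prediction."""
    residuals = [pyr[0]]  # the coarsest level is coded as-is
    for coarse, fine in zip(pyr, pyr[1:]):
        prediction = coarse.repeat(2, axis=0).repeat(2, axis=1)
        residuals.append(fine - prediction)
    return residuals
```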