
    Codes robustes et codes joints source-canal pour transmission multimédia sur canaux mobiles [Robust codes and joint source-channel codes for multimedia transmission over mobile channels]

    Some new error-resilient source coding and joint source/channel coding techniques are proposed for the transmission of multimedia sources over error-prone channels. First, we introduce a class of entropy codes providing unequal error resilience, i.e. offering some protection to the most sensitive information. These codes are then extended to exploit temporal dependencies. A new state model, based on the aggregation of some states of the trellis, is then proposed and analyzed for soft source decoding of variable-length codes under a length constraint; it allows the trade-off between estimation accuracy and decoding complexity to be tuned. Next, the codeword-production and packetization stages of entropy coding are decoupled, and several bitstream-construction strategies are proposed to reduce the error-propagation phenomenon of variable-length codes. Finally, rewriting rules are proposed to extend the binary code-tree representation of entropy codes; this representation allows, in particular, the design of codes with improved soft-decoding performance.
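
    As a minimal illustration of the error-propagation problem that the packetization and bitstream-construction methods address, the Python sketch below uses a small hypothetical prefix-free code table (not one of the codes proposed in the thesis) to show how a single flipped channel bit desynchronises a variable-length decoder:

        # Hypothetical prefix-free variable-length code table, for illustration only.
        CODE = {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
        DECODE = {v: k for k, v in CODE.items()}

        def encode(symbols):
            return ''.join(CODE[s] for s in symbols)

        def decode(bits):
            out, buf = [], ''
            for b in bits:
                buf += b
                if buf in DECODE:          # codeword boundary reached
                    out.append(DECODE[buf])
                    buf = ''
            return out

        tx = encode('abcdabcd')
        rx = tx[:3] + ('1' if tx[3] == '0' else '0') + tx[4:]   # flip one channel bit
        print(decode(tx))   # ['a', 'b', 'c', 'd', 'a', 'b', 'c', 'd']
        print(decode(rx))   # desynchronised after the error: most later symbols are wrong

    The soft decoding and packetization techniques summarised above aim precisely at limiting how far such a single-bit error can spread.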

    Digital image compression


    Digital encoding of black and white facsimile signals

    As the costs of digital signal processing and memory hardware decrease each year relative to those of transmission, it is increasingly economical to apply sophisticated source encoding techniques to reduce the transmission time for facsimile documents. With this intent, information-lossy encoding schemes have been investigated in which the encoder is divided into two stages: first, preprocessing, which removes redundant information from the original documents, and second, actual encoding of the preprocessed documents. [Continues.]
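
    A minimal sketch of this two-stage structure, assuming a simple despeckling preprocessor and plain run-length coding of binary scan lines (practical facsimile coders go further and entropy-code the run lengths, e.g. with modified Huffman codes):

        def preprocess(line):
            # Stage 1 (assumed form): drop isolated specks, i.e. pixels that
            # differ from both of their identical neighbours.
            out = list(line)
            for i in range(1, len(line) - 1):
                if line[i - 1] == line[i + 1] != line[i]:
                    out[i] = line[i - 1]
            return out

        def run_lengths(line):
            # Stage 2: encode each scan line as alternating run lengths.
            runs, count = [], 1
            for prev, cur in zip(line, line[1:]):
                if cur == prev:
                    count += 1
                else:
                    runs.append(count)
                    count = 1
            runs.append(count)
            return runs

        line = [0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
        print(run_lengths(preprocess(line)))   # [6, 4, 4] once the speck at index 3 is removed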

    Some new developments in image compression

    This study is divided into two parts. The first part investigates near-lossless compression of digitized images using the entropy-coded DPCM method with a large number of quantization levels. Through this investigation, a new scheme is developed that combines lossy and lossless DPCM methods in a common framework. The scheme uses known results on the design of predictors and quantizers that incorporate properties of human visual perception. To enhance compression performance, an adaptively generated source model with multiple contexts is employed for coding the quantized prediction errors, rather than the memoryless model of the conventional DPCM method. Experiments show that the scheme provides compression ratios in the range of 4 to 11 with a peak SNR of about 50 dB for 8-bit medical images, and that the use of multiple contexts improves compression performance by about 25% to 35%.

    The second part of the study is devoted to lossy image compression using tree-structured vector quantization. A new design method for codebook generation is developed, together with four implementation algorithms. In the new method, an unbalanced tree-structured vector codebook is designed in a greedy fashion under a rate-distortion constraint, and can then be used to implement a variable-rate compression system. Experiments show that the new method achieves very good rate-distortion performance while being computationally efficient. Moreover, owing to the tree structure of the codebook, the method is amenable to progressive transmission applications.
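
    For reference, the sketch below shows the basic DPCM loop that the first part builds on, with an assumed first-order predictor and uniform quantizer step; the perceptually designed predictor/quantizer and the multi-context entropy model of the quantized errors described above are not reproduced here:

        def dpcm(samples, step=4):
            # Predict each sample from the previous reconstruction, quantize the
            # prediction error, and keep encoder and decoder in lockstep.
            indices, recon, prev = [], [], 0
            for x in samples:
                e = x - prev                 # prediction error (first-order predictor)
                q = round(e / step)          # uniform quantizer index
                prev = prev + q * step       # decoder-side reconstruction
                indices.append(q)            # these indices would be entropy coded
                recon.append(prev)
            return indices, recon

        indices, recon = dpcm([100, 104, 111, 109, 120])
        print(indices)   # [25, 1, 2, -1, 3] -- small indices cluster near zero,
        print(recon)     # which is what the entropy coder exploits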

    Proceedings of the Scientific Data Compression Workshop

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades that will be capable of producing an aggregate of 1500 megabits per second if operated simultaneously. Such high data rates stress all aspects of end-to-end data systems, and technologies and techniques are needed to relieve those stresses. Potential solutions to the massive data-rate problem include data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing, and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.

    Efficient human-machine control with asymmetric marginal reliability input devices

    Input devices such as motor-imagery brain-computer interfaces (BCIs) are often unreliable. In theory, channel coding can be used in the human-machine loop to robustly encapsulate intention through noisy input devices, but standard feedforward error-correction codes cannot be practically applied. We present a practical and general probabilistic user interface for binary input devices with very high noise levels. Our approach allows any level of robustness to be achieved, regardless of noise level, wherever reliable feedback such as a visual display is available. In particular, we show efficient zooming interfaces based on feedback channel codes for two-class binary problems with noise levels characteristic of modalities such as motor-imagery-based BCI, with accuracy below 75%. We outline general principles based on separating channel, line, and source coding in human-machine loop design. We develop a novel selection mechanism that can achieve arbitrarily reliable selection with a noisy two-state button. We show automatic online adaptation to changing channel statistics, and operation without precise calibration of error rates. A range of visualisations is used to construct user interfaces that implicitly code for these channels in a way that is transparent to users. We validate our approach with a set of Monte Carlo simulations and with empirical results from a human-in-the-loop experiment, showing that the approach operates effectively at 50-70% of the theoretical optimum across a range of channel conditions.
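
    As a rough illustration of the underlying idea (our own simplification, not the interface described in the paper), the sketch below achieves reliable selection through a noisy two-state button by repeatedly splitting the candidates at the posterior median and performing a Bayesian update under a binary symmetric channel with an assumed known error rate p:

        import random

        def select(n_items, target, p=0.3, threshold=0.99):
            post = [1.0 / n_items] * n_items
            while max(post) < threshold:
                half, acc = 0, 0.0               # split at the posterior median
                while acc < 0.5 and half < n_items - 1:
                    acc += post[half]
                    half += 1
                intended = 0 if target < half else 1             # ideal user response
                observed = intended if random.random() > p else 1 - intended
                for i in range(n_items):                         # BSC likelihood update
                    post[i] *= (1 - p) if (i < half) == (observed == 0) else p
                total = sum(post)
                post = [v / total for v in post]
            return max(range(n_items), key=post.__getitem__)

        print(select(16, target=11))   # almost always 11, despite 30% input errors

    Because the feedback display tells the user which half currently contains each item, any desired reliability can be reached by accumulating more noisy bits before committing to a selection.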

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant.

    Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to be an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic-signal-model-based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
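
    For concreteness, the sketch below shows the block-matching step at the core of such coders: each small "range" block is approximated by an affine map s*D + o of a downsampled "domain" block from the same image, with s and o fitted by least squares (the block isometries and quadtree partitioning used in practical coders are omitted):

        import numpy as np

        def encode_block(R, domains):
            # Find the domain block and affine (scale, offset) pair that best
            # approximates the range block R in the least-squares sense.
            best = None
            r = R.ravel()
            for k, D in enumerate(domains):
                d = D.ravel()
                dc = d - d.mean()
                s = (dc @ (r - r.mean())) / max(dc @ dc, 1e-12)   # least-squares scale
                o = r.mean() - s * d.mean()                       # least-squares offset
                err = float(np.sum((s * d + o - r) ** 2))
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            return best   # (error, domain index, scale, offset) is the stored code

        rng = np.random.default_rng(0)
        img = rng.random((16, 16))
        domains = [img[i:i + 8, j:j + 8][::2, ::2] for i in (0, 8) for j in (0, 8)]
        print(encode_block(img[4:8, 4:8], domains))

    The "self-affinity" question studied in the dissertation is precisely whether natural images contain domain blocks that make this self-referential codebook an efficient one.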