146 research outputs found

    JPEG XR scalable coding for remote image browsing applications

    The growing popularity of the Internet has opened the road to multimedia and interactivity, emphasizing the importance of visual communication. In this context, digital images have taken a leading role and have an increasing number of applications; consider, for example, how widespread digital cameras and mobile devices such as mobile phones have become in recent years. This gives rise to the need for a flexible system that can handle images from different sources and adapt them to different viewing conditions. The importance of this issue lies in the application scenario: today there are data stores with very large numbers of images saved in JPEG format, while the systems for rendering digital images are varied and have very different characteristics from one another. The ISO/IEC committee has recently issued a new format, called JPEG XR, created explicitly for modern digital cameras. The new JPEG XR coding algorithm overcomes various limitations of the original JPEG algorithm and provides a viable alternative to the JPEG 2000 algorithm. This research has primarily focused on issues concerning the scalability of the new image format. Additional scalability levels are fundamental for image browsing applications, because they enable the system to keep functioning correctly and efficiently even when the number of resources and users increases sharply. Scalability is mostly required when dealing with large image databases on the Web, in order to reduce the amount of transferred data, especially for large images. Interactive browsing also requires the ability to access arbitrary parts of an image. The starting point is a client-server architecture, in which the server stores a database of JPEG XR images and analyzes requests from a client. Client and server communicate via HTTP using an exchange protocol.
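As a concrete illustration of such an exchange protocol, the client could request individual tiles at a chosen decomposition level over HTTP. The endpoint name and query parameters below are illustrative assumptions, not part of JPEG XR or of the protocol described here:

```python
import urllib.parse

def build_tile_request(base_url, image_id, tile_x, tile_y, level):
    """Build an HTTP GET URL asking the server for one tile of a
    JPEG XR image at a given frequency/resolution level.
    All parameter names here are hypothetical."""
    params = {"img": image_id, "tx": tile_x, "ty": tile_y, "level": level}
    return base_url + "?" + urllib.parse.urlencode(params)

# Example: ask for tile (2, 1) of "city.jxr" at the lowest level.
url = build_tile_request("http://example.org/jxr", "city.jxr", 2, 1, 0)
```

The server can then answer with only the sub-band coefficients needed for that tile at that level, which is the data-reduction idea the abstract describes.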
To minimize the transferred information, the JPEG XR coded file should use the frequency mode order and a partitioning of the image into optimized tiles. The main goal is to transmit only a subset of the available sub-band coefficients. This is necessary to allow interactive access to portions of the image, which are downloaded and displayed while minimizing the amount of transferred data and maintaining acceptable image quality. The proposed architecture has naturally prompted a study of transmission errors on unreliable channels, such as wireless ones, and the definition of possible optimizations and variants of the codec to overcome its limitations. Image data compressed with JPEG XR is severely distorted when transmitted over error-prone channels: due to the adaptive coding strategies used by the codec, even a single bit error causes a misalignment of the reading position in the bit-stream, leading to a completely different image at the decoder side. An extension to the JPEG XR algorithm is proposed, consisting of an error recovery process that enables the decoder to realign itself to the correct bit-stream position and to correctly decode most of the image. Several experiments have been performed using different encoder parameters and different error probabilities, with image distortion measured by the objective PSNR metric. The proposed algorithm adds very little computational overhead and seems very promising, as confirmed by the objective image quality results of the experimental tests.
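The distortion measure used in those experiments is straightforward to state; a minimal PSNR sketch over flat lists of pixel values (assuming 8-bit samples) might look like:

```python
import math

def psnr(original, decoded, max_val=255):
    """Peak signal-to-noise ratio in dB between two equal-length
    sequences of pixel values."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# A uniform error of +/-1 per 8-bit pixel gives MSE = 1, i.e. ~48.13 dB.
```

Real evaluations would of course run this per channel over full decoded frames; this sketch only fixes the metric's definition.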

    Contributions in image and video coding

    Advisor: Max Henrique Machado Costa. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: The image and video coding community has often been working on new advances that go beyond traditional image and video architectures. This work is a set of contributions to various topics that have received increasing attention from researchers in the community, namely scalable coding, low-complexity coding for portable devices, multiview video coding, and run-time adaptive coding. The first contribution studies the performance of three fast block-based 3-D transforms in a low-complexity video codec. The codec has been named the Fast Embedded Video Codec (FEVC). New implementation methods and scanning orders are proposed for the transforms.
The 3-D coefficients are encoded bit-plane by bit-plane by entropy coders, producing a fully embedded output bitstream. The entire implementation uses 16-bit integer arithmetic: only additions and bit shifts are necessary, thus lowering computational complexity. Even with these constraints, reasonable rate-versus-distortion performance can be achieved, and the encoding time is significantly smaller (around 160 times) when compared to the H.264/AVC standard. The second contribution is the optimization of a recently proposed approach to multiview video coding in videoconferencing and other similar unicast-like applications. The target scenario in this approach is providing realistic 3-D video with free viewpoint at good compression rates. To achieve this objective, weights are computed for each view and mapped into quantization parameters. In this work, the previously proposed ad hoc mapping between weights and quantization parameters is shown to be quasi-optimum for a Gaussian source, and an optimum mapping is derived for a typical video source. The third contribution explores several strategies for adaptive scanning of transform coefficients in the JPEG XR standard. The original global adaptive scanning order applied in JPEG XR is compared with the localized and hybrid scanning methods proposed in this work. These new orders require changes neither in the other coding and decoding stages nor in the bitstream definition. The fourth and last contribution proposes a hierarchical signal-dependent block-based transform. Hierarchical transforms usually exploit the residual cross-level information at the entropy coding step, but not at the transform step. The transform proposed in this work is an energy compaction technique that can also exploit these cross-resolution-level structural similarities.
The core idea of the technique is to include in the hierarchical transform a number of adaptive basis functions derived from the lower resolution of the signal. A full image codec was developed to measure the performance of the new transform, and the results obtained are discussed in this work. Doctorate in Telecommunications and Telematics (Doutor em Engenharia Elétrica).
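The additions-and-shifts-only constraint mentioned for the FEVC implementations can be illustrated with the classic reversible integer S-transform on a pair of samples. This is a generic sketch of the constraint, not the actual FEVC 3-D transforms:

```python
def s_transform(a, b):
    """Forward integer S-transform: difference and rounded mean,
    computed with only subtraction, addition and a bit shift."""
    d = a - b
    s = b + (d >> 1)  # floor of the pair's mean
    return s, d

def inverse_s_transform(s, d):
    """Exact inverse: the transform is losslessly reversible."""
    b = s - (d >> 1)
    return b + d, b

# Round-trips exactly for any integer pair, e.g. (7, 2) -> (4, 5) -> (7, 2).
```

Because every step is an integer add, subtract, or shift, such transforms fit comfortably in 16-bit integer hardware, which is the point the abstract makes about low-complexity coding.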

    JPEG XT: A Compression Standard for HDR and WCG Images [Standards in a Nutshell]

    High bit depth data acquisition and manipulation have been widely studied at the academic level over the last 15 years and are rapidly attracting interest at the industrial level. An example of the increasing interest in high-dynamic-range (HDR) imaging is the use of 32-bit floating-point data for video and image acquisition and manipulation, which allows a variety of visual effects that closely mimic the real-world visual experience of the end user [1] (see Figure 1). At the industrial level, we are witnessing increasing traction toward supporting HDR and wide color gamut (WCG). WCG leverages HDR for each color channel to display a wider range of colors. Consumer cameras are currently available with 14- or 16-bit analog-to-digital converters. Rendering devices are also appearing with the capability to display HDR images and video with a peak brightness of up to 4,000 nits and to support WCG (ITU-R Rec. BT.2020 [2]) rather than the historical ITU-R Rec. BT.709 [3]. This trend calls for a widely accepted standard for higher bit depth support that can be seamlessly integrated into existing products and applications. While standard formats such as Joint Photographic Experts Group (JPEG) 2000 [5] and JPEG XR [6] offer support for high bit depth image representations, their adoption requires a nonnegligible investment that may not always be affordable in existing imaging ecosystems, and it induces a difficult transition, as they are not backward-compatible with the popular JPEG image format.
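JPEG XT addresses this by encoding a tone-mapped 8-bit base layer that legacy JPEG decoders can read, plus extension data from which an HDR-aware decoder reconstructs the full dynamic range. A deliberately simplified sketch of that layering idea (not the actual JPEG XT profiles, which define the split far more precisely) is:

```python
def split_layers(hdr_samples, tone_map):
    """Split HDR samples into an 8-bit base layer plus a residual
    layer carrying what the base layer loses. Conceptual only."""
    base = [min(255, max(0, round(tone_map(x)))) for x in hdr_samples]
    residual = [x - b for x, b in zip(hdr_samples, base)]
    return base, residual

def merge_layers(base, residual):
    """An HDR-aware decoder combines both layers; a legacy decoder
    simply displays the base layer."""
    return [b + r for b, r in zip(base, residual)]

# Example with a simple global tone curve (hypothetical, for illustration):
tone_map = lambda x: 255 * x / (x + 100)
```

The backward compatibility the abstract emphasizes comes from the base layer alone being a valid legacy image, while the residual rides along in extension markers.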

    WG1N5315 - Response to Call for AIC evaluation methodologies and compression technologies for medical images: LAR Codec

    This document presents the LAR image codec as the IETR response to the Call for AIC evaluation methodologies and compression technologies for medical images. The philosophy behind our coder is not to outperform JPEG 2000 in compression; our goal is to propose an open-source, royalty-free alternative image coder with integrated services. While keeping compression performance in the same range as JPEG 2000 but with lower complexity, our coder also provides services such as scalability, cryptography, data hiding, lossy-to-lossless compression, region of interest, and free region representation and coding.

    Preserving data integrity of encoded medical images: the LAR compression framework

    Through the development of medical imaging systems and their integration into a complete information system, the need for advanced joint coding and network services becomes predominant. A PACS (Picture Archiving and Communication System) aims to acquire, store, compress, retrieve, present, and distribute medical images. These systems have to be accessible via the Internet or wireless channels, so protection processes against transmission errors have to be added to obtain a powerful joint source-channel coding tool. Moreover, these sensitive data require confidentiality and privacy for both archiving and transmission purposes, leading to the use of cryptography and data-embedding solutions. This chapter introduces data integrity protection and dedicated tools developed for content protection and secure bitstream transmission of encoded medical images. In particular, the LAR image coding method is described together with advanced security services.

    Images Identification Based on Equivalence Classes

    The image identification problem consists of identifying all the equivalent forms of a given reference image. An image is equivalent to the reference image if the former results from applying an image operator (or a composition of image operators) to the latter. Depending on the application, different sets of image operators are considered. The equivalence quantification is carried out at three levels. At the first level, we construct the equivalence set, composed of the reference image and the modified versions obtained by applying the image operators. At the second level, visual features are extracted from the images in the equivalence set and their distances to the reference image are computed. At the third level, an orthotope (generalized rectangle) is fitted to the set of distance vectors corresponding to the equivalent images. The equivalence of an unknown image with respect to a given reference is decided according to whether the corresponding distance vector falls inside or outside the orthotope. The results of our algorithm are assessed in terms of false positive and false negative errors, computed over different choices of reference images and operators.
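The third-level orthotope fit described above amounts to taking per-dimension bounds over the distance vectors of the known-equivalent images, then testing membership. A minimal sketch (the distance vectors are illustrative values, not data from the paper) is:

```python
def fit_orthotope(vectors):
    """Fit an axis-aligned orthotope (generalized rectangle) around
    the distance vectors of the equivalence set: per-dimension
    minimum and maximum bounds."""
    dims = range(len(vectors[0]))
    lo = [min(v[i] for v in vectors) for i in dims]
    hi = [max(v[i] for v in vectors) for i in dims]
    return lo, hi

def is_equivalent(vector, orthotope):
    """Declare an unknown image equivalent iff its distance vector
    falls inside the fitted orthotope."""
    lo, hi = orthotope
    return all(l <= x <= h for l, x, h in zip(lo, vector, hi))

# Each vector holds per-feature distances to the reference (illustrative).
box = fit_orthotope([[0.1, 0.2], [0.3, 0.1], [0.2, 0.4]])
```

The false positive/false negative trade-off mentioned in the abstract then follows from how tightly the box hugs the training vectors.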

    Cubic-panorama image dataset analysis for storage and transmission
