
    Low bit-rate image sequence coding


    LAR Video: Lossless coding with semantic scalability

    Named "LAR Video", the scheme proposed in this article describes a new lossless video compression algorithm with semantic scalability. As is specific to video coding, a motion estimation step is performed to produce a residual error image. The idea here is to apply to this error a pyramidal decomposition taken from a scalable still-image coding method, LAR-APP. The estimation errors, resulting from inter/intra level prediction, are transmitted progressively. The decoder can thus reconstruct the images of the sequence in a scalable way, level by level of spatial resolution. Finally, in view of the results obtained, we can state that, in addition to scalability, the proposed scheme offers attractive compression performance.
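
    As a rough illustration of the coarse-to-fine idea, the sketch below (Python with numpy; a generic integer pyramid, not the actual LAR-APP decomposition) splits a residual image into a coarse base plus per-level prediction errors and rebuilds it losslessly:

```python
import numpy as np

def build_pyramid(residual, levels=3):
    """Decompose a residual image into a coarse base plus per-level
    prediction errors so it can be sent coarse-to-fine and rebuilt
    exactly (generic stand-in for the LAR-APP decomposition)."""
    cur = residual.astype(np.int64)
    errors = []
    for _ in range(levels):
        coarse = cur[::2, ::2]                                  # 2x decimation
        pred = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)  # inter-level prediction
        errors.append(cur - pred[:cur.shape[0], :cur.shape[1]])
        cur = coarse
    return cur, errors[::-1]                                    # coarse base, then finer error levels

def reconstruct(base, errors):
    cur = base
    for err in errors:
        pred = np.repeat(np.repeat(cur, 2, axis=0), 2, axis=1)
        cur = pred[:err.shape[0], :err.shape[1]] + err          # exact addition, hence lossless
    return cur

residual = np.random.randint(-32, 32, (64, 64))                 # stand-in motion-compensated error image
base, errs = build_pyramid(residual)
assert np.array_equal(reconstruct(base, errs), residual)
```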

    Real-time scalable video coding for surveillance applications on embedded architectures


    Self-similarity and wavelet forms for the compression of still image and video data

    This thesis is concerned with the methods used to reduce the data volume required to represent still images and video sequences. The number of disparate still image and video coding methods increases almost daily. Recently, two new strategies have emerged and have stimulated widespread research. These are the fractal method and the wavelet transform. In this thesis, it will be argued that the two methods share a common principle: that of self-similarity. The two will be related concretely via an image coding algorithm which combines the two, normally disparate, strategies. The wavelet transform is an orientation-selective transform. It will be shown that the selectivity of the conventional transform is not sufficient to allow exploitation of self-similarity while keeping computational cost low. To address this, a new wavelet transform is presented which allows for greater orientation selectivity, while maintaining the orthogonality and data volume of the conventional wavelet transform. Many designs for vector quantizers have been published recently, and this work adds another to the gamut. The tree-structured vector quantizer presented here is on-line and self-structuring, requiring no distinct training phase. Combining these into a still image data compression system produces results which are among the best that have been published to date. An extension of the two-dimensional wavelet transform to encompass the time dimension is straightforward, and this work attempts to extrapolate some of its properties into three dimensions. The vector quantizer is then applied to three-dimensional image data to produce a video coding system which, while not optimal, produces very encouraging results.
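
    For reference, one level of the conventional separable 2-D Haar transform, whose limited orientation selectivity the thesis sets out to improve, can be sketched as follows (Python with numpy; an illustrative baseline, not the new transform proposed in the thesis):

```python
import numpy as np

def haar2d_level(img):
    """One level of the conventional separable 2-D Haar transform.
    The three detail subbands respond mainly to vertical, horizontal
    and diagonal structure, respectively."""
    x = img.astype(float)
    lo = (x[::2, :] + x[1::2, :]) / 2           # lowpass across row pairs
    hi = (x[::2, :] - x[1::2, :]) / 2           # highpass across row pairs
    approx   = (lo[:, ::2] + lo[:, 1::2]) / 2   # coarse approximation
    detail_v = (lo[:, ::2] - lo[:, 1::2]) / 2   # vertical structure
    detail_h = (hi[:, ::2] + hi[:, 1::2]) / 2   # horizontal structure
    detail_d = (hi[:, ::2] - hi[:, 1::2]) / 2   # diagonal structure
    return approx, detail_v, detail_h, detail_d

img = np.indices((8, 8)).sum(0) % 2 * 255.0     # small checkerboard test image
print([band.shape for band in haar2d_level(img)])
```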

    Region-based adaptive distributed video coding codec

    The recently developed Distributed Video Coding (DVC) is typically suitable for applications where conventional video coding is not feasible because of its inherently high-complexity encoding. Examples include video surveillance using wireless/wired video sensor networks and applications using mobile cameras. With DVC, the complexity is shifted from the encoder to the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where an estimate of the original frame, called "side information", is generated using motion compensation at the decoder. Compression is achieved by sending only the extra information needed to correct this estimate. An error-correcting code is used under the assumption that the estimate is a noisy version of the original frame, and the rate needed is a certain amount of parity bits. The side information is assumed to have become available at the decoder through a virtual channel. Due to the limitations of the compensation method, the predicted frame, or side information, is expected to have varying degrees of success. These limitations stem from location-specific, non-stationary estimation noise. To cope with this, conventional video coders, like MPEG, make use of frame partitioning to allocate an optimal coder to each partition and hence achieve better rate-distortion performance. The same, however, has not been used in DVC as it increases encoder complexity. This work proposes partitioning the considered frame into many coding units (regions), where each unit is encoded differently. This partitioning is, however, done at the decoder while generating the side information, and the region map is sent to the encoder at a very small rate penalty. The partitioning allows allocation of appropriate DVC coding parameters (virtual channel, rate, and quantizer) to each region. The resulting region map is compressed with a quadtree algorithm and communicated to the encoder via the feedback channel. Rate control in DVC is performed by channel coding techniques (turbo codes, LDPC, etc.). The performance of the channel code depends heavily on the accuracy of the virtual channel model that models the estimation error for each region. In this work, a turbo code is used and an adaptive WZ DVC is designed both in the transform domain and in the pixel domain. Transform-domain WZ video coding (TDWZ) performs distinctly better than normal pixel-domain Wyner-Ziv (PDWZ) coding, since it exploits spatial redundancy during encoding. The performance evaluations show that the proposed system is superior to existing distributed video coding solutions. Although the proposed system requires extra bits representing the "region map" to be transmitted, the rate gain is still noticeable and it outperforms state-of-the-art frame-based DVC by 0.6-1.9 dB. The feedback channel (FC) has the role of adapting the bit rate to the changing statistics between the side information and the frame to be encoded. In the unidirectional scenario, the encoder must perform the rate control. To correctly estimate the rate, the encoder must compute typical side information. However, the rate cannot be exactly calculated at the encoder; it can only be estimated. This work also proposes a feedback-free, region-based adaptive DVC solution in the pixel domain, based on a machine learning approach to estimate the side information. Although the performance evaluations show a rate penalty, it is acceptable considering the simplicity of the proposed algorithm.
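
    A minimal sketch of the quadtree idea behind the region map might look like the following (Python with numpy; the square power-of-two map, two-symbol alphabet, and symbol stream format are assumptions for illustration, not the codec described in the thesis):

```python
import numpy as np

def quadtree_encode(labels, x=0, y=0, size=None, out=None):
    """Minimal quadtree coder for a decoder-side region map: a uniform
    block becomes a single leaf symbol, a mixed block is split into four
    quadrants recursively. Illustrative only."""
    if out is None:
        size = labels.shape[0]              # assumes a square, power-of-two map
        out = []
    block = labels[y:y + size, x:x + size]
    if size == 1 or (block == block[0, 0]).all():
        out.append(('leaf', int(block[0, 0])))
    else:
        out.append(('split',))
        h = size // 2
        for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
            quadtree_encode(labels, x + dx, y + dy, h, out)
    return out

# toy 8x8 map with two regions (e.g., different virtual-channel models)
m = np.zeros((8, 8), dtype=int)
m[2:6, 3:8] = 1
symbols = quadtree_encode(m)
print(len(symbols), "symbols instead of", m.size, "raw labels")
```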

    Rate-distortion optimized geometrical image processing

    Since geometrical features, like edges, convey some of the most important perceptual information in an image, efficient exploitation of such geometrical information is a key ingredient of many image processing tasks, including compression, denoising and feature extraction. Therefore, the challenge for the image processing community is to design efficient geometrical schemes which can capture the intrinsic geometrical structure of natural images. This thesis focuses on developing computationally efficient tree-based algorithms for attaining the optimal rate-distortion (R-D) behavior for certain simple classes of geometrical images, such as piecewise polynomial images with polynomial boundaries. A good approximation of this class allows us to develop good approximation and compression schemes for images with strong geometrical features and, as experimental results show, also for real-life images. We first investigate both one-dimensional (1-D) and two-dimensional (2-D) piecewise polynomial signals. For the 1-D case, our scheme is based on binary tree segmentation of the signal. This scheme approximates the signal segments using polynomial models and utilizes an R-D optimal bit allocation strategy among the different signal segments. The scheme further encodes similar neighbors jointly and is called the prune-join algorithm. This allows it to achieve the correct exponentially decaying R-D behavior, D(R) ~ 2^(-cR), thus improving over classical wavelet schemes. We also show that the computational complexity of the scheme is O(N log N). We then extend this scheme to the 2-D case using a quadtree, which also achieves an exponentially decaying R-D behavior for the piecewise polynomial image model, with a low computational cost of O(N log N). Again, the key is an R-D optimized prune-and-join strategy. We further analyze the R-D performance of the proposed tree algorithms for piecewise smooth signals. We show that the proposed algorithms achieve the oracle-like polynomially decaying asymptotic R-D behavior for both the 1-D and 2-D scenarios. Theoretical as well as numerical results show that the proposed schemes outperform wavelet-based coders in the 2-D case. We then consider two interesting image processing problems, namely denoising and stereo image compression, in the framework of the tree-structured segmentation. For the denoising problem, we present a tree-based algorithm which performs denoising by compressing the noisy image and achieves improved visual quality by capturing geometrical features, like edges, more precisely than wavelet-based schemes. We then develop a novel rate-distortion optimized disparity-based coding scheme for stereo images. The main novelty of the proposed algorithm is that it jointly codes the disparity information and the residual image to achieve better R-D performance in comparison to standard block-based stereo image coders.
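
    The prune step of such a tree-based R-D optimization can be illustrated roughly as below (Python with numpy; the Lagrange multiplier, polynomial degree, and fixed per-segment rate model are assumptions, and the join step of the actual prune-join algorithm is omitted):

```python
import numpy as np

LAMBDA = 0.1          # Lagrange multiplier trading distortion against rate (assumed)
DEGREE = 1            # polynomial model per segment (assumed)
BITS_PER_SEG = 32.0   # crude rate model: fixed cost per retained segment (assumed)

def fit_cost(seg):
    """Distortion of the best least-squares polynomial fit on a segment."""
    t = np.arange(len(seg))
    coeffs = np.polyfit(t, seg, DEGREE)
    return float(np.sum((np.polyval(coeffs, t) - seg) ** 2))

def prune(signal, lo, hi, min_len=4):
    """R-D optimal binary-tree pruning: keep a split only if the children's
    Lagrangian cost D + lambda*R beats the parent's."""
    parent_cost = fit_cost(signal[lo:hi]) + LAMBDA * BITS_PER_SEG
    if hi - lo <= 2 * min_len:
        return [(lo, hi)], parent_cost
    mid = (lo + hi) // 2
    left_segs, left_cost = prune(signal, lo, mid, min_len)
    right_segs, right_cost = prune(signal, mid, hi, min_len)
    if left_cost + right_cost < parent_cost:
        return left_segs + right_segs, left_cost + right_cost
    return [(lo, hi)], parent_cost

# piecewise-linear toy signal with one breakpoint at sample 64
x = np.concatenate([np.linspace(0, 1, 64), np.linspace(1, -1, 64)])
segments, cost = prune(x, 0, len(x))
print(segments)        # expected: the two true pieces survive, deeper splits are pruned
```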

    Characterization and adaptive texture synthesis-based compression scheme

    This paper presents an adaptive texture synthesis-based compression scheme, where textured regions are detected and removed at the encoder side, allowing the decoder to fill them in using texture synthesis. The detection relies on locally adaptive resolution segmentation. As results obtained with synthesis algorithms show, these algorithms need to be parameterized according to the patterns to be synthesized. In this framework, the synthesizer gets its parameters from DCT feature-based texture descriptors. An adaptive pixel-based algorithm is used, relying on the comparison between the current pixel's neighborhood and those in an arbitrarily shaped sample. Different neighborhood sizes are considered to better match texture patterns. The framework has been validated within an H.264/AVC video codec. Experimental results show significant bit-rate savings at similar visual quality.
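
    The pixel-based matching idea can be sketched as follows (Python with numpy; a classical Efros-Leung-style loop over a small rectangular causal neighborhood, not the adaptive, descriptor-parameterized synthesizer of the paper):

```python
import numpy as np

def synthesize(sample, out_shape, half=2, seed=0):
    """Toy pixel-based texture synthesis: scan the output in raster order
    and copy, for each pixel, the sample pixel whose causal neighborhood
    (rows above plus the pixel row's left context) matches best."""
    rng = np.random.default_rng(seed)
    sh, sw = sample.shape
    oh, ow = out_shape
    # start from random sample pixels, then overwrite in raster order
    out = sample[rng.integers(0, sh, out_shape), rng.integers(0, sw, out_shape)]
    for y in range(half, oh):
        for x in range(half, ow - half):
            target = out[y - half:y, x - half:x + half + 1]        # causal window
            best, best_err = None, np.inf
            for sy in range(half, sh):
                for sx in range(half, sw - half):
                    cand = sample[sy - half:sy, sx - half:sx + half + 1]
                    err = np.sum((cand.astype(float) - target) ** 2)
                    if err < best_err:
                        best_err, best = err, sample[sy, sx]
            out[y, x] = best
    return out

sample = ((np.indices((12, 12)).sum(0) % 4) < 2).astype(np.uint8) * 255  # striped toy texture
print(synthesize(sample, (20, 20)).shape)
```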

    Receiver-Driven Video Adaptation

    In the span of a single generation, video technology has made an incredible impact on daily life. Modern use cases for video are wildly diverse, including teleconferencing, live streaming, virtual reality, home entertainment, social networking, surveillance, body cameras, cloud gaming, and autonomous driving. As these applications continue to grow more sophisticated and heterogeneous, a single representation of video data can no longer satisfy all receivers. Instead, the initial encoding must be adapted to each receiver's unique needs. Existing adaptation strategies are fundamentally flawed, however, because they discard the video's initial representation and force the content to be re-encoded from scratch. This process is computationally expensive, does not scale well with the number of videos produced, and throws away important information embedded in the initial encoding. Therefore, a compelling need exists for the development of new strategies that can adapt video content without fully re-encoding it. To better support the unique needs of smart receivers, diverse displays, and advanced applications, general-use video systems should produce and offer receivers a more flexible compressed representation that supports top-down adaptation strategies from an original, compressed-domain ground truth. This dissertation proposes an alternate model for video adaptation that addresses these challenges. The key idea is to treat the initial compressed representation of a video as the ground truth, and allow receivers to drive adaptation by dynamically selecting which subsets of the captured data to receive. In support of this model, three strategies for top-down, receiver-driven adaptation are proposed. First, a novel, content-agnostic entropy coding technique is implemented in which symbols are selectively dropped from an input abstract symbol stream based on their estimated probability distributions to hit a target bit rate. Receivers are able to guide the symbol dropping process by supplying the encoder with an appropriate rate controller algorithm that fits their application needs and available bandwidths. Next, a domain-specific adaptation strategy is implemented for H.265/HEVC coded video in which the prediction data from the original source is reused directly in the adapted stream, but the residual data is recomputed as directed by the receiver. By tracking the changes made to the residual, the encoder can compensate for decoder drift to achieve near-optimal rate-distortion performance. Finally, a fully receiver-driven strategy is proposed in which the syntax elements of a pre-coded video are cataloged and exposed directly to clients through an HTTP API. Instead of requesting the entire stream at once, clients identify the exact syntax elements they wish to receive using a carefully designed query language. Although an implementation of this concept is not provided, an initial analysis shows that such a system could save bandwidth and computation when used by certain targeted applications.
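
    The core of the first strategy, dropping symbols by estimated probability until a target rate is met, can be illustrated with a toy sketch (Python; the symbol probabilities, ideal code-length cost model, and greedy most-expensive-first drop policy are assumptions, not the dissertation's coder):

```python
import math

def drop_to_rate(symbols, probs, budget_bits):
    """Selective symbol dropping: each symbol's cost is its ideal code
    length -log2(p); the most expensive (least probable) symbols are
    dropped first until the stream fits the target bit budget."""
    costs = [-math.log2(probs[s]) for s in symbols]
    keep = [True] * len(symbols)
    total = sum(costs)
    # a receiver-supplied rate controller could impose any other drop order
    for i in sorted(range(len(symbols)), key=lambda i: -costs[i]):
        if total <= budget_bits:
            break
        keep[i] = False
        total -= costs[i]
    return [s for s, k in zip(symbols, keep) if k], total

probs = {'a': 0.7, 'b': 0.2, 'c': 0.1}          # assumed symbol model
kept, bits = drop_to_rate(list('aabacabca'), probs, budget_bits=8.0)
print(kept, round(bits, 2))
```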

    Image synthesis based on a model of human vision

    Modern computer graphics systems are able to construct renderings of such high quality that viewers are deceived into regarding the images as coming from a photographic source. Large amounts of computing resources are expended in this rendering process, using complex mathematical models of lighting and shading. However, psychophysical experiments have revealed that viewers only regard certain informative regions within a presented image. Furthermore, it has been shown that these visually important regions contain low-level visual feature differences that attract the attention of the viewer. This thesis will present a new approach to image synthesis that exploits these experimental findings by modulating the spatial quality of image regions by their visual importance. Efficiency gains are therefore reaped, without sacrificing much of the perceived quality of the image. Two tasks must be undertaken to achieve this goal: firstly, the design of an appropriate region-based model of visual importance, and secondly, the modification of progressive rendering techniques to effect an importance-based rendering approach. A rule-based fuzzy logic model is presented that computes, using spatial feature differences, the relative visual importance of regions in an image. This model improves upon previous work by incorporating threshold effects induced by global feature difference distributions and by using texture concentration measures. A modified approach to progressive ray-tracing is also presented. This new approach uses the visual importance model to guide the progressive refinement of an image. In addition, this concept of visual importance has been incorporated into supersampling, texture mapping and computer animation techniques. Experimental results are presented, illustrating the efficiency gains reaped from using this method of progressive rendering. This visual importance-based rendering approach is expected to have applications in the entertainment industry, where image fidelity may be sacrificed for efficiency purposes, as long as the overall visual impression of the scene is maintained. Different aspects of the approach should find many other applications in image compression, image retrieval, progressive data transmission and active robotic vision.
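
    The importance-guided allocation of rendering effort can be sketched as follows (Python with numpy; the importance scores are taken as given rather than computed by the thesis's fuzzy-logic model, and the largest-remainder rounding is an illustrative choice):

```python
import numpy as np

def allocate_samples(importance, total_samples):
    """Distribute a fixed rendering budget over image regions in proportion
    to their visual importance, so perceptually salient regions are refined
    first during progressive rendering."""
    weights = np.asarray(importance, dtype=float)
    weights = weights / weights.sum()
    raw = weights * total_samples
    alloc = np.floor(raw).astype(int)
    # hand the leftover samples to the regions with the largest remainders
    for i in np.argsort(raw - alloc)[::-1][: total_samples - alloc.sum()]:
        alloc[i] += 1
    return alloc

# four regions with assumed importance scores, 1000 rays/samples to spend
print(allocate_samples([0.9, 0.4, 0.1, 0.05], total_samples=1000))
```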