10 research outputs found

    Image coding optimisation using information criteria

    Get PDF
    To improve the performance of a JPEG-type coding chain, we propose inserting an adaptive thresholding operation before the quantisation step, in order to refine the choice of quantiser and thereby reduce the dequantisation error. We first propose a threshold-selection method based on information criteria, then a second method based on statistical modelling. To address the bit-allocation problem, we use a near-optimal quantisation technique based on the Lagrangian approach.
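
    The Lagrangian quantiser selection mentioned above can be illustrated with a short sketch: for each candidate step size, the cost J = D + λR is evaluated on the thresholded block and the cheapest step is kept. The entropy-based rate model, the step sizes, λ and the threshold are illustrative assumptions, not the paper's information criteria.

```python
# Hedged sketch of Lagrangian quantiser selection for one DCT block.
# The empirical-entropy rate model is a crude stand-in for the paper's
# information criteria; steps, lam and threshold are assumed values.
import numpy as np

def entropy_bits(symbols):
    """Empirical entropy (bits/symbol) as a simple rate proxy."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def pick_quantiser(coeffs, steps=(4, 8, 16, 32), lam=0.1, threshold=2.0):
    """Return the step size minimising J = D + lam * R after thresholding."""
    coeffs = np.where(np.abs(coeffs) < threshold, 0.0, coeffs)  # adaptive thresholding
    best = None
    for q in steps:
        sym = np.round(coeffs / q)
        rec = sym * q                                  # dequantised block
        dist = float(np.mean((coeffs - rec) ** 2))     # dequantisation error (MSE)
        rate = entropy_bits(sym.ravel()) * sym.size    # estimated bits for the block
        cost = dist + lam * rate
        if best is None or cost < best[0]:
            best = (cost, q)
    return best[1]

block = np.random.randn(8, 8) * 20                     # toy DCT block
print("chosen step:", pick_quantiser(block))
```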

    Artefact removal for optimised JPEG coding

    Get PDF
    In this work, we address the blocking artefacts that appear in the reconstructed image under JPEG compression, in particular at very low bit rates. The originality lies in applying a non-linear smoothing operation to the reconstructed image, driven by the image statistics, and in using a JPEG coder that optimises the rate/distortion trade-off under rate control. Simulation results show a quality improvement of about 0.2-0.8 dB in PSNR over the same compression algorithm without smoothing, together with a marked visual reduction in blocking artefacts.
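
    A rough sketch of this kind of statistics-driven smoothing: pixel pairs astride each vertical 8x8 block boundary are blended only where the local variance is low, so genuine edges are preserved; the same pass applied to the transposed image handles horizontal boundaries. The variance test and its threshold are illustrative assumptions, not the paper's exact criterion.

```python
# Hedged sketch of non-linear deblocking driven by local statistics.
# var_thresh and the half-window of 4 pixels are assumed values.
import numpy as np

def deblock(img, block=8, var_thresh=30.0):
    out = img.astype(np.float64).copy()
    w = out.shape[1]
    for x in range(block, w, block):             # each vertical block boundary
        left, right = out[:, x - 1], out[:, x]
        local = out[:, x - 4:x + 4]              # window straddling the boundary
        flat = local.var(axis=1) < var_thresh    # smooth only low-activity rows
        avg = 0.5 * (left + right)
        out[flat, x - 1] = 0.5 * (left[flat] + avg[flat])
        out[flat, x] = 0.5 * (right[flat] + avg[flat])
    return out                                   # run on out.T for horizontal edges
```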

    Global three‐dimensional‐mesh indexing based on structural analysis and geometrical signatures

    No full text
    This study presents a new local feature matching approach that relies upon a Reeb graph (RG)‐based representation together with a simple and accurate similarity estimation. The central contribution of this work is to reinforce the topological consistency conditions of the graph‐based description. Formally, the RGs are enriched with geometry signatures based on parameterisation approaches. After RG construction, the shape is segmented into Reeb charts of controlled topology, each mapped to its canonical planar domain. Then, two stretching signatures, corresponding to the area and angle distortion, are determined and taken as the three‐dimensional shape descriptor. The similarity estimation is performed in two steps. The first consists in forming pairs of similar Reeb charts according to the minimal distance between their corresponding signatures. The second measures the global similarity, which quantifies the degree of similarity between all the matched Reeb charts. Retrieval experiments conducted on four publicly available databases show that the proposed matching scheme yields satisfactory results. Notably, despite its speed, the method provides an overall retrieval efficiency gain over very recent state‐of‐the‐art methods.
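
    The two-step similarity estimation can be sketched as follows: Reeb charts are first paired greedily by the minimal distance between their stretching signatures, and the matched-pair distances are then aggregated into a single global score. Treating signatures as plain feature vectors and using greedy pairing are simplifying assumptions.

```python
# Hedged sketch of the two-step matching: pair charts by minimal signature
# distance, then aggregate a global similarity over all matched pairs.
import numpy as np

def match_charts(sig_a, sig_b):
    """sig_a, sig_b: lists of 1-D signature vectors, one per Reeb chart."""
    dist = np.array([[np.linalg.norm(a - b) for b in sig_b] for a in sig_a])
    pairs = []
    while np.isfinite(dist).any():
        i, j = np.unravel_index(np.argmin(dist), dist.shape)  # closest pair
        pairs.append((i, j, dist[i, j]))
        dist[i, :] = np.inf                                   # each chart used once
        dist[:, j] = np.inf
    return pairs

def global_similarity(pairs):
    """Step two: turn matched-chart distances into one similarity score."""
    d = np.array([p[2] for p in pairs])
    return float(np.exp(-d.mean())) if d.size else 0.0
```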

    Phase-shifting digital holographic data compression

    No full text
    Modern holography for 3D imaging makes it possible to reconstruct all the parallaxes needed for truly immersive visualisation. Nevertheless, it produces huge amounts of data, which raises transmission and storage requirements. To gain wider popularity and acceptance, digital holography demands efficient coding schemes that provide significant data compression at low computational cost. Another issue to be tackled when designing holographic coding algorithms is interoperability with commonly used formats. In light of this, the upcoming JPEG Pleno standard aims to develop a framework for the representation and exchange of new imaging modalities, such as holographic imaging, while maintaining backward compatibility with legacy JPEG decoders. This paper summarises early work on the lossy compression of computer-generated holograms and analyses the efficiency of additional methods that may exhibit satisfactory coding performance while preserving backward compatibility with legacy JPEG decoders. To validate our findings, the results of our tests are presented and interpreted. Finally, we outline emerging trends for future research.
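
    One simple JPEG-backward-compatible strategy in this spirit is to split the complex hologram field into real and imaginary planes and store each as an ordinary 8-bit JPEG that any legacy decoder can open. The linear scaling and quality setting below are assumptions for illustration, not the coding methods evaluated in the paper.

```python
# Hedged sketch: lossy coding of a complex hologram as two legacy JPEGs.
# File names, the min-max scaling and quality=50 are assumed choices.
import numpy as np
from PIL import Image

def encode(field, quality=50):
    ranges = {}
    for name, plane in (("real", field.real), ("imag", field.imag)):
        lo, hi = plane.min(), plane.max()
        u8 = np.round(255 * (plane - lo) / (hi - lo + 1e-12)).astype(np.uint8)
        Image.fromarray(u8).save(f"holo_{name}.jpg", quality=quality)
        ranges[name] = (lo, hi)                 # keep ranges as side information
    return ranges

def decode(ranges):
    planes = []
    for name, (lo, hi) in ranges.items():
        u8 = np.asarray(Image.open(f"holo_{name}.jpg"), dtype=np.float64)
        planes.append(u8 / 255 * (hi - lo) + lo)
    return planes[0] + 1j * planes[1]

field = np.exp(1j * 2 * np.pi * np.random.rand(256, 256))  # toy hologram
reconstructed = decode(encode(field))
```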

    Kinematic Reeb Graph Extraction Based on Heat Diffusion

    No full text
    This paper presents a new approach to Reeb graph extraction adapted to dynamic 3D triangular meshes. In particular, we propose a new continuous scalar function for Reeb graph construction, based on heat diffusion properties. Restricting the heat kernel to the temporal domain makes the scalar function intrinsic and stable against perturbations. Because the heat kernel associated with each vertex carries neighbourhood information, the proposed Reeb graph extraction can be extremely useful as a local shape descriptor for non-rigid shape retrieval. Experiments show that the proposed structural analysis technique achieves high accuracy and stability under topology changes and various perturbations through time.
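
    In a similar heat-diffusion spirit, a per-vertex scalar function can be sketched with the heat kernel signature hks(x, t) = Σ_i exp(−λ_i t) φ_i(x)², computed from the eigenpairs of a mesh Laplacian. Using an unweighted graph Laplacian and a single, fixed time scale t are simplifications; the paper's construction additionally restricts the kernel to the temporal domain of the dynamic mesh.

```python
# Hedged sketch of a heat-kernel-based scalar function on a mesh graph.
# The unweighted Laplacian, t=1.0 and k=30 eigenpairs are assumed choices.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def heat_scalar_function(n_vertices, edges, t=1.0, k=30):
    """edges: iterable of (i, j) vertex pairs; returns one value per vertex."""
    i, j = np.array(edges).T
    adj = coo_matrix((np.ones(len(i)), (i, j)), shape=(n_vertices, n_vertices))
    L = laplacian((adj + adj.T).tocsr()).tocsc()         # symmetrised graph Laplacian
    # smallest eigenpairs via shift-invert just below zero (L is PSD)
    vals, vecs = eigsh(L, k=min(k, n_vertices - 1), sigma=-1e-3, which="LM")
    return (np.exp(-vals * t) * vecs ** 2).sum(axis=1)   # hks(x, t) per vertex

ring = [(v, (v + 1) % 12) for v in range(12)]            # toy cyclic mesh graph
print(heat_scalar_function(12, ring, k=5))
```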

    Query-by-example HDR image retrieval based on CNN

    No full text
    With the expansion of High Dynamic Range (HDR) imaging applications into many aspects of daily life, an efficient retrieval system tailored to this type of data has become a pressing challenge. In this paper, the reliability of the Convolutional Neural Network (CNN) descriptor and its use for HDR image retrieval are studied. The main idea is to explore the use of a CNN to compute an HDR image descriptor. Specifically, a Perceptually Uniform (PU) encoding is first applied to the HDR content to map luminance values onto a perceptually uniform scale. Afterwards, CNN features, taken from Fully Connected (FC) layer activations, are extracted and classified with the Support Vector Machine (SVM) algorithm. Experimental evaluation demonstrates that the CNN descriptor, using the VGG19 network, achieves satisfactory results for describing HDR images on publicly available datasets such as PascalVoc2007, Cifar-10 and Wang. The experiments also show that features extracted after PU processing are more descriptive than those extracted directly from HDR content. Finally, we show the superior performance of the proposed method against a recent state-of-the-art technique.
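
    The pipeline described above can be sketched as: a perceptually-uniform-style luminance mapping, VGG19 fully connected activations as the descriptor, and an SVM on top. The log curve below merely stands in for the actual PU encoding, and taking the FC7 activations is an assumption.

```python
# Hedged sketch of the PU -> CNN features -> SVM retrieval pipeline.
# The log1p mapping stands in for the PU curve; the layer choice is assumed.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

def pu_like(hdr):
    """Map linear HDR values onto a roughly perceptually uniform [0, 1] scale."""
    x = np.log1p(hdr)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).eval()
features = torch.nn.Sequential(                       # VGG19 up to FC7 activations
    vgg.features, vgg.avgpool, torch.nn.Flatten(),
    *list(vgg.classifier.children())[:5])
norm = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

def describe(hdr_rgb):
    """hdr_rgb: float HxWx3 array of linear HDR values -> 4096-D descriptor."""
    img = torch.from_numpy(pu_like(hdr_rgb)).float().permute(2, 0, 1)
    img = norm(transforms.functional.resize(img, [224, 224]))
    with torch.no_grad():
        return features(img.unsqueeze(0)).numpy().ravel()

# classification / retrieval over the descriptors, e.g.:
# clf = SVC(kernel="linear").fit(train_descriptors, train_labels)
```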

    Improving the performance of JPEG and JPEG2000 compression systems

    No full text
    POITIERS-BU Sciences (861942102) / Sudoc, France

    Wavelet Domain Watermark Embedding Strategy using TTCQ Quantization

    No full text
    Invisible digital watermarks have been proposed as a method for discouraging illicit copying and distribution of copyrighted material. One of the problems in image watermarking is deciding how to hide as many bits of information as possible in an image while ensuring that the information can be correctly retrieved at the detection stage, even after various attacks. Several approaches based on the Discrete Wavelet Transform (DWT) have been proposed to address the image watermarking problem. The advantage of the DWT over the DCT is that it allows localised watermarking of the image. The central contribution of this paper is a watermarking algorithm, resilient to attacks such as lossy compression, that exploits turbo trellis-coded quantization (turbo TCQ) in the wavelet domain. Our results indicate that the proposed approach performs well against lossy wavelet-based compression attacks such as JPEG2000 and SPIHT. Key words: wavelet transform, watermark embedding, TTCQ quantization, image compression.
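
    The embedding strategy can be sketched with plain quantisation index modulation (QIM) standing in for the paper's turbo TCQ: each watermark bit selects one of two interleaved quantisation lattices for a wavelet detail coefficient, and detection checks which lattice the coefficient lies nearer to.

```python
# Hedged sketch of quantisation-based embedding in the wavelet domain;
# plain QIM replaces turbo TCQ, and step=8.0 / the HL subband are assumed.
import numpy as np
import pywt

def embed(img, bits, step=8.0):
    LL, (LH, HL, HH) = pywt.dwt2(img.astype(np.float64), "haar")
    flat = HL.ravel().copy()
    for k, b in enumerate(bits):                   # one coefficient per bit
        offset = 0.0 if b == 0 else step / 2       # two interleaved lattices
        flat[k] = np.round((flat[k] - offset) / step) * step + offset
    return pywt.idwt2((LL, (LH, flat.reshape(HL.shape), HH)), "haar")

def extract(img, n_bits, step=8.0):
    _, (_, HL, _) = pywt.dwt2(img.astype(np.float64), "haar")
    flat = HL.ravel()[:n_bits]
    return [int(abs(c - np.round(c / step) * step) > step / 4) for c in flat]

img = np.random.rand(64, 64) * 255                 # toy host image
marked = embed(img, [1, 0, 1, 1])
print(extract(marked, 4))                          # -> [1, 0, 1, 1]
```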

    Fast partitioning depth decision based on skipping block sizes in HEVC

    No full text
    High Efficiency Video Coding (HEVC) is the latest video coding standard, developed under the joint collaboration of ITU-T VCEG and ISO/IEC MPEG as JCT-VC (Joint Collaborative Team on Video Coding) [1-2]. In February 2013, this standard was issued by ITU as H.265 and by ISO/IEC as MPEG-H Part 2. The number of possible coding block sizes (block modes), and therefore the encoding complexity, increased compared with its predecessor H.264/MPEG-4 AVC: for each 64x64-pixel coding unit (CU), there are 1361 possible CU partitions, of which very few are selected during encoding. In this paper, we avoid evaluating the CU partitions that are most likely to remain unselected in inter coding. Using the TZ motion estimation algorithm, our fast partitioning depth decision algorithm saves 17% of the complexity, with negligible loss in PSNR and bit rate, compared with the algorithm implemented in the HEVC test model (HM) 10.0 encoder.
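
    The early-termination idea can be sketched as a recursive partition search that prunes the recursion into four sub-CUs whenever the unsplit cost at the current depth is already low. The toy cost function and skip threshold are illustrative assumptions, not HM 10.0 internals.

```python
# Hedged sketch of fast CU depth decision with early pruning of sub-sizes.
# rd_cost and skip_thresh are toy stand-ins for HM's rate-distortion cost.
import numpy as np

def rd_cost(cu):
    """Toy cost of coding cu unsplit: residual energy after mean prediction."""
    return float(((cu - cu.mean()) ** 2).sum())

def best_partition(cu, depth=0, max_depth=3, skip_thresh=1000.0):
    cost_here = rd_cost(cu)
    if depth == max_depth or cost_here < skip_thresh:
        return cost_here, [(depth, cu.shape)]      # skip the smaller block sizes
    half = cu.shape[0] // 2
    cost_split, parts = 0.0, []
    for sub in (cu[:half, :half], cu[:half, half:],
                cu[half:, :half], cu[half:, half:]):
        c, p = best_partition(sub, depth + 1, max_depth, skip_thresh)
        cost_split += c
        parts += p
    if cost_here <= cost_split:
        return cost_here, [(depth, cu.shape)]      # unsplit CU wins
    return cost_split, parts

ctu = np.random.rand(64, 64) * 255                 # toy 64x64 coding tree unit
cost, parts = best_partition(ctu)
print(len(parts), "leaf CUs, cost", round(cost, 1))
```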