
    Fast image decompression for telebrowsing of images

    Progressive image transmission (PIT) is often used to reduce the transmission time of an image telebrowsing system. A side effect of PIT is increased computational complexity at the viewer's site, an effect that is more serious in transform-domain techniques than in others. Recent attempts to reduce this side effect have proved futile, as they create another one: a discontinuous and unpleasant image build-up. Based on the practical assumption that image blocks to be inverse transformed are generally sparse, this paper presents a method that minimizes both side effects simultaneously.
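
    The computational saving exploited here comes from the linearity of the inverse transform: a sparse block can be rebuilt by summing only the basis images of its non-zero coefficients. A minimal Python sketch of that generic idea (not the paper's actual algorithm), assuming SciPy's DCT routines:

```python
import numpy as np
from scipy.fftpack import idct

def dct_basis_images(n=8):
    """Precompute the n*n inverse-DCT basis images once."""
    basis = np.zeros((n, n, n, n))
    for u in range(n):
        for v in range(n):
            e = np.zeros((n, n))
            e[u, v] = 1.0
            basis[u, v] = idct(idct(e, axis=0, norm='ortho'),
                               axis=1, norm='ortho')
    return basis

def sparse_block_idct(coeffs, basis, threshold=1e-12):
    """Reconstruct a block by summing only the basis images of its
    non-zero coefficients: roughly n*n operations per non-zero,
    cheap when the block is sparse."""
    block = np.zeros_like(coeffs, dtype=float)
    for u, v in zip(*np.nonzero(np.abs(coeffs) > threshold)):
        block += coeffs[u, v] * basis[u, v]
    return block
```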

    Static 3D Triangle Mesh Compression Overview

    3D triangle meshes are widely used to model discrete surfaces, and are almost always represented with two tables: one for geometry and another for connectivity. While the raw size of a triangle mesh is around 200 bits per vertex, by cleverly (and separately) coding these two distinct kinds of information it is possible to achieve compression ratios of 15:1 or more. Different techniques must be used depending on whether single-rate or progressive bitstreams are sought and, in the latter case, on whether or not hierarchically nested meshes are desirable during reconstruction.
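
    As a concrete picture of the two-table representation mentioned above, here is a toy Python sketch (array names are illustrative, not from the paper) of a tetrahedron stored as a geometry table and a connectivity table, with a back-of-the-envelope estimate of the raw storage cost:

```python
import numpy as np

# Geometry table: one xyz coordinate triple per vertex (32-bit floats).
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]], dtype=np.float32)

# Connectivity table: three vertex indices per triangle.
triangles = np.array([[0, 2, 1],
                      [0, 1, 3],
                      [1, 2, 3],
                      [0, 3, 2]], dtype=np.int32)

def raw_bits_per_vertex(n_vertices, coord_bits=32):
    """Back-of-the-envelope raw cost: 3 coordinates per vertex plus,
    for a typical mesh with about twice as many triangles as vertices,
    2 * 3 indices of log2(n_vertices) bits each."""
    index_bits = int(np.ceil(np.log2(n_vertices)))
    return 3 * coord_bits + 2 * 3 * index_bits

print(raw_bits_per_vertex(10**6))   # ~216 bits/vertex, near the ~200 quoted
```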

    Role of homeostasis in learning sparse representations

    Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to posit that neural activity has to represent sensory data efficiently with respect to the statistics of natural scenes. Furthermore, it is believed that such efficient coding is achieved using competition across neurons so as to generate a sparse representation, that is, one in which a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
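
    To make the role of homeostasis concrete, here is a deliberately simplified Python sketch (not the authors' algorithm, which derives a more principled tuning of the competition): a multiplicative gain biases atom selection in matching pursuit so that all atoms end up being selected about equally often.

```python
import numpy as np

def homeostatic_sparse_coding(patches, n_atoms=32, n_active=5,
                              eta=0.02, eta_h=0.01, n_epochs=10, seed=0):
    """Toy sparse coding by matching pursuit with a Hebbian-like dictionary
    update and a multiplicative homeostatic gain that equalises how often
    each atom is selected.  Illustrative only."""
    rng = np.random.default_rng(seed)
    dim = patches.shape[1]
    D = rng.standard_normal((n_atoms, dim))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    gain = np.ones(n_atoms)                     # homeostatic gains
    usage = np.full(n_atoms, 1.0 / n_atoms)     # running selection frequency

    for _ in range(n_epochs):
        for x in patches:
            residual = x.astype(float).copy()
            for _ in range(n_active):
                k = np.argmax(gain * np.abs(D @ residual))  # biased choice
                a = D[k] @ residual                          # unbiased coeff.
                residual -= a * D[k]
                D[k] += eta * a * residual                   # Hebbian update
                D[k] /= np.linalg.norm(D[k])
                usage *= (1.0 - eta_h)
                usage[k] += eta_h
        # over-used atoms get their gain lowered, under-used ones raised
        gain = np.exp(-(usage - usage.mean()) / usage.mean())
    return D, gain
```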

    Implementation issues in source coding

    An edge-preserving image coding scheme that can operate in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data, and can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
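
    Since the scheme above is described as a modification of DPCM, a minimal Python sketch of plain DPCM (a hypothetical helper, not the report's algorithm) may help fix ideas: with step=0 the prediction errors are exact and coding is lossless, with step>0 it becomes lossy.

```python
import numpy as np

def dpcm_row(row, step=0):
    """Toy 1-D DPCM along one image row: predict each sample from the
    previously reconstructed one and code the prediction error.
    step == 0 -> lossless; step > 0 -> quantised (lossy) residuals."""
    residuals = np.empty(len(row), dtype=np.int32)
    recon = np.empty(len(row), dtype=np.int32)
    prev = 0
    for i, x in enumerate(np.asarray(row, dtype=np.int32)):
        err = int(x) - prev
        if step > 0:
            err = int(round(err / step)) * step   # quantise the residual
        residuals[i] = err
        recon[i] = prev + err                     # what the decoder rebuilds
        prev = recon[i]
    return residuals, recon
```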

    Streaming an image through the eye: The retina seen as a dithered scalable image coder

    We propose the design of an original scalable image coder/decoder inspired by the mammalian retina. Our coder accounts for the time-dependent and also nondeterministic behavior of the actual retina. The present work brings two main contributions: (i) we design a deterministic image coder mimicking most of the retinal processing stages, and then (ii) we introduce retinal noise into the coding process, modeled here as a dither signal, to gain interesting perceptual features. Regarding our first contribution, our main source of inspiration is the biologically plausible model of the retina called Virtual Retina. The main novelty of this coder is to show that the time-dependent behavior of the retina cells could ensure, in an implicit way, scalability and bit allocation. Regarding our second contribution, we reconsider the inner layers of the retina and offer a possible interpretation of the non-determinism that neurophysiologists observe in their output. To this end, we model the retinal noise that occurs in these layers as a dither signal. The dithering process that we propose adds several interesting features to our image coder: the dither noise whitens the reconstruction error and decorrelates it from the input stimuli, and integrating it into our coder allows a faster recognition of the fine details of the image during decoding. The goal of the present paper is twofold. First, we aim to mimic the retina as closely as possible in the design of a novel image coder while maintaining encouraging performance. Second, we bring new insight concerning the non-deterministic behavior of the retina.
    Comment: arXiv admin note: substantial text overlap with arXiv:1104.155
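
    The whitening and decorrelation property attributed to the dither above is the classical effect of subtractive dithering; a small Python sketch of that generic mechanism (not the retina-inspired coder itself):

```python
import numpy as np

def dithered_quantize(x, step=0.1, seed=0):
    """Toy subtractive dithering: add a uniform dither before uniform
    quantisation and subtract it again at the decoder.  The resulting
    reconstruction error is white and uncorrelated with the input."""
    rng = np.random.default_rng(seed)
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(x))
    q = step * np.round((x + dither) / step)   # transmitted levels
    return q - dither                          # decoder removes the dither

x = np.sin(np.linspace(0, 8 * np.pi, 4096))
err = dithered_quantize(x) - x
print(np.corrcoef(err, x)[0, 1])               # close to zero
```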

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews state-of-the-art DVC architectures with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.
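
    To illustrate the Wyner–Ziv idea underlying DVC, here is a toy Python sketch (illustrative only, not any of the reviewed architectures) in which the encoder transmits only a coset index and the decoder resolves it using correlated side information:

```python
import numpy as np

def wyner_ziv_toy(x, y, n_bins=4, step=1.0):
    """Toy coset-based Wyner-Ziv coding: the encoder sends only the coset
    index of each quantised sample; the decoder picks, within that coset,
    the codeword closest to its correlated side information y."""
    q = np.round(x / step).astype(int)
    coset = q % n_bins                               # all that is transmitted
    base = np.round(y / step).astype(int)
    offsets = np.arange(-n_bins, n_bins + 1)
    candidates = base[:, None] + offsets[None, :]
    valid = (candidates % n_bins) == coset[:, None]
    dist = np.where(valid, np.abs(candidates * step - y[:, None]), np.inf)
    best = candidates[np.arange(len(x)), np.argmin(dist, axis=1)]
    return best * step

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = x + rng.normal(scale=0.1, size=1000)             # side information
print(np.max(np.abs(wyner_ziv_toy(x, y) - np.round(x))))  # 0.0 at this noise
```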

    Wavelet Based Image Coding Schemes: A Recent Survey

    A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity due to their overlapping nature, which reduces the blocking artifacts common in JPEG compression, and their multiresolution character, which leads to superior energy compaction and high-quality reconstructed images. This paper provides a detailed survey of some of the popular wavelet coding techniques, such as Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, like the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), Stack-Run (SR) coding and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
    Comment: 18 pages, 7 figures, journal
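
    The embedded coders surveyed above share a common bit-plane core: coefficients are tested against a threshold that is halved at every pass, so the bitstream can be truncated at any point. A stripped-down Python sketch of that core (assuming the PyWavelets package, and omitting the zerotree/set-partitioning structure that makes EZW and SPIHT efficient):

```python
import numpy as np
import pywt

def significance_passes(image, wavelet='haar', level=3, n_passes=6):
    """Toy embedded bit-plane coding of wavelet coefficients: the threshold
    is halved each pass and newly significant coefficients are recorded,
    yielding a bitstream that can be truncated after any pass."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)
    T = 2.0 ** np.floor(np.log2(np.abs(flat).max()))
    significant = np.zeros(flat.shape, dtype=bool)
    stream = []
    for _ in range(n_passes):
        newly = (np.abs(flat) >= T) & ~significant
        stream.append((T, np.flatnonzero(newly), np.sign(flat[newly])))
        significant |= newly
        T /= 2.0                          # move to the next, finer bit-plane
    return stream
```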