38,648 research outputs found

    Stack-run adaptive wavelet image compression

    We report on the development of an adaptive wavelet image coder based on a stack-run representation of the quantized coefficients. The coder selects an optimal wavelet packet basis for the given image and encodes the quantization indices of significant coefficients, together with the zero runs between them, using a 4-ary arithmetic coder. Because the coder exploits redundancies within individual subbands, its addressing complexity is much lower than that of wavelet zerotree coding algorithms. Experimental results show coding gains of up to 1.4 dB over the benchmark wavelet coding algorithm.
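    To make the stack-run idea concrete, the sketch below (not the paper's implementation: the symbol conventions are simplified and the arithmetic-coding stage is omitted) shows how a scan of quantized subband coefficients can be split into zero-run/value pairs and serialized over a four-symbol alphabet of the kind a 4-ary arithmetic coder would model.

```python
# Illustrative sketch only: split a scan of quantized coefficients into
# (zero-run, significant value) pairs and serialize them over a 4-symbol
# alphabet {'0', '1', '+', '-'}. The published stack-run coder uses a more
# careful symbol assignment that keeps the stream uniquely decodable, and
# feeds the symbols to an adaptive 4-ary arithmetic coder (omitted here).

def stack_run_pairs(coeffs):
    """Turn a coefficient scan into (zero_run, value) pairs."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    if run:
        pairs.append((run, None))       # trailing zeros with no value after them
    return pairs

def to_symbols(pairs):
    """Serialize pairs: run lengths as binary digits, values as sign + magnitude bits."""
    out = []
    for run, val in pairs:
        out.extend(format(run, "b"))            # zero-run length in binary
        if val is not None:
            out.append("+" if val > 0 else "-") # sign symbol
            out.extend(format(abs(val), "b"))   # magnitude in binary
    return out

if __name__ == "__main__":
    scan = [0, 0, 5, 0, 0, 0, -2, 0, 7, 0, 0]
    pairs = stack_run_pairs(scan)
    print(pairs)                                 # [(2, 5), (3, -2), (1, 7), (2, None)]
    print("".join(to_symbols(pairs)))
```

    A real stack-run coder assigns the four symbols so the stream stays decodable and lets the arithmetic coder adapt its statistics per subband, which is where the coding gain comes from.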

    Wavelet Based Image Coding Schemes: A Recent Survey

    A variety of new and powerful algorithms have been developed for image compression over the years. Among them, wavelet-based image compression schemes have gained much popularity due to their overlapping nature, which reduces the blocking artifacts common in JPEG compression, and their multiresolution character, which leads to superior energy compaction and high-quality reconstructed images. This paper provides a detailed survey of some of the popular wavelet coding techniques: Embedded Zerotree Wavelet (EZW) coding, Set Partitioning in Hierarchical Trees (SPIHT) coding, the Set Partitioned Embedded Block (SPECK) coder, and the Embedded Block Coding with Optimized Truncation (EBCOT) algorithm. Other wavelet-based coding techniques, such as the Wavelet Difference Reduction (WDR) and Adaptive Scanned Wavelet Difference Reduction (ASWDR) algorithms, the Space Frequency Quantization (SFQ) algorithm, the Embedded Predictive Wavelet Image Coder (EPWIC), Compression with Reversible Embedded Wavelet (CREW), Stack-Run (SR) coding, and the recent Geometric Wavelet (GW) coding, are also discussed. Based on the review, recommendations and discussions are presented for algorithm development and implementation.
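    The zerotree-based coders surveyed here (EZW, SPIHT) all rest on the parent-child relation between wavelet coefficients at successive scales. The sketch below is a simplification under the usual Mallat subband layout (the special parenting of the coarsest LL band is omitted); it shows that relation and the significance test that defines a zerotree root.

```python
# A minimal sketch of the parent-child relation exploited by EZW/SPIHT,
# assuming the usual Mallat layout of a square coefficient array. The
# coarsest LL band needs special-case parenting that is omitted here.

import numpy as np

def children(r, c, rows, cols):
    """Four children of coefficient (r, c); none at the finest scale."""
    if r >= rows // 2 or c >= cols // 2:
        return []                                  # finest-scale subbands
    return [(2 * r, 2 * c), (2 * r, 2 * c + 1),
            (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]

def is_zerotree_root(coeffs, r, c, threshold):
    """True if (r, c) and every descendant is insignificant w.r.t. threshold."""
    rows, cols = coeffs.shape
    if abs(coeffs[r, c]) >= threshold:
        return False
    return all(is_zerotree_root(coeffs, cr, cc, threshold)
               for cr, cc in children(r, c, rows, cols))

if __name__ == "__main__":
    coeffs = np.zeros((8, 8))
    coeffs[1, 1] = 30.0                            # one significant coefficient
    print(is_zerotree_root(coeffs, 1, 1, 16))      # False: the root itself is significant
    print(is_zerotree_root(coeffs, 1, 2, 16))      # True: it and its descendants are zero
```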

    A Novel Bio-Inspired Static Image Compression Scheme for Noisy Data Transmission over Low-Bandwidth Channels

    We present a novel bio-inspired static image compression scheme. Our model combines a simplified spiking retina model with well-known data compression techniques. The fundamental hypothesis behind this work is that the mammalian retina generates an efficient neural code for the visual flux. The main novelty of this work is to show how this neural code can be exploited for still image compression. Our model has three main stages. The first stage is the bio-inspired retina model proposed by Thorpe et al. [1, 2], which transforms an image into a wave of spikes using so-called rank order coding. In the second stage, we show how this wave of spikes can be expressed over a 4-ary dictionary alphabet through a stack-run coder. The third stage applies a first-order arithmetic coder to the stack-run coded signal. We compare our results to the JPEG standards and show that, under strong bit-rate restrictions and when the data is highly contaminated with noise, our model achieves comparable performance at lower computational cost. In addition, our model offers scalability for monitoring the data transmission flow. The subject matter presented highlights a variety of important issues in the design of novel bio-inspired compression schemes and suggests many potential avenues for future research.
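    The first stage is easiest to picture through the rank order coding principle: the retina stage keeps only the order in which model neurons fire, strongest response first. The sketch below illustrates that principle with a difference-of-Gaussians filter bank standing in for the retina model; the filters, scales, and SciPy dependency are assumptions for illustration, not the model of Thorpe et al.

```python
# A minimal sketch of the rank-order-coding idea behind the retina stage:
# analog filter responses are discarded in favour of the ORDER in which
# units fire, strongest response first. The difference-of-Gaussians bank
# below is a placeholder for the actual retina model.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_responses(image, sigmas=(1, 2, 4, 8)):
    """Difference-of-Gaussians responses at a few scales (one map per scale)."""
    return np.stack([gaussian_filter(image, s) - gaussian_filter(image, 2 * s)
                     for s in sigmas])

def spike_wave(image):
    """Return unit indices (scale, row, col) ordered by decreasing |response|."""
    resp = dog_responses(image.astype(float))
    order = np.argsort(-np.abs(resp), axis=None)     # strongest responses first
    return np.unravel_index(order, resp.shape)       # the 'wave of spikes'

if __name__ == "__main__":
    img = np.random.rand(64, 64)
    scales, rows, cols = spike_wave(img)
    print("first five spikes:", list(zip(scales[:5], rows[:5], cols[:5])))
```

    In the pipeline described above, an ordered index sequence of this kind is what the second stage then expresses over a 4-ary alphabet with the stack-run coder.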

    Non-expansive symmetrically extended wavelet transform for arbitrarily shaped video object plane.

    By Lai Chun Kit. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 68-70). Abstract also in Chinese.
    Contents: Acknowledgments (p.iv); Abstract (p.v)
    Chapter 1: Traditional Image and Video Coding (p.1) -- Introduction; Fundamental Principle of Compression; Entropy: Value of Information; Performance Measure; Image Coding Overview (Digital Image Formation; Needs of Image Compression; Classification of Image Compression; Transform Coding); Video Coding Overview
    Chapter 2: Discrete Wavelet Transform (DWT) and Subband Coding (p.11) -- Subband Coding (Introduction; Quadrature Mirror Filters (QMFs); Subband Coding for Images); Discrete Wavelet Transform (Introduction; Wavelet Theory; Comparison Between Fourier Transform and Wavelet Transform)
    Chapter 3: Non-expansive Symmetric Extension (p.19) -- Introduction; Types of Extension Scheme; Non-expansive Symmetric Extension and Symmetric Sub-sampling
    Chapter 4: Content-based Video Coding in the Proposed MPEG-4 Standard (p.24) -- Introduction; Motivation of the New MPEG-4 Standard (Changes in the Production of Audio-visual Material; Changes in the Consumption of Multimedia Information; Reuse of Audio-visual Material; Changes in Mode of Implementation); Objective of the MPEG-4 Standard; Technical Description of MPEG-4 (Overview of the MPEG-4 Coding System; Shape Coding; Shape Adaptive Texture Coding; Motion Estimation and Compensation (ME/MC))
    Chapter 5: Shape Adaptive Wavelet Transformation Coding Scheme (SAWT) (p.36) -- Shape Adaptive Wavelet Transformation (Introduction; Description of Transformation Scheme); Quantization; Entropy Coding (Introduction; Stack-Run Algorithm; ZeroTree Entropy (ZTE) Coding Algorithm); Binary Shape Coding
    Chapter 6: Simulation (p.51) -- Introduction; SSAWT-Stack Run; SSAWT-ZTR; Simulation Results (SSAWT-Stack; SSAWT-ZTE; Comparison Result: Cjpeg and Wave03); Shape Coding Result; Analysis
    Chapter 7: Conclusion (p.64)
    Appendix A: Image Segmentation (p.65)
    References (p.68)
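    The central tool of Chapter 3, non-expansive symmetric extension, can be demonstrated in a few lines: the finite segment is mirrored at its borders before filtering, so one level of 2-channel analysis returns exactly as many coefficients as input samples. The sketch below shows only that size-preserving property (half-sample mirroring, placeholder Haar-like filters, no attention to perfect reconstruction), not the thesis's shape-adaptive scheme.

```python
# A minimal sketch of symmetric extension for subband filtering of a finite
# segment: the signal is mirrored at its boundaries before convolution so
# that no extra coefficients are produced (non-expansive). Whether whole-
# sample or half-sample mirroring is used depends on the filter symmetry;
# this sketch shows half-sample ("reflect") extension only.

import numpy as np

def symmetric_extend(x, pad):
    """Half-sample symmetric extension: [c b a | a b c ... x y z | z y x]."""
    left = x[:pad][::-1]
    right = x[-pad:][::-1]
    return np.concatenate([left, x, right])

def analysis_one_level(x, h0, h1):
    """One level of 2-channel analysis with symmetric extension and 2:1 decimation."""
    pad = max(len(h0), len(h1)) // 2
    xe = symmetric_extend(np.asarray(x, float), pad)
    low = np.convolve(xe, h0, mode="same")[pad:pad + len(x)][::2]
    high = np.convolve(xe, h1, mode="same")[pad:pad + len(x)][1::2]
    return low, high       # len(low) + len(high) == len(x): non-expansive

if __name__ == "__main__":
    # Example with an unnormalised Haar-like pair; the filters are placeholders.
    x = np.arange(8, dtype=float)
    low, high = analysis_one_level(x, h0=[0.5, 0.5], h1=[0.5, -0.5])
    print(len(low) + len(high), len(x))   # sizes match
```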

    Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks

    We propose a method for lossy image compression based on recurrent, convolutional neural networks that outperforms BPG (4:2:0), WebP, JPEG2000, and JPEG as measured by MS-SSIM. We introduce three improvements over previous research that lead to this state-of-the-art result. First, we show that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to several metrics. Second, we modify the recurrent architecture to improve spatial diffusion, which allows the network to more effectively capture and propagate image information through its hidden state. Finally, in addition to lossless entropy coding, we use a spatially adaptive bit allocation algorithm to spend the limited bit budget more efficiently on visually complex image regions. We evaluate our method on the Kodak and Tecnick image sets and compare it against standard codecs as well as recently published methods based on deep neural networks.
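    The first of the three improvements, a pixel-wise loss weighted by SSIM, can be sketched outside of any training framework: compute a local SSIM map between the original and the current reconstruction, and up-weight the per-pixel error where that map is low. The sketch below is an assumed, NumPy/SciPy-only illustration of that weighting idea, not the authors' training code; the window size and constants follow common SSIM defaults.

```python
# A sketch (not the authors' code) of weighting a pixel-wise reconstruction
# loss by local structural similarity: regions reconstructed poorly in SSIM
# terms receive a larger weight.

import numpy as np
from scipy.ndimage import uniform_filter

def local_ssim(x, y, win=8, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Per-pixel SSIM map over win x win windows (grayscale, 0..255 range assumed)."""
    mx, my = uniform_filter(x, win), uniform_filter(y, win)
    vx = uniform_filter(x * x, win) - mx * mx
    vy = uniform_filter(y * y, win) - my * my
    cxy = uniform_filter(x * y, win) - mx * my
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

def ssim_weighted_l1(original, reconstruction):
    """Pixel-wise L1 loss, up-weighted where local SSIM is low."""
    s = np.clip(local_ssim(original, reconstruction), 0.0, 1.0)
    weight = 1.0 - s                       # poor-SSIM regions weigh more
    return float(np.mean(weight * np.abs(original - reconstruction)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 255, (64, 64))
    y = x + rng.normal(0, 10, x.shape)     # a noisy stand-in for a reconstruction
    print(ssim_weighted_l1(x, y))
```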

    ERP implementation for an administrative agency as a corporate frontend and an e-commerce smartphone app

    This document contains the descriptions, arguments, and demonstrations of the research, analysis, reasoning, design, and tasks performed to meet the requirement of technologically evolving a managing agency so that, through a solution requiring a reduced investment, it becomes possible to provide a business management tool with e-commerce, together with a mobile application that allows access to and consultation of that tool. The first part of the document describes the scenario in order to contextualize the project and introduces ERP (Enterprise Resource Planning). In the second part, a thorough survey of ERP market products is carried out, identifying the strengths and weaknesses of each product and concluding with the choice of the product best suited to the scenario proposed in the project. The third part describes the installation process of the selected product, based on Docker, as well as the configurations and customizations made to the selected ERP. The installation and configuration of the additional modules needed to reach the agreed scope of the project are also described. The fourth part of the thesis describes the process of creating an iOS and Android app that connects to the selected ERP database. The process begins with the design of the app. Once it is designed, the study and documentation of technologies is explained, leading to a technology stack that allows building a robust, contemporary application without licensing costs. After the technologies are chosen, the dependencies and runtime environments that must be installed before coding can start are described. The document then explains how the code of the app was structured and developed, followed by the compilation and verification mechanisms, and finally shows the result of the developed app once distributed. A closing chapter of conclusions analyzes the difficulties encountered during the project and the achievements, reflecting on what has been learned during the development of this project.

    Abnormal Event Detection in Videos using Spatiotemporal Autoencoder

    We present an efficient method for detecting anomalies in videos. Recent applications of convolutional neural networks have shown the promise of convolutional layers for object detection and recognition, especially in images. However, convolutional neural networks are supervised and require labels as learning signals. We propose a spatiotemporal architecture for anomaly detection in videos, including crowded scenes. Our architecture has two main components: one for spatial feature representation and one for learning the temporal evolution of the spatial features. Experimental results on the Avenue, Subway, and UCSD benchmarks confirm that the detection accuracy of our method is comparable to state-of-the-art methods while running at up to 140 fps.
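    As a rough picture of what such a two-component architecture looks like, the sketch below (PyTorch assumed; the layer sizes, ConvLSTM cell, and reconstruction-error score are illustrative choices, not the authors' exact network) pairs a convolutional frame encoder with a convolutional LSTM over time and scores frames by reconstruction error.

```python
# A compact sketch of a spatiotemporal autoencoder of the kind described:
# spatial convolutions shrink each frame, a convolutional LSTM models how
# the spatial features evolve over time, and transposed convolutions
# reconstruct the clip. Frames whose reconstruction error is high are
# treated as anomalous. Frame size is assumed divisible by 4.

import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i, f, o, g = i.sigmoid(), f.sigmoid(), o.sigmoid(), g.tanh()
        c = f * c + i * g
        h = o * c.tanh()
        return h, c

class SpatioTemporalAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(                      # spatial features
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU())
        self.temporal = ConvLSTMCell(64, 64)              # temporal evolution
        self.decode = nn.Sequential(                      # reconstruction
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

    def forward(self, clip):                              # clip: (B, T, 1, H, W)
        b, t, _, h, w = clip.shape
        hid = torch.zeros(b, 64, h // 4, w // 4, device=clip.device)
        cell = torch.zeros_like(hid)
        outs = []
        for step in range(t):
            feat = self.encode(clip[:, step])
            hid, cell = self.temporal(feat, (hid, cell))
            outs.append(self.decode(hid))
        return torch.stack(outs, dim=1)

def anomaly_scores(model, clip):
    """Per-frame mean squared reconstruction error: high error = anomalous."""
    with torch.no_grad():
        recon = model(clip)
    return ((recon - clip) ** 2).mean(dim=(2, 3, 4))      # shape (B, T)

if __name__ == "__main__":
    model = SpatioTemporalAE()
    clip = torch.rand(1, 8, 1, 64, 64)                    # one clip of 8 grayscale frames
    print(anomaly_scores(model, clip).shape)              # torch.Size([1, 8])
```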