
    Perceptual Zero-Tree Coding with Efficient Optimization for Embedded Platforms

    This study proposes a block-edge-based perceptual zero-tree coding (PZTC) method, which is implemented with efficient optimization on an embedded platform. PZTC combines two novel compression concepts for coding efficiency and quality: block-edge detection (BED) and a low-complexity, low-memory entropy coder (LLEC). The proposed PZTC was implemented as a fixed-point version and optimized on a DSP-based platform using both the presented platform-independent and platform-dependent optimization technologies. For platform-dependent optimization, this study examines the fixed-point PZTC and analyzes its complexity to optimize PZTC toward an optimal coding efficiency. Furthermore, hardware-based platform-dependent optimizations are presented to reduce the memory size. The performance, in terms of compression quality and efficiency, is validated by experimental results.
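    The zero-tree idea underlying coders such as PZTC exploits the observation that when a wavelet coefficient is insignificant with respect to a threshold, its descendants in the finer subbands tend to be insignificant as well, so an entire coefficient tree can be coded with a single symbol. The sketch below shows that significance test on a quadtree of coefficients; the node layout, function names, and threshold handling are illustrative assumptions, not the PZTC or BED implementation described above.

        #include <stdio.h>
        #include <stdlib.h>

        /* Illustrative quadtree node for wavelet coefficients (not the PZTC data layout). */
        typedef struct Node {
            int coeff;              /* quantized wavelet coefficient         */
            struct Node *child[4];  /* four descendants in the finer subband */
        } Node;

        /* A coefficient tree is a "zero-tree" at threshold T when the root and every
         * descendant are below T in magnitude; it can then be coded with a single
         * symbol instead of one symbol per coefficient. */
        static int is_zerotree(const Node *n, int threshold)
        {
            if (n == NULL)
                return 1;
            if (abs(n->coeff) >= threshold)
                return 0;
            for (int i = 0; i < 4; ++i)
                if (!is_zerotree(n->child[i], threshold))
                    return 0;
            return 1;
        }

        int main(void)
        {
            Node leaf = { 3, { NULL, NULL, NULL, NULL } };
            Node root = { 5, { &leaf, NULL, NULL, NULL } };

            /* At threshold 8 both coefficients are insignificant: one zero-tree symbol suffices. */
            printf("zerotree at T=8: %d\n", is_zerotree(&root, 8));   /* prints 1 */
            printf("zerotree at T=4: %d\n", is_zerotree(&root, 4));   /* prints 0 */
            return 0;
        }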

    Contemporary Affirmation of SPIHT Improvements in Image Coding

    Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Since its introduction in 1996, SPIHT has attracted a great deal of interest among image compression algorithms. SPIHT is considerably simpler and more efficient than many existing compression methods: it is a fully embedded codec, provides good image quality and high PSNR, is well suited to progressive image transmission, combines efficiently with error protection, and can supply information on demand. Nevertheless, it has some drawbacks that must be addressed for it to be used more effectively, and since its development its original model has undergone many modifications. This document presents a survey of improvements to SPIHT in areas such as speed, redundancy, quality, error resilience, complexity, compression ratio, and memory requirements.
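    At the core of SPIHT is a bit-plane significance test: a set of coefficients is significant at bit-plane n if any coefficient magnitude reaches 2^n, and the coder refines coefficients one bit-plane at a time. A minimal sketch of that test and the bit-plane loop follows; it omits the LIS/LIP/LSP set-partitioning bookkeeping that gives SPIHT its efficiency, and the flat array layout is an assumption for illustration.

        #include <stdio.h>
        #include <stdlib.h>

        /* SPIHT-style significance of a coefficient set at bit-plane n:
         * significant iff max |c| over the set is at least 2^n.
         * Only the test and the bit-plane loop are shown, not the full
         * set-partitioning passes of SPIHT. */
        static int significance(const int *set, int count, int n)
        {
            for (int i = 0; i < count; ++i)
                if (abs(set[i]) >= (1 << n))
                    return 1;
            return 0;
        }

        int main(void)
        {
            int coeffs[] = { 34, -7, 2, 0, 18, -1 };
            int count = sizeof coeffs / sizeof coeffs[0];

            /* Start at the most significant bit-plane of the largest coefficient (34 -> n = 5). */
            for (int n = 5; n >= 0; --n)
                printf("bit-plane %d: significant = %d\n", n, significance(coeffs, count, n));
            return 0;
        }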

    Image Processing Using FPGAs

    This book presents a selection of papers representing current research on using field programmable gate arrays (FPGAs) for realising image processing algorithms. These papers are reprints of papers selected for a Special Issue of the Journal of Imaging on image processing using FPGAs. A diverse range of topics is covered, including parallel soft processors, memory management, image filters, segmentation, clustering, image analysis, and image compression. Applications include traffic sign recognition for autonomous driving, cell detection for histopathology, and video compression. Collectively, they represent the current state of the art in image processing using FPGAs.

    A configurable vector processor for accelerating speech coding algorithms

    The growing demand for voice-over-packet (VoIP) services and multimedia-rich applications has made the efficient, real-time implementation of low-bit-rate speech coders on embedded VLSI platforms increasingly important. Such speech coders are designed to substantially reduce bandwidth requirements, thus enabling dense multichannel gateways in a small form factor. This, however, comes at a high computational cost, which mandates the use of very high performance embedded processors. This thesis investigates the acceleration of two major ITU-T speech coding algorithms, namely G.729A and G.723.1, through their efficient implementation on a configurable, extensible vector embedded CPU architecture. New scalar and vector ISAs were introduced, resulting in up to an 80% reduction in the dynamic instruction count of both workloads. These instructions were subsequently encapsulated into a parametric, hybrid SISD (scalar processor)–SIMD (vector) processor. This work presents the research and implementation of the vector datapath of this vector coprocessor, which is tightly coupled to a Sparc-V8 compliant CPU, the optimization and simulation methodologies employed, and the use of Electronic System Level (ESL) techniques to rapidly design SIMD datapaths.
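    Low-bit-rate CELP coders such as G.729A and G.723.1 spend much of their time in correlation and inner-product loops over short subframes, which is why folding several multiply-accumulates into one vector instruction reduces the dynamic instruction count so sharply. The sketch below contrasts a scalar inner product with a 4-wide version written in plain C to mimic what a single SIMD multiply-accumulate instruction would do per iteration; the lane width and instruction semantics are illustrative assumptions, not the ISA developed in the thesis.

        #include <stdio.h>

        #define N 40   /* subframe length in G.729A, used here as an illustrative block size */

        /* Scalar inner product: one multiply-accumulate per iteration. */
        static long dot_scalar(const short *x, const short *y)
        {
            long acc = 0;
            for (int i = 0; i < N; ++i)
                acc += (long)x[i] * y[i];
            return acc;
        }

        /* 4-wide "vector" inner product, emulating in plain C what one SIMD
         * multiply-accumulate instruction would compute per loop iteration. */
        static long dot_vector4(const short *x, const short *y)
        {
            long acc[4] = { 0, 0, 0, 0 };
            for (int i = 0; i < N; i += 4)
                for (int lane = 0; lane < 4; ++lane)
                    acc[lane] += (long)x[i + lane] * y[i + lane];
            return acc[0] + acc[1] + acc[2] + acc[3];
        }

        int main(void)
        {
            short x[N], y[N];
            for (int i = 0; i < N; ++i) { x[i] = (short)(i - 20); y[i] = (short)(2 * i); }
            printf("scalar: %ld  vector: %ld\n", dot_scalar(x, y), dot_vector4(x, y));
            return 0;
        }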

    Multiprocessor DSP Implementation of the JPEG 2000 Codec

    The transition to JPEG2000 from other image formats such as standard JPEG offers improved compression and image quality, yet has not been widely adopted in practice. This is mainly due to the complexity of the JPEG2000 algorithm. Standard JPEG uses the Discrete Cosine Transform (DCT) and Huffman encoding to achieve its compression, whereas JPEG2000 uses the wavelet transform and arithmetic encoding. Due to the wide acceptance of JPEG, there are processors such as Equator Technology's BSP-15 digital signal processor (DSP) that have been designed with features specifically for JPEG applications. For some of the current digital printing applications where JPEG is used, images must be encoded and decoded at rates exceeding 100 pages per minute. A multiprocessor environment consisting of Equator Technology's BSP-15 processors may offer acceptable performance for the JPEG2000 codec. The aim of this work is to design a JPEG2000 codec for the BSP-15 processor and to determine whether this processor is capable of delivering the performance required by high-end digital printers. The features of the BSP-15 that are well suited to the JPEG2000 algorithm are discussed, as well as future improvements that could be incorporated into the architecture. By analyzing the advantages and disadvantages of this processor, the next generation of processors may be able to offer features that allow it to excel in JPEG2000 processing. A multiprocessor DSP implementation of the JPEG2000 codec is the main result of this work. The resulting codec provides more than double the processing throughput of existing JPEG2000 software.
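    The reversible path of JPEG 2000 replaces the DCT with the integer 5/3 lifting wavelet, which is part of what makes a fixed-point DSP implementation attractive. The sketch below applies one level of the 5/3 lifting steps to a short 1-D signal; the boundary handling is simplified (index clamping and reuse of the first detail coefficient rather than the standard's symmetric extension), so it illustrates the idea rather than providing a conformant transform, and it is not the BSP-15 code described above.

        #include <stdio.h>

        #define LEN 8   /* even-length 1-D signal for one decomposition level */

        /* Clamp an index into [0, LEN-1]; the standard uses symmetric extension instead. */
        static int clampi(int i) { return i < 0 ? 0 : (i >= LEN ? LEN - 1 : i); }

        /* Floor division for possibly negative numerators (C's / truncates toward zero). */
        static int fdiv(int a, int b)
        {
            int q = a / b, r = a % b;
            return (r != 0 && (r < 0) != (b < 0)) ? q - 1 : q;
        }

        /* One level of the JPEG 2000 reversible 5/3 lifting transform in 1-D:
         *   high-pass: d[n] = x[2n+1] - floor((x[2n] + x[2n+2]) / 2)
         *   low-pass : s[n] = x[2n]   + floor((d[n-1] + d[n] + 2) / 4)      */
        static void lift53(const int *x, int *s, int *d)
        {
            for (int n = 0; n < LEN / 2; ++n)
                d[n] = x[2 * n + 1] - fdiv(x[2 * n] + x[clampi(2 * n + 2)], 2);
            for (int n = 0; n < LEN / 2; ++n)
                s[n] = x[2 * n] + fdiv(d[n > 0 ? n - 1 : 0] + d[n] + 2, 4);
        }

        int main(void)
        {
            int x[LEN] = { 10, 12, 11, 13, 40, 42, 41, 43 };
            int s[LEN / 2], d[LEN / 2];
            lift53(x, s, d);
            for (int n = 0; n < LEN / 2; ++n)
                printf("s[%d]=%d  d[%d]=%d\n", n, s[n], n, d[n]);
            return 0;
        }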

    New Directions in Subband Coding

    Two very different subband coders are described. The first is a modified dynamic bit-allocation subband coder (D-SBC) designed for variable-rate coding situations and easily adaptable to noisy channel environments. It can operate at rates as low as 12 kb/s and still give good quality speech. The second coder is a 16-kb/s waveform coder based on a combination of subband coding and vector quantization (VQ-SBC). The key feature of this coder is its short coding delay, which makes it suitable for real-time communication networks. The speech quality of both coders has been enhanced by adaptive postfiltering. The coders have been implemented on a single AT&T DSP32 signal processor.
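    Dynamic bit allocation in a subband coder assigns more bits to the subbands carrying more energy. A common textbook rule gives each subband the average rate plus half the log2 ratio of its variance to the geometric mean of all subband variances; the sketch below implements that rule purely as an illustration, since the exact allocation used in the D-SBC above is not specified here, and the band count and bit budget are made-up example values.

        #include <stdio.h>
        #include <math.h>

        #define K 4            /* number of subbands                         */
        #define TOTAL_BITS 16  /* bit budget per block of K subband samples  */

        /* Textbook dynamic bit allocation: b_k = B/K + 0.5 * log2(var_k / geometric mean).
         * Rounding to integers means the total may deviate slightly from the budget;
         * a real coder would redistribute the leftover bits. */
        static void allocate_bits(const double *var, int *bits)
        {
            double log_gm = 0.0;
            for (int k = 0; k < K; ++k)
                log_gm += log2(var[k]);
            log_gm /= K;                                  /* log2 of the geometric mean */

            for (int k = 0; k < K; ++k) {
                double b = (double)TOTAL_BITS / K + 0.5 * (log2(var[k]) - log_gm);
                bits[k] = b > 0.0 ? (int)(b + 0.5) : 0;   /* round, disallow negative bits */
            }
        }

        int main(void)
        {
            double var[K] = { 400.0, 100.0, 25.0, 4.0 };  /* subband energies, high to low */
            int bits[K];
            allocate_bits(var, bits);
            for (int k = 0; k < K; ++k)
                printf("subband %d: %d bits\n", k, bits[k]);
            return 0;
        }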

    Real-time scalable video coding for surveillance applications on embedded architectures


    Scalable and perceptual audio compression

    This thesis deals with scalable perceptual audio compression. Two scalable perceptual solutions as well as a scalable-to-lossless solution are proposed and investigated. One of the scalable perceptual solutions is built around sinusoidal modelling of the audio signal, whilst the other is built on a transform coding paradigm. The scalable coders are shown to scale both in a waveform-matching manner and in a psychoacoustic manner. In order to measure the psychoacoustic scalability of the systems investigated in this thesis, the similarity between the original signal's psychoacoustic parameters and those of the synthesized signal is assessed. The psychoacoustic parameters used are loudness, sharpness, tonality and roughness. This analysis technique is a novel method used in this thesis, and it gives insight into the perceptual distortion introduced by any coder analyzed in this manner.
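    One simple way to quantify how closely a per-frame psychoacoustic parameter track of the decoded signal follows that of the original is a normalized correlation between the two tracks. The sketch below computes such a correlation for a toy loudness track; it is a generic similarity measure shown only to illustrate the comparison idea, not the analysis technique defined in the thesis, and the frame values are invented example data.

        #include <stdio.h>
        #include <math.h>

        #define FRAMES 6   /* number of analysis frames in this toy example */

        /* Pearson correlation between two per-frame parameter tracks, e.g. loudness
         * of the original signal versus the synthesized signal. */
        static double track_correlation(const double *a, const double *b, int n)
        {
            double ma = 0.0, mb = 0.0;
            for (int i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
            ma /= n; mb /= n;

            double num = 0.0, da = 0.0, db = 0.0;
            for (int i = 0; i < n; ++i) {
                num += (a[i] - ma) * (b[i] - mb);
                da  += (a[i] - ma) * (a[i] - ma);
                db  += (b[i] - mb) * (b[i] - mb);
            }
            return num / sqrt(da * db);
        }

        int main(void)
        {
            double loud_orig[FRAMES]  = { 2.1, 2.4, 3.0, 2.8, 2.2, 1.9 };  /* loudness per frame (invented values) */
            double loud_coded[FRAMES] = { 2.0, 2.3, 2.9, 2.9, 2.1, 1.8 };
            printf("loudness similarity: %.3f\n", track_correlation(loud_orig, loud_coded, FRAMES));
            return 0;
        }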