
    Contemporary Affirmation of SPIHT Improvements in Image Coding

    Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Since its introduction in 1996, SPIHT has attracted a great deal of interest among image compression algorithms. SPIHT is considerably simpler and more efficient than many existing compression methods: it is a fully embedded codec, provides good image quality and high PSNR, is optimized for progressive image transmission, combines efficiently with error protection (whose strength can be decreased from the beginning to the end of the embedded bitstream), and provides information on demand. Nevertheless, it has some drawbacks that must be removed for better use, and since its development its original design has therefore undergone many modifications. This document presents a survey of several improvements to SPIHT in areas such as speed, redundancy, quality, error resilience, complexity, compression ratio, and memory requirements.
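    As background for the surveyed improvements, the following is a minimal sketch of the bitplane significance test at the core of SPIHT's embedded coding; it is illustrative only, omitting the LIP/LIS/LSP lists and the spatial-orientation trees of the full algorithm.

```python
import numpy as np

def significant(coeffs, n):
    """SPIHT-style significance test: is any |coefficient| >= 2**n?"""
    return np.max(np.abs(coeffs)) >= (1 << n)

# Toy bitplane loop over a block of wavelet coefficients: coding proceeds
# from the most significant bitplane down, which is what makes the
# bitstream embedded (it can be truncated at any point).
coeffs = np.array([34, -9, 3, -1])
n = int(np.floor(np.log2(np.max(np.abs(coeffs)))))
while n >= 0:
    print(f"bitplane {n}: significant =",
          [int(c) for c in coeffs if abs(c) >= (1 << n)])
    n -= 1
```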

    Robust Transmission of Images Based on JPEG2000 Using Edge Information

    In multimedia communication and data storage, data compression is essential to speed up the transmission rate, minimize the use of channel bandwidth, and minimize storage space. JPEG2000 is the new standard for image compression for transmission and storage. The drawback of compression is that compressed data are more vulnerable to channel noise during transmission. Previous techniques for error concealment are classified into three groups depending on the approach employed by the encoder and decoder: forward error concealment, error concealment by post-processing, and interactive error concealment. The objective of this thesis is to develop a concealment methodology that has the capability of both error detection and concealment, is compatible with the JPEG2000 standard, and guarantees minimum use of channel bandwidth. A new methodology is developed to detect corrupted regions/coefficients in the received images using edge information. The methodology requires transmission of the edge information of the wavelet coefficients of the original image along with the JPEG2000 compressed image. At the receiver, the edge information of the received wavelet coefficients is computed and compared with the received edge information of the original image to determine the corrupted coefficients. Three methods of concealment, each including a filter, are investigated to handle the corrupted regions/coefficients. MATLAB™ functions are developed that simulate channel noise, image transmission using the JPEG2000 standard, and the proposed methodology. Objective quality measures such as peak signal-to-noise ratio (PSNR) and root-mean-square error (RMS), as well as subjective quality measures, are used to evaluate the processed images. Simulation results are presented to demonstrate the performance of the proposed methodology. The results are also compared with recent approaches found in the literature. Based on the performance of the proposed approach, it is claimed that it can be successfully used in wireless and Internet communications.
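    The detection step can be pictured with the short sketch below, assuming a simple gradient-magnitude edge map and a fixed threshold; the edge extraction and comparison rules here are illustrative assumptions, not necessarily those of the thesis. PSNR, the objective measure used for evaluation, is included.

```python
import numpy as np

def edge_map(coeffs, thresh=8.0):
    """Hypothetical edge map: gradient magnitude of a subband, thresholded."""
    gy, gx = np.gradient(coeffs.astype(float))
    return np.hypot(gx, gy) > thresh

def detect_corrupted(received_coeffs, original_edges, thresh=8.0):
    """Flag positions where the local edge structure of the received
    coefficients disagrees with the transmitted edge information."""
    return edge_map(received_coeffs, thresh) != original_edges

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)
```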

    Embedding Authentication and Distortion Concealment in Images – A Noisy Channel Perspective

    In multimedia communication, compression of data is essential to improve the transmission rate and minimize storage space. At the same time, authentication of the transmitted data is equally important to justify all these activities. The drawback of compression is that the compressed data are vulnerable to channel noise. In this paper, error concealment methodologies with the ability to detect and conceal errors are investigated for integration with image authentication in JPEG2000. The image authentication includes digital signature extraction and its diffusion as a watermark. To tackle noise, the error concealment technologies are modified to include edge information of the original image. This edge image is transmitted along with the JPEG2000 compressed image to determine corrupted coefficients and regions. Simulations are conducted on test images for different values of bit error rate to judge confidence in noise reduction within the received images.
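    The authentication side can be sketched as follows, assuming a hash-based signature diffused into the least-significant bits of transform coefficients; the hash, the embedding location, and the bit-spreading rule are illustrative assumptions rather than the paper's exact scheme.

```python
import hashlib
import numpy as np

def _signature(c, key):
    """Signature over the authenticated part (everything above the LSB,
    which stays unchanged by the embedding below)."""
    return hashlib.sha256(key + (c >> 1).tobytes()).digest()

def embed_signature(coeffs, key=b"shared-key"):
    """Diffuse the 256-bit signature into the coefficients' LSBs (toy scheme)."""
    c = coeffs.astype(np.int64).ravel()
    bits = np.unpackbits(np.frombuffer(_signature(c, key), dtype=np.uint8))
    for i, b in enumerate(bits):
        j = i % c.size              # spread the signature bits over the stream
        c[j] = (c[j] & ~1) | int(b)  # write each bit into an LSB
    return c.reshape(coeffs.shape)

def verify(coeffs, key=b"shared-key"):
    """Recompute the signature and compare it with the extracted LSBs."""
    c = coeffs.astype(np.int64).ravel()
    bits = np.unpackbits(np.frombuffer(_signature(c, key), dtype=np.uint8))
    return all((c[i % c.size] & 1) == b for i, b in enumerate(bits))
```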

    Research and developments of distributed video coding

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The recently developed Distributed Video Coding (DVC) is typically suited to applications such as wireless/wired video sensor networks and mobile cameras, where the traditional video coding standards are not feasible due to the constrained computation at the encoder. With DVC, the computational burden is moved from the encoder to the decoder, and compression efficiency is achieved via joint decoding at the decoder. The practical application of DVC is referred to as Wyner-Ziv (WZ) video coding, where side information is available at the decoder to perform joint decoding. This joint decoding inevitably results in a very complex decoder. Much current work on WZ video coding emphasises improving the coding performance of the system but neglects the huge complexity incurred at the decoder, even though the complexity of the decoder directly influences the system output. The first period of this research targets optimising the decoder in pixel-domain WZ video coding (PDWZ) while still achieving similar compression performance. More specifically, four issues are raised, to optimise the input block size, the side information generation, the side information refinement process and the feedback channel respectively. Transform-domain WZ video coding (TDWZ) has distinctly superior performance to plain PDWZ due to its exploitation of the spatial direction during encoding. However, since there is no motion estimation at the encoder in WZ video coding, the temporal correlation is not exploited at the encoder in any current WZ video coding scheme. In the middle period of this research, the 3D DCT is adopted in TDWZ to remove redundancy in both the spatial and temporal directions and thus provide even higher coding performance. In the next step of this research, the performance of transform-domain Distributed Multiview Video Coding (DMVC) is also investigated. In particular, three types of transform-domain DMVC framework are investigated: transform-domain DMVC using TDWZ based on the 2D DCT, transform-domain DMVC using TDWZ based on the 3D DCT, and transform-domain residual DMVC using TDWZ based on the 3D DCT. One of the important applications of the WZ coding principle is error resilience. There have been several attempts to apply WZ error-resilient coding to current video coding standards, e.g. H.264/AVC or MPEG-2. The final stage of this research is the design of a WZ error-resilient scheme for a wavelet-based video codec. To balance the trade-off between error-resilience ability and bandwidth consumption, the proposed scheme emphasises the protection of the Region of Interest (ROI) area. Efficient bandwidth utilisation is achieved by the mutual efforts of WZ coding and sacrificing the quality of unimportant areas. In summary, this research contributes several advances in WZ video coding: first, an efficient PDWZ with an optimised decoder; second, an advanced TDWZ based on the 3D DCT, which is then applied to multiview video coding to realise advanced transform-domain DMVC; and finally, an efficient error-resilient scheme for a wavelet video codec, with which the trade-off between bandwidth consumption and error resilience can be better balanced.
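    The side information generation that the first part of the research optimises can be illustrated with a toy version of motion-compensated interpolation between two decoded key frames; the block size, search range, and averaging rule below are illustrative assumptions.

```python
import numpy as np

def side_information(prev_key, next_key, block=8, search=4):
    """Estimate a WZ frame at the decoder by motion-compensated
    interpolation between its two neighbouring key frames (toy version)."""
    h, w = prev_key.shape
    si = np.zeros((h, w), dtype=float)
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ref = next_key[y:y+block, x:x+block].astype(float)
            best, best_mv = None, (0, 0)
            # Full-search block matching between the two key frames.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = prev_key[yy:yy+block, xx:xx+block].astype(float)
                        sad = np.abs(cand - ref).sum()
                        if best is None or sad < best:
                            best, best_mv = sad, (dy, dx)
            dy, dx = best_mv
            # Halve the motion vector (the WZ frame sits midway in time)
            # and average the two motion-compensated predictions.
            hy = min(max(y + dy // 2, 0), h - block)
            hx = min(max(x + dx // 2, 0), w - block)
            si[y:y+block, x:x+block] = 0.5 * (
                prev_key[hy:hy+block, hx:hx+block].astype(float) + ref)
    return si
```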

    Reliable Linear, Sesquilinear and Bijective Operations On Integer Data Streams Via Numerical Entanglement

    A new technique is proposed for fault-tolerant linear, sesquilinear and bijective (LSB) operations on M integer data streams (M ≥ 3), such as: scaling, additions/subtractions, inner or outer vector products, permutations and convolutions. In the proposed method, the M input integer data streams are linearly superimposed to form M numerically-entangled integer data streams that are stored in place of the original inputs. A series of LSB operations can then be performed directly using these entangled data streams. The results are extracted from the M entangled output streams by additions and arithmetic shifts. Any soft errors affecting any single disentangled output stream are guaranteed to be detectable via a specific post-computation reliability check. In addition, when utilizing a separate processor core for each of the M streams, the proposed approach can recover all outputs after any single fail-stop failure. Importantly, unlike algorithm-based fault tolerance (ABFT) methods, the number of operations required for the entanglement, extraction and validation of the results is linearly related to the number of the inputs and does not depend on the complexity of the performed LSB operations. We have validated our proposal on an Intel processor (Haswell architecture with AVX2 support) via fast Fourier transforms, circular convolutions, and matrix multiplication operations. Our analysis and experiments reveal that the proposed approach incurs between 0.03% and 7% reduction in processing throughput for a wide variety of LSB operations. This overhead is 5 to 1000 times smaller than that of the equivalent ABFT method that uses a checksum stream. Thus, our proposal can be used in fault-generating processor hardware or safety-critical applications, where high reliability is required without the cost of ABFT or modular redundancy. Comment: to appear in IEEE Trans. on Signal Processing, 201
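    The flavour of the approach can be conveyed with a toy M = 3 example; the pairwise superposition below is a deliberately simplified stand-in for the paper's weighted entanglement, chosen only so that disentanglement (additions plus one arithmetic shift) and a single-error check stay obvious.

```python
import numpy as np

# Toy numerical entanglement of M = 3 integer streams under a linear op.
a = np.array([4, -2, 7], dtype=np.int64)
b = np.array([1,  5, 0], dtype=np.int64)
c = np.array([3,  3, 9], dtype=np.int64)

e1, e2, e3 = a + b, b + c, c + a            # entangle, stored in place of a,b,c

scale = 5                                   # any linear operation commutes
f1, f2, f3 = scale * e1, scale * e2, scale * e3  # with the superposition

# Extraction: additions/subtractions and one arithmetic shift (>> 1).
out_a = (f1 - f2 + f3) >> 1                 # = scale * a
out_b = (f1 + f2 - f3) >> 1                 # = scale * b
out_c = (-f1 + f2 + f3) >> 1                # = scale * c

# Post-computation reliability check: any single corrupted output stream
# breaks at least one of these identities, so the error is detectable.
assert np.array_equal(out_a + out_b, f1)
assert np.array_equal(out_b + out_c, f2)
print(out_a, out_b, out_c)  # [20 -10 35] [5 25 0] [15 15 45]
```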

    Video Transmission over MIMO-OFDM System: MDC and Space-Time Coding-Based Approaches

    MIMO-OFDM is a promising technique for broadband wireless communication systems. In this paper, we propose a novel scheme that integrates multiple-description coding (MDC), error-resilient video coding, and an unequal error protection strategy with a hybrid space-time coding structure for robust video transmission over a MIMO-OFDM system. The proposed MDC coder generates multiple bitstreams of equal importance, which are very well suited to multiple-antenna systems. Furthermore, according to the contribution of each video bitstream to the reconstructed video quality, we apply an unequal error protection strategy using BLAST and STBC space-time codes. Experimental results demonstrate that the proposed scheme is an excellent alternative for achieving the desired tradeoff between reconstructed video quality and transmission efficiency.
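    One common way to obtain equal-importance descriptions, and a plausible mental model for an MDC coder, is temporal odd/even splitting with neighbour-copy concealment when a description is lost; the sketch below is a generic illustration, not the paper's exact coder.

```python
from typing import List, Optional, Tuple

def mdc_split(frames: List[bytes]) -> Tuple[List[bytes], List[bytes]]:
    """Split a frame sequence into two descriptions of equal importance."""
    return frames[0::2], frames[1::2]

def mdc_merge(even: Optional[List[bytes]], odd: Optional[List[bytes]],
              n: int) -> List[bytes]:
    """Reconstruct n frames; if one description is lost (None), conceal each
    missing frame by repeating its nearest surviving temporal neighbour."""
    out: List[bytes] = []
    for i in range(n):
        desc = even if i % 2 == 0 else odd
        if desc is not None:
            out.append(desc[i // 2])
        else:
            out.append(out[-1] if out else b"")  # neighbour-copy concealment
    return out

frames = [bytes([i]) for i in range(6)]
even, odd = mdc_split(frames)
print(mdc_merge(even, None, 6))  # odd frames concealed from even neighbours
```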

    A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images

    Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements and the ability to accurately control quality on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders due to the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into few coefficients. In this paper, we show that it is possible to design a rate control scheme suited to onboard implementation. In particular, we propose a general framework to select quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm achieves lossy compression, near-lossless compression, and any type of compression in between, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, in order to show its performance we tailor it in this paper to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless and lossy compression in a single package. We show that the rate controller has excellent performance in terms of accuracy of the output rate and rate-distortion characteristics, and is extremely competitive with state-of-the-art transform coding.
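    A standard way to realize per-region quantizer selection under a rate target, and a reasonable sketch of what such a controller computes, is Lagrangian optimization with a bisection on the multiplier; the formulation below is illustrative, not the specific algorithm proposed in the paper.

```python
from collections import namedtuple

Q = namedtuple("Q", "rate dist")  # one quantizer's (rate, distortion) in a region

def pick_quantizers(regions, lam):
    """Independently choose, per region, the quantizer minimizing D + lam*R."""
    choice = [min(opts, key=lambda q: q.dist + lam * q.rate) for opts in regions]
    return choice, sum(q.rate for q in choice)

def rate_control(regions, target_rate, iters=50):
    """Bisect the Lagrange multiplier until the total rate meets the target."""
    lo, hi = 0.0, 1e12
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        _, rate = pick_quantizers(regions, lam)
        if rate > target_rate:
            lo = lam   # spending too many bits: weight rate more heavily
        else:
            hi = lam
    return pick_quantizers(regions, hi)[0]

# Three regions, each offering a fine/medium/coarse quantizer.
regions = [[Q(8, 1.0), Q(4, 4.0), Q(2, 16.0)]] * 3
print(rate_control(regions, target_rate=14))  # medium everywhere: 12 <= 14 bits
```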

    Spread spectrum-based video watermarking algorithms for copyright protection

    Digital technologies have seen unprecedented expansion in recent years. The consumer can now benefit from hardware and software that was considered state-of-the-art several years ago. The advantages offered by digital technologies are major, but the same digital technology opens the door to unlimited piracy. Copying an analogue VCR tape was certainly possible and relatively easy, in spite of various forms of protection, but due to the analogue environment the subsequent copies had an inherent loss in quality. This was a natural way of limiting the repeated copying of video material. With digital technology this barrier disappears, making it possible to make as many copies as desired without any loss in quality whatsoever. Digital watermarking is one of the best available tools for fighting this threat. The aim of the present work was to develop a digital watermarking system compliant with the recommendations drawn up by the EBU for video broadcast monitoring. Since the watermark can be inserted in either the spatial domain or a transform domain, this aspect was investigated and led to the conclusion that the wavelet transform is one of the best solutions available. Since watermarking is not an easy task, especially considering robustness under various attacks, several techniques were employed to increase the capacity and robustness of the system: spread-spectrum and modulation techniques to cast the watermark, powerful error correction to protect the mark, and human visual models to insert a robust mark and ensure its invisibility. The combination of these methods led to a major improvement, but the system was still not robust to several important geometrical attacks. To reach this last milestone, the system uses two distinct watermarks: a spatial-domain reference watermark and the main watermark embedded in the wavelet domain. By using this reference watermark and techniques specific to image registration, the system is able to determine the parameters of the attack and invert it. Once the attack is inverted, the main watermark is recovered. The final result is a high-capacity, blind DWT-based video watermarking system, robust to a wide range of attacks. BBC Research & Development
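    The spread-spectrum casting and correlation detection mentioned above follow a well-known textbook pattern, sketched below; the embedding strength alpha, the key, and the coefficient layout are illustrative parameters, not those of the thesis.

```python
import numpy as np

def embed(coeffs, bit, key=42, alpha=2.0):
    """Cast one watermark bit over all coefficients via a keyed PN sequence."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)   # spreading sequence
    return coeffs + alpha * (1 if bit else -1) * pn

def detect(coeffs, key=42):
    """Blind detection: the sign of the correlation with the PN sequence."""
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=coeffs.shape)
    return int(np.sum(coeffs * pn) > 0)

# Usage on toy transform-domain coefficients: the bit survives mild noise
# because the correlation grows with the number of coefficients.
c = np.random.default_rng(0).normal(0, 10, size=(64, 64))
marked = embed(c, bit=1)
noisy = marked + np.random.default_rng(1).normal(0, 1, size=marked.shape)
print(detect(noisy))  # expected: 1
```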