
    Zerotree-based stereoscopic video CODEC

    By providing a more natural representation of a scene in the form of left- and right-eye views, a stereoscopic imaging system offers a more effective method for image/video display. Unfortunately, the vast amount of information that must be transmitted or stored to represent a stereo image pair or video sequence has so far hindered its use in commercial applications. However, by properly exploiting spatial, temporal and binocular redundancy, a stereo image pair or sequence can be compressed and transmitted within a single monocular channel's bandwidth without unduly sacrificing the perceived stereoscopic image quality. We propose a timely and novel framework to transmit stereoscopic data efficiently, presenting a new technique for coding stereo video sequences based on the discrete wavelet transform (DWT). The proposed technique exploits zerotree entropy (ZTE) coding, which makes use of the wavelet-block concept to achieve low-bit-rate stereo video coding. One of the two image streams, the main stream, is coded independently by a zerotree video CODEC, while the second, the auxiliary stream, is predicted by disparity compensation; a zerotree video CODEC then codes the residual stream. We compare the performance of the proposed CODEC with that of a discrete cosine transform (DCT)-based, modified MPEG-2 stereo video CODEC, and show that the proposed CODEC outperforms the benchmark in coding both the main and auxiliary streams.
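    The prediction step the abstract describes can be sketched as block-based disparity compensation: each block of the auxiliary (right) view is matched against horizontally shifted blocks of the main (left) view, and only the residual is passed on to the zerotree coder. The block size, search range, and SAD cost below are illustrative assumptions, not details taken from the paper, and the image dimensions are assumed divisible by the block size.

```python
import numpy as np

def disparity_compensate(left, right, block=8, max_disp=4):
    """Predict the right (auxiliary) view from the left (main) view by
    block-based horizontal disparity search; returns the prediction and
    the residual that a zerotree codec would then encode."""
    h, w = left.shape
    pred = np.zeros_like(right, dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            target = right[y:y + block, x:x + block].astype(np.float64)
            best_sad, best_d = np.inf, 0
            # try each candidate horizontal shift into the main view
            for d in range(-max_disp, max_disp + 1):
                xs = x + d
                if xs < 0 or xs + block > w:
                    continue
                cand = left[y:y + block, xs:xs + block].astype(np.float64)
                sad = np.abs(target - cand).sum()
                if sad < best_sad:
                    best_sad, best_d = sad, d
            pred[y:y + block, x:x + block] = left[y:y + block,
                                                  x + best_d:x + best_d + block]
    residual = right.astype(np.float64) - pred
    return pred, residual
```

    A real CODEC would also entropy-code the chosen disparity vectors; here they are discarded to keep the sketch minimal.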

    VLSI implementation of a massively parallel wavelet based zerotree coder for the intelligent pixel array

    In the span of a few years, mobile multimedia communication has rapidly become a significant area of research and development, constantly challenging boundaries on a variety of technological fronts. Mobile video communication in particular encompasses a number of technical hurdles that generally steer technological advancement towards devices that are low in complexity and power consumption yet perform the given task efficiently. Devices of this nature have been made possible through the use of massively parallel processing arrays such as the Intelligent Pixel Processing Array. The Intelligent Pixel Processing Array is a novel concept that integrates a parallel image-capture mechanism, a parallel processing component and a parallel display component into a single-chip solution geared toward mobile communication environments, be it a PDA-based system or the video-communicator wristwatch portrayed in Dick Tracy episodes. This thesis details work performed to provide an efficient, low-power, low-complexity solution centred on a massively parallel implementation of a zerotree entropy codec for the Intelligent Pixel Array.

    Problem-based learning (PBL) awareness among academic staff in Universiti Tun Hussein Onn Malaysia (UTHM)

    The present study was conducted to determine whether the academic staff of Universiti Tun Hussein Onn Malaysia (UTHM) were aware of Problem-based Learning (PBL) as an instructional approach. Identifying whether staff knew about PBL, and whether they were aware of it as a method for teaching their courses, could give the university feedback on the use of PBL among academic staff and on measures to be taken to improve their teaching. A workshop could also be designed if the academic staff were interested in learning more about PBL and how it could be used in the classroom. The study was conducted via a quantitative method using a questionnaire adapted from the Awareness Questionnaire (AQ), with 100 respondents. The findings indicated that awareness of PBL among UTHM academic staff was moderate. It is hoped that more exposure will be provided, as PBL is seen as a promising approach in the learning process. In conclusion, the academic staff of UTHM have a moderate level of knowledge about PBL as a teaching methodology.

    Embedded Zerotree Codec

    This thesis discusses the findings of a final year project involving the VHDL (VHSIC Hardware Description Language, where VHSIC stands for Very High Speed Integrated Circuit) design and simulation of an EZT (Embedded Zerotree) codec. The basis of image compression and the various image compression techniques available today are explored, providing a clear understanding of image compression as a whole. An in-depth understanding of wavelet transform theory was vital to appreciating the edge this transform provides over other transforms for image compression; both its mathematics and its implementation using sets of high-pass and low-pass filters are studied and presented. At the heart of the EZT codec is the EZW (Embedded Zerotree Wavelet) algorithm, as this is the algorithm implemented in the codec; this required a thorough study of the algorithm and the various terms used in it. A generic single-processor codec capable of handling any size of zerotree coefficient array was designed. Once the coding and decoding strategy of this single processor had been established, it was easily extended to a codec with three parallel processors. The parallel architecture uses the same coding and decoding methods as the single processor, except that each processor now handles only a third of the coefficients, promising a much faster codec than the first. Both designs were translated into VHDL behavioural-level code, simulated, and the results verified. The next aim of the project, synthesizing the design, was then embarked upon; of the two logical parts of the encoder, only the significance-map generator has been synthesized.
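    The heart of EZW, as described above, is the dominant-pass classification of each wavelet coefficient against the current threshold. The following is a minimal sketch of that classification; the tree-of-tuples representation and the one-letter symbol names are illustrative choices, not the thesis's actual VHDL interfaces.

```python
def any_significant(children, T):
    """True if any descendant coefficient has magnitude >= T."""
    for val, sub in children:
        if abs(val) >= T or any_significant(sub, T):
            return True
    return False

def ezw_symbol(value, children, T):
    """Classify one coefficient for the EZW dominant pass at threshold T:
    'P' positive significant, 'N' negative significant,
    'Z' zerotree root (itself and all descendants insignificant),
    'I' isolated zero (insignificant, but some descendant significant)."""
    if abs(value) >= T:
        return 'P' if value >= 0 else 'N'
    if any_significant(children, T):
        return 'I'
    return 'Z'
```

    The zerotree root symbol 'Z' is what gives EZW its compression power: one symbol stands in for an entire insignificant quadtree, so none of its descendants need be coded in this pass.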

    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel, recent developments in autostereoscopic display technology are threatening to revolutionize the way consumers enjoy traditional 2D display-based electronic media such as television, computers and movies. However, due to the two-fold bandwidth/storage requirement of stereoscopic imaging, efficient data compression is an essential requirement of a stereo imaging system. In this thesis, seven wavelet-based stereo image compression algorithms are proposed, to take advantage of the higher data compaction capability and better flexibility of wavelets. In the proposed CODEC I, block-based disparity estimation/compensation (DE/DC) is performed in the pixel domain. However, this is inefficient when the DWT is applied to the whole predictive error image resulting from the DE process, because of the artificial block boundaries between error blocks in the predictive error image. To overcome this problem, DE/DC is performed in the wavelet domain in the remaining proposed CODECs. Due to the multiresolution nature of the wavelet domain, two methods of disparity estimation and compensation are proposed. The first method performs DE/DC in each subband of the lowest/coarsest resolution level and then propagates the disparity vectors obtained to the corresponding subbands of higher/finer resolutions; DE is not performed in every subband because of the high overhead bits that would be required to code the disparity vectors of all subbands. This method is used in CODEC II. The second method performs DE/DC in the wavelet-block domain, which enables disparity estimation in all subbands simultaneously without increasing the overhead bits required for coding the disparity vectors. This method is used by CODEC III, and performing DE/DC in all subbands results in a significant improvement in its performance. To improve performance further, a pioneering wavelet-block search technique is implemented in CODEC IV; it enables the right/predicted image to be reconstructed at the decoder without transmitting the disparity vectors. In the proposed CODEC V, pioneering block search is performed in all subbands of the DWT decomposition, which further improves performance. CODECs IV and V are able to operate at very low bit rates (< 0.15 bpp). In CODECs VI and VII, Overlapped Block Disparity Compensation (OBDC) is used with and without the need to code disparity vectors; our experimental results showed that no significant coding gains could be obtained for these CODECs over CODECs IV and V. All CODECs proposed in this thesis are wavelet-based stereo image coding algorithms that maximise the flexibility and benefits offered by wavelet transform technology when applied to stereo imaging. In addition, the use of a baseline-JPEG coding architecture enables easy adaptation of the proposed algorithms within systems originally built for DCT-based coding, an important feature during an era in which DCT-based technology is only slowly being phased out to give way to DWT-based compression technology. Finally, this thesis proposes a stereo image coding algorithm that uses JPEG-2000 as its basic compression engine. The proposed CODEC, named RASTER, is a rate-scalable stereo image CODEC with a unique ability to preserve image quality at binocular depth boundaries, an important requirement in the design of a stereo image CODEC. Experimental results show that the proposed CODEC achieves PSNR gains of up to 3.7 dB compared with directly transmitting the right frame using JPEG-2000.
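    The wavelet-block idea that CODECs III-V rely on regroups the DWT subbands so that all coefficients describing one spatial area sit together, letting disparity estimation act on every subband at once. The one-level Haar transform and the 2x2 grouping below are a minimal sketch of this assumption; the thesis's actual filters and decomposition depth are not specified here.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar DWT: returns LL, HL, LH, HH quarter-size subbands."""
    a = img.astype(np.float64)
    # transform rows into low- and high-pass halves
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # then transform columns of each half
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, HL, LH, HH

def wavelet_blocks(LL, HL, LH, HH):
    """Regroup the four subbands into 2x2 'wavelet blocks': block (i, j)
    collects the co-located coefficient from every subband, so a disparity
    search can treat all subbands of one spatial area simultaneously."""
    h, w = LL.shape
    blocks = np.empty((h, w, 2, 2))
    blocks[..., 0, 0] = LL
    blocks[..., 0, 1] = HL
    blocks[..., 1, 0] = LH
    blocks[..., 1, 1] = HH
    return blocks
```

    With a deeper decomposition the same regrouping yields larger wavelet blocks (one coefficient per subband per spatial location), which is what allows a single disparity vector to serve every subband.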

    Distributed video coding for wireless video sensor networks: a review of the state-of-the-art architectures

    Distributed video coding (DVC) is a relatively new video coding architecture that originates from two fundamental theorems, namely Slepian–Wolf and Wyner–Ziv. Recent research developments have made DVC attractive for applications in the emerging domain of wireless video sensor networks (WVSNs). This paper reviews the state-of-the-art DVC architectures, with a focus on understanding their opportunities and gaps in addressing the operational requirements and application needs of WVSNs.

    A new, enhanced EZW image codec with subband classification

    In this paper, an enhanced version of the Embedded Zerotree Wavelet (EZW) image coding algorithm is proposed, referred to as EZW-SC. By exploiting a new principle that relies on a subband classification concept, the enhanced algorithm allows the prediction of insignificant subbands at early passes, along with the use of an improved significance map. This reduces the redundancy of zerotree symbols, speeds up the coding process and improves the coding of significant coefficients. The EZW-SC algorithm scans only significant subbands and significantly improves lossy compression performance compared with the conventional EZW. Moreover, new EZW-based schemes are presented to perform colour image coding by taking advantage of the interdependency of the colour components. Experimental results show a clear superiority of the proposed algorithms over the conventional EZW, as well as other related EZW schemes, at various bit rates in both greyscale and colour image compression.
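    The subband classification idea can be sketched as follows. The abstract does not give the exact classification rule, so the max-magnitude test here is an assumption: a subband whose largest coefficient magnitude is below the current threshold cannot produce a significant symbol in this pass and can be skipped wholesale.

```python
import numpy as np

def classify_subbands(subbands, T):
    """EZW-SC-style subband classification (sketch): mark each subband as
    significant (scan it) or insignificant (skip it) at threshold T, so no
    zerotree symbols are spent on subbands that cannot yet contribute."""
    return {name: bool(np.abs(coeffs).max() >= T)
            for name, coeffs in subbands.items()}
```

    A scanner built on this would iterate only over the subbands mapped to True, signalling the skipped ones with a single flag per subband instead of per-coefficient zerotree symbols.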