26 research outputs found

    Non-expansive symmetrically extended wavelet transform for arbitrarily shaped video object plane.

    by Lai Chun Kit. Thesis (M.Phil.), Chinese University of Hong Kong, 1998. Includes bibliographical references (leaves 68-70). Abstract also in Chinese. Contents: Acknowledgments; Abstract.
    Chapter 1 - Traditional Image and Video Coding: Introduction; Fundamental Principle of Compression; Entropy - Value of Information; Performance Measure; Image Coding Overview (Digital Image Formation, Need for Image Compression, Classification of Image Compression, Transform Coding); Video Coding Overview.
    Chapter 2 - Discrete Wavelet Transform (DWT) and Subband Coding: Subband Coding (Introduction, Quadrature Mirror Filters (QMFs), Subband Coding for Images); Discrete Wavelet Transform (Introduction, Wavelet Theory, Comparison Between Fourier Transform and Wavelet Transform).
    Chapter 3 - Non-expansive Symmetric Extension: Introduction; Types of Extension Scheme; Non-expansive Symmetric Extension and Symmetric Sub-sampling.
    Chapter 4 - Content-based Video Coding in the Proposed MPEG-4 Standard: Introduction; Motivation for the New MPEG-4 Standard (Changes in the Production of Audio-visual Material, Changes in the Consumption of Multimedia Information, Reuse of Audio-visual Material, Changes in Mode of Implementation); Objectives of the MPEG-4 Standard; Technical Description of MPEG-4 (Overview of the MPEG-4 Coding System, Shape Coding, Shape Adaptive Texture Coding, Motion Estimation and Compensation (ME/MC)).
    Chapter 5 - Shape Adaptive Wavelet Transform Coding Scheme (SAWT): Shape Adaptive Wavelet Transform (Introduction, Description of the Transformation Scheme); Quantization; Entropy Coding (Introduction, Stack-Run Algorithm, ZeroTree Entropy (ZTE) Coding Algorithm); Binary Shape Coding.
    Chapter 6 - Simulation: Introduction; SSAWT-Stack Run; SSAWT-ZTR; Simulation Results (SSAWT-Stack, SSAWT-ZTE, Comparison with Cjpeg and Wave03); Shape Coding Results; Analysis.
    Chapter 7 - Conclusion.
    Appendix A: Image Segmentation. References.
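
    The idea in the thesis title, non-expansive symmetric extension, can be illustrated with a lifting implementation of a symmetric wavelet filter pair: mirroring the signal at its boundaries lets the transform produce exactly as many subband coefficients as there are input samples, with no expansion. The sketch below uses the common LeGall 5/3 lifting steps on an even-length 1-D signal; it is an assumed, minimal illustration of the boundary handling, not the thesis's exact transform for arbitrarily shaped object planes.

        def dwt53_forward(x):
            """One level of the 5/3 wavelet via lifting, with whole-sample
            symmetric extension at the boundaries (non-expansive: len(s) +
            len(d) == len(x)). Assumes len(x) is even for simplicity."""
            x = [float(v) for v in x]
            s, d = x[0::2], x[1::2]                  # even / odd samples
            for i in range(len(d)):                  # predict: d[i] -= (x[2i] + x[2i+2]) / 2
                right = s[i + 1] if i + 1 < len(s) else s[i]   # mirror at right edge
                d[i] -= 0.5 * (s[i] + right)
            for i in range(len(s)):                  # update: s[i] += (d[i-1] + d[i]) / 4
                left = d[i - 1] if i > 0 else d[i]             # mirror at left edge
                s[i] += 0.25 * (left + d[i])
            return s, d                              # lowpass and highpass subbands

        def dwt53_inverse(s, d):
            """Exact inverse of dwt53_forward."""
            s, d = list(s), list(d)
            for i in range(len(s)):                  # undo update
                left = d[i - 1] if i > 0 else d[i]
                s[i] -= 0.25 * (left + d[i])
            for i in range(len(d)):                  # undo predict
                right = s[i + 1] if i + 1 < len(s) else s[i]
                d[i] += 0.5 * (s[i] + right)
            x = [0.0] * (len(s) + len(d))
            x[0::2], x[1::2] = s, d
            return x

        signal = [3, 7, 1, 4, 9, 2, 8, 5]
        lo, hi = dwt53_forward(signal)
        assert all(abs(a - b) < 1e-9 for a, b in zip(dwt53_inverse(lo, hi), signal))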

    High-performance compression of visual information - A tutorial review - Part I : Still Pictures

    Digital images have become an important source of information in the modern world of communication systems. In their raw form, digital images require a tremendous amount of memory. Many research efforts have been devoted to the problem of image compression in the last two decades. Two different compression categories must be distinguished: lossless and lossy. Lossless compression is achieved if no distortion is introduced in the coded image. Applications requiring this type of compression include medical imaging and satellite photography. For applications such as video telephony or multimedia applications, some loss of information is usually tolerated in exchange for a high compression ratio. In this two-part paper, the major building blocks of image coding schemes are overviewed. Part I covers still image coding, and Part II covers motion picture sequences. In this first part, still image coding schemes have been classified into predictive, block transform, and multiresolution approaches. Predictive methods are suited to lossless and low-compression applications. Transform-based coding schemes achieve higher compression ratios for lossy compression but suffer from blocking artifacts at high compression ratios. Multiresolution approaches are suited for lossy as well as for lossless compression. At high lossy compression ratios, the typical artifact visible in the reconstructed images is the ringing effect. New applications in a multimedia environment have driven the need for new functionalities in image coding schemes. For that purpose, second-generation coding techniques segment the image into semantically meaningful parts, and parts of these methods have been adapted to work for arbitrarily shaped regions. In order to add another functionality, such as progressive transmission of the information, specific quantization algorithms must be defined. A final step in the compression scheme is achieved by the codeword assignment. Finally, coding results are presented which compare state-of-the-art techniques for lossy and lossless compression. The different artifacts of each technique are highlighted and discussed. Also, the possibility of progressive transmission is illustrated.
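
    As a concrete example of the block transform approach reviewed in Part I, and of where blocking artifacts originate, the following sketch takes an 8x8 DCT of each block of an image, runs a uniform quantiser on the coefficients, and inverts the transform. This is a generic illustration assuming a single quantisation step size, not the coding chain of any particular standard.

        import numpy as np

        def dct_matrix(n=8):
            """Orthonormal DCT-II matrix: row k is the k-th cosine basis vector."""
            k = np.arange(n).reshape(-1, 1)
            m = np.arange(n).reshape(1, -1)
            c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
            c[0, :] /= np.sqrt(2.0)
            return c

        def block_transform_code(img, step=16.0, n=8):
            """Per-block DCT, uniform quantisation, inverse DCT.
            Image dimensions are assumed to be multiples of n."""
            C = dct_matrix(n)
            out = np.empty_like(img, dtype=float)
            for y in range(0, img.shape[0], n):
                for x in range(0, img.shape[1], n):
                    block = img[y:y+n, x:x+n].astype(float)
                    coeff = C @ block @ C.T                  # 2-D DCT
                    coeff = np.round(coeff / step) * step    # uniform quantiser
                    out[y:y+n, x:x+n] = C.T @ coeff @ C      # inverse 2-D DCT
            return out

        # Independent quantisation of neighbouring blocks is what produces the
        # blocking artifacts mentioned in the review at coarse step sizes.
        img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
        rec = block_transform_code(img, step=32.0)
        print("MSE:", float(np.mean((img - rec) ** 2)))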

    Object-Based Unequal Error Protection

    This thesis presents a comparison of two methods for Object-Based Unequal Error Protection. The two methods, Combined Unequal Error Protection and Individual Unequal Error Protection, add forward error correcting codes to the embedded-coded objects of an image, so that each byte within an object is protected according to its importance, each object receives a level of error protection proportional to its importance to the reconstructed quality, and the receiver has random access to individual objects. It is found that random access to the objects is obtained at a cost in terms of quality.
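
    The idea of unequal error protection can be sketched as a simple parity-budget allocation: objects judged more important receive more redundancy, and within an embedded bitstream earlier bytes would be protected more strongly. The function below splits a total parity budget across objects in proportion to importance weights; the weights, the budget and the proportional rule are illustrative assumptions, not the allocation used in the thesis.

        def allocate_parity(objects, total_parity):
            """objects: list of (name, importance_weight, data_bytes).
            Returns {name: parity_bytes}, splitting total_parity in proportion
            to importance, with largest-remainder rounding."""
            total_w = sum(w for _, w, _ in objects)
            raw = [(name, total_parity * w / total_w) for name, w, _ in objects]
            alloc = {name: int(r) for name, r in raw}
            leftover = total_parity - sum(alloc.values())
            # hand any remaining parity bytes to the largest fractional parts
            for name, r in sorted(raw, key=lambda t: t[1] - int(t[1]), reverse=True)[:leftover]:
                alloc[name] += 1
            return alloc

        # Example: a foreground object judged three times as important as the
        # background receives roughly three times the protection.
        objects = [("foreground", 3.0, 12000), ("background", 1.0, 20000)]
        print(allocate_parity(objects, total_parity=400))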

    Digital Image Access & Retrieval

    The 33rd Annual Clinic on Library Applications of Data Processing, held at the University of Illinois at Urbana-Champaign in March 1996, addressed the theme of "Digital Image Access & Retrieval." The papers from this conference cover a wide range of topics concerning digital imaging technology for visual resource collections. Papers covered three general areas: (1) systems, planning, and implementation; (2) automatic and semi-automatic indexing; and (3) preservation, with the bulk of the conference focusing on indexing and retrieval.

    Combined Industry, Space and Earth Science Data Compression Workshop

    The sixth annual Space and Earth Science Data Compression Workshop and the third annual Data Compression Industry Workshop were held as a single combined workshop. The workshop was held April 4, 1996 in Snowbird, Utah, in conjunction with the 1996 IEEE Data Compression Conference, which was held at the same location March 31 - April 3, 1996. The Space and Earth Science Data Compression sessions sought to explore opportunities for data compression to enhance the collection, analysis, and retrieval of space and earth science data. Of particular interest was data compression research that is integrated into, or has the potential to be integrated into, a particular space or earth science data information system. Preference was given to data compression research that takes into account the scientist's data requirements and the constraints imposed by the data collection, transmission, distribution, and archival systems.

    SPIHT image coding: analysis, improvements and applications.

    Image compression plays an important role in image storage and transmission. In popular Internet applications and mobile communications, image coding is required to be not only efficient but also scalable. Recent wavelet techniques provide a way to achieve efficient and scalable image coding. SPIHT (set partitioning in hierarchical trees) is such an algorithm based on the wavelet transform. This thesis analyses and improves the SPIHT algorithm. The preliminary part of the thesis investigates two-dimensional multi-resolution decomposition for image coding using the wavelet transform, which is reviewed and analysed systematically. The wavelet transform is implemented using filter banks, and z-domain proofs are given for the key implementation steps. A scheme of wavelet transform for arbitrarily sized images is proposed. The statistical properties of the wavelet coefficients (the output of the wavelet transform) are explored for natural images. The energy in the transform domain is localised and highly concentrated in the low-resolution subband. The wavelet coefficients are DC-biased, and the gravity centre of most octave-segmented value sections (which correspond to the binary bit-planes) is offset from the geometrical centre by approximately one eighth of the section range. The intra-subband correlation coefficients are the largest, followed by the inter-level correlation coefficients, while the inter-subband correlation coefficients on the same resolution level are negligible. These statistical properties explain the success of the SPIHT algorithm and lead to further improvements. The subsequent parts of the thesis examine the SPIHT algorithm. The concepts of successive approximation quantisation and ordered bit-plane coding are highlighted, the procedure of SPIHT image coding is demonstrated with a simple example, and a solution for arbitrarily sized images is proposed. Seven measures are proposed to improve the SPIHT algorithm. Three DC-level shifting schemes are discussed, and the one subtracting the geometrical centre in the image domain is selected in the thesis. Virtual trees are introduced to hold more wavelet coefficients in each of the initial sets. A scheme is proposed to reduce the redundancy in the coding bit-stream by omitting predictable symbols. The quantisation of wavelet coefficients is offset by one eighth from the geometrical centre. A pre-processing technique is proposed to speed up the significance test for trees, and a smoothing is imposed on the magnitude of the wavelet coefficients during pre-processing for lossy image coding. The optimisation of arithmetic coding is also discussed. Experimental results show that these improvements to SPIHT yield a significant performance gain: the running time is reduced by up to a half, and the PSNR (peak signal-to-noise ratio) is improved substantially at very low bit rates, by up to 12 dB in the extreme case, with moderate improvements at high bit rates. The SPIHT algorithm is also applied to lossless image coding, and various wavelet transforms are evaluated for lossless SPIHT coding. Experimental results show that the interpolating transform (4, 4) and the S+P transform (2+2, 2) are the best for natural images among the transforms tested, the interpolating transform (4, 2) is the best for CT images, and the bi-orthogonal transform (9, 7) is always the worst. Content-based lossless coding of a CT head image is presented in the thesis, using segmentation and SPIHT. Although the performance gain is limited in the experiments, it shows the potential advantage of content-based image coding.
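
    The successive approximation quantisation that SPIHT relies on can be sketched for a single coefficient: each bit-plane pass either tests significance against the current threshold or refines an already-significant magnitude, and the decoder reconstructs within the remaining uncertainty interval. The 0.375 reconstruction offset below reflects the one-eighth bias away from the interval midpoint discussed in the abstract; this is a scalar illustration only, assuming nothing about SPIHT's tree partitioning.

        def sa_reconstruct(coeff, T0, n_planes, offset=0.375):
            """Value a decoder would reconstruct for one wavelet coefficient
            after n_planes passes of successive approximation quantisation.
            T0 is the initial threshold (largest power of two <= the maximum
            magnitude, so every magnitude lies in [0, 2*T0))."""
            mag = abs(coeff)
            sign = 1.0 if coeff >= 0 else -1.0
            low, width, T = 0.0, 0.0, float(T0)
            for _ in range(n_planes):
                if low == 0.0:                    # significance pass
                    if mag >= T:
                        low = T                   # now known to lie in [T, 2T)
                else:                             # refinement pass
                    if mag >= low + T:
                        low += T
                width = T                         # remaining uncertainty
                T /= 2.0
            if low == 0.0:
                return 0.0                        # still insignificant: decode as zero
            return sign * (low + offset * width)

        # Each extra bit-plane halves the uncertainty interval of the estimate.
        for planes in range(1, 6):
            print(planes, sa_reconstruct(-13.2, T0=16, n_planes=planes))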

    Transform domain texture synthesis on surfaces

    In the recent past, application areas such as virtual reality experiences, digital cinema and computer gaming have resulted in a renewed interest in advanced research topics in computer graphics. Although many research challenges in computer graphics have been met through worldwide efforts, many more are yet to be met. Two key challenges that remain open research problems are the lack of perfect realism in animated or virtually created objects when represented in graphical format, and the need to transmit, store and exchange a massive amount of information between remote locations when 3D computer-generated objects are used in remote visualisations. These challenges call for further research focused in the above directions. Though a significant number of ideas have been proposed by the international research community in an effort to meet these challenges, the ideas still suffer from issues related to excessive complexity, resulting in high processing times and practical inapplicability when bandwidth-constrained transmission media are used or when the storage space or computational power of the display device is limited. In the proposed work we investigate the appropriate use of geometric representations of 3D structure (e.g. Bezier surfaces, NURBS, polygons) together with multi-resolution, progressive representation of texture on such surfaces. This joint approach to texture synthesis has not been considered before and has significant potential in resolving current challenges in the virtual realism, digital cinema and computer gaming industries. The main focus of the novel approaches proposed in this thesis is photo-realistic texture synthesis on surfaces. We provide experimental results and detailed analysis to show that the proposed algorithms allow fast, progressive building of texture on arbitrarily shaped 3D surfaces. In particular we investigate the above ideas in association with Bezier patch representations of 3D objects, an approach which has not been considered so far by any published research effort, yet has flexibility of considerable practical importance. Further, we discuss the novel application domains that can be served by the inclusion of additional functionality within the proposed algorithms.
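
    Since the thesis couples texture with a Bezier patch representation of the underlying geometry, the small sketch below evaluates a point on a bicubic Bezier patch from its 4x4 control net; texture values synthesised in the (u, v) parameter domain would be mapped onto points obtained this way. This is only the standard surface evaluation, not the thesis's synthesis algorithm, and the control points are invented for illustration.

        import numpy as np
        from math import comb

        def bernstein(n, i, t):
            """Bernstein basis polynomial B_{i,n}(t)."""
            return comb(n, i) * t**i * (1.0 - t)**(n - i)

        def bezier_patch_point(ctrl, u, v):
            """Point on a bicubic Bezier patch; ctrl has shape (4, 4, 3)."""
            p = np.zeros(3)
            for i in range(4):
                for j in range(4):
                    p += bernstein(3, i, u) * bernstein(3, j, v) * ctrl[i, j]
            return p

        # A gently curved patch over the unit square (illustrative control net).
        ctrl = np.array([[[i, j, 0.3 * ((i - 1.5) ** 2 + (j - 1.5) ** 2)]
                          for j in range(4)] for i in range(4)], dtype=float)
        print(bezier_patch_point(ctrl, 0.5, 0.5))   # centre of the patch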

    Robust and efficient video/image transmission

    The Internet has become a primary medium for information transmission. The unreliability of channel conditions, limited channel bandwidth and the explosive growth of transmission requests, however, hinder its further development. Hence, research on robust and efficient delivery of video/image content is in high demand. Three aspects of this task are investigated in this dissertation: error burst correction, efficient rate allocation and random error protection. A novel technique, called successive packing, is proposed for combating multi-dimensional (M-D) bursts of errors. A new concept of a basis interleaving array is introduced; by combining different basis arrays, effective M-D interleaving can be realized. It is shown that this algorithm needs to be implemented only once and is nevertheless optimal for a set of error bursts of different sizes within a given two-dimensional (2-D) array. To adapt to variable channel conditions, a novel rate allocation technique is proposed for Fine Granular Scalability (FGS) coded video, in which rate-distortion models are built from real data, a constant-quality constraint is adopted, and a sliding-window approach is used to track the varying channel. With the proposed technique, constant quality is realized among frames by solving a set of linear functions, which yields a significant computational simplification compared with state-of-the-art techniques while also reducing the overall distortion. To combat random errors during transmission, an unequal error protection (UEP) method and a robust error-concealment strategy are proposed for scalable coded video bitstreams.
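
    Interleaving combats bursts by reordering symbols so that a contiguous channel burst maps back to isolated, widely spaced errors that an FEC code can correct. The sketch below is a plain row/column block interleaver, shown only to make that mechanism concrete; it is not the successive packing construction proposed in the dissertation, and the array dimensions are arbitrary.

        def block_interleave(data, rows, cols):
            """Write data row-wise into a rows x cols array, read it column-wise."""
            assert len(data) == rows * cols
            return [data[r * cols + c] for c in range(cols) for r in range(rows)]

        def block_deinterleave(data, rows, cols):
            """Inverse reordering: a burst in the channel stream ends up spread
            across different rows of the original ordering."""
            assert len(data) == rows * cols
            out = [None] * (rows * cols)
            k = 0
            for c in range(cols):
                for r in range(rows):
                    out[r * cols + c] = data[k]
                    k += 1
            return out

        rows, cols = 4, 8
        payload = list(range(rows * cols))
        tx = block_interleave(payload, rows, cols)
        tx[5:9] = ["X"] * 4                                 # a burst of 4 corrupted symbols
        rx = block_deinterleave(tx, rows, cols)
        print([i for i, v in enumerate(rx) if v == "X"])    # error positions, now far apart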

    A family of stereoscopic image compression algorithms using wavelet transforms

    With the standardization of JPEG-2000, wavelet-based image and video compression technologies are gradually replacing the popular DCT-based methods. In parallel, recent developments in autostereoscopic display technology are now threatening to revolutionize the way consumers are used to enjoying traditional 2-D display-based electronic media such as television, computers and movies. However, due to the two-fold bandwidth/storage requirement of stereoscopic imaging, an essential requirement of a stereo imaging system is efficient data compression. In this thesis, seven wavelet-based stereo image compression algorithms are proposed to take advantage of the higher data compaction capability and better flexibility of wavelets. [Continues.]
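
    A common baseline in stereo image compression, and a way to see where the two-fold redundancy can be removed, is to code one view independently and predict the other by per-block horizontal disparity compensation, coding only the prediction residual. The sketch below estimates block disparities with a SAD search; it is a generic baseline under assumed conventions (right-view pixels found shifted to the right in the left view), not one of the seven algorithms proposed in the thesis.

        import numpy as np

        def disparity_compensate(left, right, block=8, max_d=16):
            """Predict the right view from the left view with per-block horizontal
            disparities chosen by minimum SAD. Image sides are assumed to be
            multiples of block. Returns (prediction, residual)."""
            h, w = right.shape
            pred = np.zeros_like(right, dtype=float)
            for y in range(0, h, block):
                for x in range(0, w, block):
                    target = right[y:y+block, x:x+block].astype(float)
                    best_sad, best_patch = None, None
                    for d in range(0, min(max_d, w - block - x) + 1):
                        cand = left[y:y+block, x+d:x+d+block].astype(float)
                        sad = np.abs(target - cand).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best_patch = sad, cand
                    pred[y:y+block, x:x+block] = best_patch
            return pred, right.astype(float) - pred

        # Synthetic pair: the "right" view is the "left" view shifted by 5 pixels,
        # so the residual energy is far lower than that of the right view itself.
        rng = np.random.default_rng(1)
        left = rng.integers(0, 256, size=(64, 64)).astype(float)
        right = np.roll(left, -5, axis=1)
        _, residual = disparity_compensate(left, right)
        print(float((residual ** 2).mean()), float((right ** 2).mean()))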