73 research outputs found

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, focusing on methods that employ digital computing. The results of the survey are presented, including a description of each method and an assessment of image degradation and video data parameters. Present and near-term technology for implementing video data compression in high-speed imaging systems is then assessed, and the results are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Efficient Fractal Image Coding using Fast Fourier Transform

    Fractal coding is a novel technique for image compression. Though the technique has many attractive features, the large encoding time makes it unsuitable for real-time applications. In this paper, an efficient algorithm for fractal encoding which operates on the entire domain image instead of overlapping domain blocks is presented. The algorithm drastically reduces the encoding time as compared to the classical full search method. The reduction in encoding time is mainly due to the use of a modified cross-correlation based similarity measure. The implemented algorithm employs an exhaustive search of domain blocks and their isometry transformations to investigate their similarity with every range block. The application of the Fast Fourier Transform in the similarity measure calculation speeds up the encoding process. The proposed eight isometry transformations of a domain block exploit the properties of the Discrete Fourier Transform to minimize the number of Fast Fourier Transform calculations. Experimental studies on the proposed algorithm demonstrate that the encoding time is reduced drastically, with an average speedup factor of 538 with respect to the classical full search method, with comparable values of Peak Signal to Noise Ratio.
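    As a sketch of the FFT trick described above (scoring one range block against every position of the domain image in a single pass), the following Python fragment computes a normalized cross-correlation surface with FFTs. It is illustrative only: the paper's "modified cross-correlation" measure and its isometry handling are not reproduced here, and all function names are ours.

        import numpy as np

        def ncc_via_fft(domain, block):
            # Score `block` against every sliding position of `domain` using
            # cross-correlation computed in the frequency domain.
            H, W = domain.shape
            h, w = block.shape
            D = np.fft.fft2(domain)
            B = np.fft.fft2(block, s=(H, W))
            corr = np.fft.ifft2(D * np.conj(B)).real          # raw correlation
            # Per-position energy of the domain patch, also via FFT, so the
            # score is normalized and not biased toward bright regions.
            ones = np.fft.fft2(np.ones((h, w)), s=(H, W))
            energy = np.fft.ifft2(np.fft.fft2(domain ** 2) * np.conj(ones)).real
            score = corr / np.sqrt(energy * (block ** 2).sum() + 1e-12)
            return score[:H - h + 1, :W - w + 1]              # offsets where block fits

        rng = np.random.default_rng(0)
        domain = rng.random((64, 64))
        block = domain[10:18, 20:28].copy()                   # plant a known match
        scores = ncc_via_fft(domain, block)
        print(np.unravel_index(scores.argmax(), scores.shape))  # -> (10, 20)

    One FFT of the domain can be reused for every range block, which is where the large speedup over pixel-domain full search comes from.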

    Transform coding of pictorial data

    By using transform coding, image transmission rates as low as 0.5 bit/pel can be achieved. Generally, the bit rate reduction is achieved by allocating fewer bits to low-energy high-order coefficients. However, to ensure reasonably good picture quality, a large number of bits has to be allocated to the high-energy dc coefficients, both for fine quantization and for good channel error immunity. A technique has been developed that, in some cases, allows the dc coefficients to be estimated at the receiver, thus eliminating a major source of difficulty with respect to channel errors. [Continues.]
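    The abstract does not spell out the estimator, but the idea admits a simple sketch: the AC coefficients fix a block's shape up to a constant offset, so the receiver can recover that offset (the dc term) by blending the block's border with its already-decoded neighbours. The border-matching rule below is an assumption of ours, not the paper's method; it requires SciPy.

        import numpy as np
        from scipy.fft import idctn

        def decode_with_estimated_dc(coeffs, left_block, top_block):
            # Reconstruct an 8x8 block whose dc coefficient was never sent:
            # inverse-transform the AC part, then shift it so its border
            # matches the adjacent pixels of the decoded neighbours.
            ac = coeffs.copy()
            ac[0, 0] = 0.0
            shape = idctn(ac, norm='ortho')     # block up to a constant offset
            neighbour = np.concatenate([left_block[:, -1], top_block[-1, :]]).mean()
            own_border = np.concatenate([shape[:, 0], shape[0, :]]).mean()
            return shape + (neighbour - own_border)   # offset plays the dc role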

    High efficiency block coding techniques for image data.

    by Lo Kwok-tung. Thesis (Ph.D.)--Chinese University of Hong Kong, 1992. Includes bibliographical references. Contents (each chapter closes with concluding remarks and, where applicable, a note on publications):
    Chapter 1, Introduction: background (the need for image compression); an overview of image compression (predictive coding/DPCM, sub-band coding, transform coding, vector quantization, block truncation coding); block-based image coding techniques; goal of the work; organization of the thesis.
    Chapter 2, Block-Based Image Coding Techniques: statistical models of images (one- and two-dimensional); image fidelity criteria (objective and subjective); transform coding theory (transformation, quantization, coding, the JPEG international standard); vector quantization theory (codebook design and the LBG clustering algorithm); block truncation coding theory (optimal MSE block truncation coding; a generic BTC sketch follows this outline).
    Chapter 3, Development of New Orthogonal Transforms: the Weighted Cosine Transform (WCT) and the determination of its parameters α and β; the Simplified Cosine Transform (SCT); fast computational algorithms for both transforms and their computational requirements; performance evaluation using a statistical model and real images.
    Chapter 4, Pruning in Transform Coding of Images: direct fast algorithms for the DCT, WCT and SCT; pruning in the direct fast algorithms; operations saved by pruning; a generalized pruning algorithm for the DCT.
    Chapter 5, Efficient Encoding of the DC Coefficient in Transform Coding Systems: the Minimum Edge Difference (MED) predictor; performance evaluation; simulation results.
    Chapter 6, Efficient Encoding Algorithms for Vector Quantization of Images: the Sub-Codebook Searching (SCS) algorithm (formation of the sub-codebook, premature exit conditions in the searching process); the Predictive Sub-Codebook Searching (PSCS) algorithm; simulation results.
    Chapter 7, Predictive Classified Address Vector Quantization of Images: optimal three-level block truncation coding; classification of images using three-level BTC; the predictive mean removal technique; the simplified address VQ technique; the PCAVQ encoding process; simulation results.
    Chapter 8, Recapitulation and Topics for Future Investigation.
    Appendices: statistics of the monochrome and colour test images; a Fortran program listing for the pruned fast DCT algorithm; the training-set images used to build the codebook of the standard VQ scheme; a list of publications.
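    Block truncation coding, which the thesis builds on in Chapters 2 and 7, reduces each block to a one-bit-per-pixel bitmap plus two reconstruction levels. The sketch below implements the generic absolute-moment variant (AMBTC), not the thesis's optimal-MSE or three-level schemes.

        import numpy as np

        def ambtc_encode(block):
            # Bitmap marks pixels above the block mean; given that bitmap,
            # the MSE-optimal reconstruction levels are the two group means.
            mean = block.mean()
            bitmap = block > mean
            low = block[~bitmap].mean() if (~bitmap).any() else mean
            high = block[bitmap].mean() if bitmap.any() else mean
            return bitmap, low, high

        def ambtc_decode(bitmap, low, high):
            return np.where(bitmap, high, low)

        block = np.arange(16, dtype=float).reshape(4, 4)
        print(ambtc_decode(*ambtc_encode(block)))   # two-level approximation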

    Deep Pipeline Architecture for Fast Fractal Color Image Compression Utilizing Inter-Color Correlation

    Fractal compression is a well-known technique that encodes an image by mapping the image into itself, which requires a massive and repetitive search. The encoding time is therefore long, which is the main problem of the fractal algorithm. To reduce the encoding time, several hardware implementations have been developed. However, they are generally designed for grayscale images, and using them to encode colour images multiplies the encoding time by at least three. Therefore, in this paper a new high-speed hardware architecture is proposed for encoding RGB images in a short time. Unlike the conventional approach of encoding each colour component individually as a grayscale image, the proposed method encodes two of the colour components by mapping them directly to the most correlated component with a searchless encoding scheme, while the third component is encoded with a search-based scheme. This reduces the encoding time and also increases the compression rate. Parallel and deep-pipelining approaches are used to improve the processing time significantly. Furthermore, to halve memory accesses, the image is partitioned so that half of the matching operations reuse the same data fetched for the other half. Consequently, the proposed architecture can encode a 1024×1024 RGB image in as little as 12.2 ms with a compression ratio of 46.5, and is thus superior to state-of-the-art architectures.
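    The core of the searchless scheme can be sketched in a few lines: a block of one colour plane is mapped onto the co-located block of the most correlated plane with an affine fit, so no domain search is needed and only a scale/offset pair is stored per block. The function name and the contractivity bound below are illustrative assumptions, not taken from the paper's hardware design.

        import numpy as np

        def searchless_block_code(ref_block, target_block):
            # Least-squares fit: target ~= s * ref + o for one block pair.
            x = ref_block.ravel().astype(float)
            y = target_block.ravel().astype(float)
            var = ((x - x.mean()) ** 2).sum()
            s = 0.0 if var == 0.0 else ((x - x.mean()) * (y - y.mean())).sum() / var
            s = float(np.clip(s, -1.0, 1.0))   # keep the decoding iteration contractive
            o = y.mean() - s * x.mean()
            return s, o

    Because the reference block is fixed, each block of the two mapped components is encoded in constant time, which is what makes a deep pipeline practical.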

    Projection based edge recovery in low bit rate vector quantizers

    Data compression is probably the single most important factor in every information service being visualized and proposed by engineers. The effectiveness of such services is dependent upon the achievable compression of real-time speech and video signals. Several approaches to signal encoding have been proposed and realized, each with its unique advantages and costs. Large compression ratios can only be achieved through lossy source encoding methods. One such method is Vector Quantization (VQ). The lossy nature of such encoders implies that the encoding process is non-invertible. At low bit rates, lossy compression with conventional decoders (realized as a simple 'inverse' of the encoder) results in large subjective and objective distortions. The thrust of research is therefore to build 'intelligent' decoders that use a priori knowledge of human visual properties in the decoding process. In such a scenario, signal decoding poses itself as a recovery problem based on known a priori information. This is a study of the use of image recovery methods in lossy image encoding; specifically, of the problems and costs associated with low bit rate coding of photographic grayscale images, and of recovery approaches to alleviate those problems. The study investigates the application of the theory of Convex Projections (CP) to image recovery, with standard Vector Quantization as the target compression method. In particular, it examines an implementation of VQ with single-codebook encoding and multiple-codebook decoding. The method uses a convex projections based algorithm to iteratively project a coarsely encoded image onto a better codebook (or codebooks) during decoding, subject to certain a priori constraints. The objective is to make encoding independent of the edge regions of an image, which drastically reduces the number of edge vector representations at the encoder and hence results in fast searches. Such an approach also works better on images outside the training set, since encoding is less dependent on edges.
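    A minimal sketch of the convex-projections idea, assuming two example constraint sets: consistency with transmitted block statistics (standing in for the received VQ codes) and a band-limited smoothness prior (standing in for the visual-property constraints). The thesis's actual constraint sets and codebook projections are richer than this.

        import numpy as np

        def project_block_means(img, means, bs=4):
            # Orthogonal projection onto {x : every bs x bs block mean of x
            # equals the transmitted value}.
            out = img.copy()
            for i in range(0, img.shape[0], bs):
                for j in range(0, img.shape[1], bs):
                    blk = out[i:i + bs, j:j + bs]
                    blk += means[i // bs, j // bs] - blk.mean()
            return out

        def project_bandlimit(img, cutoff=0.125):
            # Orthogonal projection onto the subspace of images with no
            # spatial frequency above `cutoff` cycles/sample.
            F = np.fft.fft2(img)
            fy = np.abs(np.fft.fftfreq(img.shape[0]))[:, None]
            fx = np.abs(np.fft.fftfreq(img.shape[1]))[None, :]
            F[(fy > cutoff) | (fx > cutoff)] = 0.0
            return np.fft.ifft2(F).real

        def pocs_decode(coarse, means, iters=20):
            # Alternate projections starting from the coarse VQ decode; with
            # convex sets the iteration converges to a point consistent with both.
            x = coarse.astype(float)
            for _ in range(iters):
                x = project_block_means(project_bandlimit(x), means)
            return x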

    Digital image compression.

    Due to the rapid growth in information handling and transmission, there is a serious demand for more efficient data compression schemes. Compression schemes address themselves to speech, visual, and alphanumeric coded data. This thesis is concerned with the compression of visual data given in the form of still or moving pictures; such data is highly correlated both spatially and in context. A detailed study of some existing data compression systems is presented; in particular, the performance of DPCM was analysed by computer simulation and the results examined both subjectively and objectively. The adaptive form of the predictive encoder is discussed and two new algorithms are proposed, which increase the definition of the compressed image and reduce the overall mean square error. Two novel systems are proposed for image compression. The first is a bit-plane image coding system based on a hierarchic quadtree structure in the transform domain, using the Hadamard transform as a kernel. Good compression has been achieved with this scheme, particularly for images with low detail. The second scheme uses a learning automaton to predict the probability distribution of the grey levels of an image related to its spatial context and position. An optimal reward/punishment function is proposed such that the automaton converges to its steady state within 4000 iterations; such a high speed of convergence, together with Huffman coding, results in efficient compression for images and is shown to be applicable to other types of data. The performance of all the proposed systems has been evaluated by computer simulation and the results are presented both quantitatively and qualitatively. The advantages and disadvantages of each system are discussed and suggestions for improvement given.
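    For reference, the DPCM scheme analysed in the thesis reduces, in its simplest non-adaptive form, to the closed-loop predictor/quantizer below; the previous-pixel predictor and the uniform step size are simplifying assumptions of this sketch.

        import numpy as np

        def dpcm_encode(row, step=8):
            # Closed-loop DPCM: the encoder quantizes the prediction error and
            # tracks the decoder's reconstruction, so errors cannot accumulate.
            pred, codes = 128.0, []
            for p in row.astype(float):
                q = round((p - pred) / step)       # quantized residual (transmitted)
                codes.append(q)
                pred = float(np.clip(pred + q * step, 0, 255))
            return codes

        def dpcm_decode(codes, step=8):
            pred, out = 128.0, []
            for q in codes:
                pred = float(np.clip(pred + q * step, 0, 255))
                out.append(pred)
            return np.array(out)

        row = np.linspace(0, 255, 16).astype(np.uint8)
        print(np.abs(dpcm_decode(dpcm_encode(row)) - row).max())  # bounded by step/2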

    Network driven motion estimation for wireless video terminals

    Thesis (M.S.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1997. Includes bibliographical references (p. 101-102). By Wendi Beth Rabiner. M.S.