
    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, examining methods that employ digital computing. The survey results are presented, including a description of each method and an assessment of its image degradation and video data parameters. Present and near-term future technology for implementing video data compression in high-speed imaging systems is assessed, and the results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, together with approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Bibliography: p. 208-225.

    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
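
    The collage construction described above is straightforward to sketch. The Python/NumPy fragment below is a minimal illustration of the idea, not the dissertation's implementation: it assumes a greyscale image whose sides are multiples of the block size, a fixed range-block size, an exhaustive domain search, and no block isometries, and it fits the affine intensity map s·D + o to each range block by least squares.

    ```python
    import numpy as np

    def fractal_encode(img, rb=4):
        """For each rb x rb range block, find the 2rb x 2rb domain block
        whose downsampled pixels, under an affine map s*D + o, best
        approximate the range block (least-squares fit of s and o)."""
        h, w = img.shape
        step = 2 * rb
        domains = []
        for y in range(0, h - step + 1, step):
            for x in range(0, w - step + 1, step):
                d = img[y:y+step, x:x+step].astype(float)
                d = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))  # 2x2 averaging
                domains.append(((y, x), d.ravel()))
        code = []
        for y in range(0, h, rb):
            for x in range(0, w, rb):
                r = img[y:y+rb, x:x+rb].astype(float).ravel()
                best = None
                for pos, dv in domains:
                    A = np.column_stack([dv, np.ones_like(dv)])
                    (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                    s = float(np.clip(s, -0.9, 0.9))  # keep the map contractive
                    err = float(np.sum((s * dv + o - r) ** 2))
                    if best is None or err < best[0]:
                        best = (err, pos, s, o)
                code.append(((y, x),) + best[1:])
        return code

    def fractal_decode(code, shape, rb=4, iters=10):
        """Iterate the stored block maps from a flat grey image; the
        clipped contrast factor makes each map contractive, so the
        iteration converges to an approximation of the original."""
        img = np.full(shape, 128.0)
        for _ in range(iters):
            out = np.empty(shape)
            for (ry, rx), (dy, dx), s, o in code:
                d = img[dy:dy+2*rb, dx:dx+2*rb]
                d = d.reshape(rb, 2, rb, 2).mean(axis=(1, 3))
                out[ry:ry+rb, rx:rx+rb] = s * d + o
            img = out
        return img
    ```

    Note that the decoder never sees the original image: it only iterates the stored maps, which is the source of the rapid-decoding and resolution-independence claims.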

    Digital image compression


    Scalable video compression

    Thesis (M.S.), Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 85-88). By Joseph Bruce Stampleman.

    Shuttle/TDRSS Ku-band downlink study

    Five specific study tasks were completed to assess the adequacy of the baseline signal design approach, develop performance specifications for the return link hardware, and perform detailed design and parameter optimization. The results of these tasks show that the basic signal structure design is sound and that the goals can be met. The constraints this structure places on the return link hardware allow reasonable specifications to be written, so no extreme technical risk areas in equipment design are foreseen. A third channel can be added to the PM mode without seriously degrading the other services. Using only a PM mode was shown to be feasible; however, this will require the use of some digital TV transmission techniques. Each task and its results are summarized.

    High-performance compression of visual information - A tutorial review - Part I: Still Pictures

    Digital images have become an important source of information in the modern world of communication systems. In their raw form, digital images require a tremendous amount of memory. Many research efforts have been devoted to the problem of image compression in the last two decades. Two different compression categories must be distinguished: lossless and lossy. Lossless compression is achieved if no distortion is introduced in the coded image. Applications requiring this type of compression include medical imaging and satellite photography. For applications such as video telephony or multimedia applications, some loss of information is usually tolerated in exchange for a high compression ratio. In this two-part paper, the major building blocks of image coding schemes are overviewed. Part I covers still image coding, and Part II covers motion picture sequences. In this first part, still image coding schemes have been classified into predictive, block transform, and multiresolution approaches. Predictive methods are suited to lossless and low-compression applications. Transform-based coding schemes achieve higher compression ratios for lossy compression but suffer from blocking artifacts at high compression ratios. Multiresolution approaches are suited for lossy as well as for lossless compression. At high compression ratios under lossy coding, the typical artifact visible in the reconstructed images is the ringing effect. New applications in a multimedia environment have driven the need for new functionalities in image coding schemes. For that purpose, second-generation coding techniques segment the image into semantically meaningful parts, and parts of these methods have been adapted to work on arbitrarily shaped regions. In order to add another functionality, such as progressive transmission of the information, specific quantization algorithms must be defined. The final step in the compression scheme is codeword assignment. Finally, coding results are presented which compare state-of-the-art techniques for lossy and lossless compression. The different artifacts of each technique are highlighted and discussed, and the possibility of progressive transmission is illustrated.
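
    To make the block transform category concrete, the toy Python sketch below (an illustrative example, not one of the coders compared in the paper; the quantisation step q is an arbitrary choice) applies an 8x8 DCT per block, uniform scalar quantisation, and the inverse DCT. A coarse q makes the blocking artifacts discussed above visible along block boundaries.

    ```python
    import numpy as np
    from scipy.fft import dctn, idctn

    def block_dct_codec(img, q=24.0, b=8):
        """Toy block-transform codec: per-block 2-D DCT, uniform
        quantisation of the coefficients, inverse DCT. Assumes a
        greyscale image whose sides are multiples of b."""
        out = np.empty(img.shape, dtype=float)
        for y in range(0, img.shape[0], b):
            for x in range(0, img.shape[1], b):
                blk = img[y:y+b, x:x+b].astype(float) - 128.0  # level shift
                coef = dctn(blk, norm='ortho')
                coef = np.round(coef / q) * q                  # uniform quantiser
                out[y:y+b, x:x+b] = idctn(coef, norm='ortho') + 128.0
        return np.clip(out, 0, 255)
    ```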

    Renegotiation-based dynamic bandwidth allocation for self-similar VBR traffic

    The provision of QoS to application traffic depends heavily on how different traffic types are categorized and classified, and how the prioritization of these applications is managed. Bandwidth is the scarcest network resource, so a method is needed that distributes the available bandwidth in a network among different applications in such a way that each class or type of traffic receives its required QoS. In this dissertation, a new renegotiation-based dynamic resource allocation method for variable bit rate (VBR) traffic is presented. First, the pros and cons of available off-line methods for estimating the self-similarity level (represented by the Hurst parameter) of a VBR traffic trace are empirically investigated, and criteria for selecting measurement parameters for online resource management are developed. It is shown that wavelet analysis based methods, with their low computational complexity, are the strongest tools for estimating the Hurst parameter, compared to the variance-time method and the R/S pox plot. Therefore, the temporal energy distribution of a traffic arrival counting process among different frequency sub-bands is taken as a traffic descriptor, and a robust traffic rate predictor is developed using the Haar wavelet analysis. The empirical results show that the new on-line dynamic bandwidth allocation scheme for VBR traffic is superior, in terms of high utilization and low queuing delay, to traditional dynamic bandwidth allocation methods based on adaptive algorithms such as Least Mean Square, Recursive Least Square, and Mean Square Error. A method is also developed to minimize the number of bandwidth renegotiations, decreasing signaling costs on traffic schedulers (e.g. WFQ) and networks (e.g. ATM). It is also quantified that the introduced renegotiation-based bandwidth management scheme decreases the heavy-tailedness of queue size distributions, an inherent impact of traffic self-similarity. The new design improves on the utilization levels achieved in the literature, provisions given queue size constraints, and minimizes the number of renegotiations simultaneously. This renegotiation-based design is online and can be embedded practically into QoS management blocks, edge routers, Digital Subscriber Line Access Multiplexers (DSLAM), and rate adaptive DSL modems.
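
    As an illustration of the wavelet route to Hurst estimation, the Python fragment below is a minimal sketch of the standard wavelet-energy regression, not the dissertation's exact estimator: it decomposes a stationary increment series (such as differenced per-interval arrival counts) with PyWavelets and regresses the log2 detail-coefficient energy on scale; for fractional Gaussian noise that line has slope 2H - 1.

    ```python
    import numpy as np
    import pywt

    def hurst_wavelet(x, wavelet='haar', levels=8):
        """Estimate the Hurst parameter of a stationary increment
        series x from the scaling of wavelet detail energies:
        log2 E[d_j^2] grows linearly in scale j with slope 2H - 1."""
        coeffs = pywt.wavedec(np.asarray(x, dtype=float), wavelet, level=levels)
        details = coeffs[1:][::-1]        # finest scale (j = 1) first
        j = np.arange(1, len(details) + 1)
        energy = np.array([np.mean(d ** 2) for d in details])
        slope, _ = np.polyfit(j, np.log2(energy), 1)
        return (slope + 1.0) / 2.0

    # Sanity check: white noise should give H close to 0.5.
    print(hurst_wavelet(np.random.randn(2 ** 14)))
    ```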

    ERTS direct readout ground station study

    A system configuration which provides for a wide variety of user requirements is described. Two distinct user types are considered, and optimized configurations are provided for each. Independent satellite transmission systems allow simultaneous signal transmission to Regional Collection Centers via a high data rate channel and to local users who require near real time consumption of lower rate data. In order to maximize the ultimate utility of this study effort, a parametric system description is given, in essence providing a shopping list. To achieve these results, it was necessary to consider all technical disciplines associated with high resolution satellite imaging systems, including signal processing, modulation and coding, recording, and display techniques. A total systems study was performed.

    An Investigation of Orthogonal Wavelet Division Multiplexing Techniques as an Alternative to Orthogonal Frequency Division Multiplex Transmissions and Comparison of Wavelet Families and Their Children

    Recently, issues surrounding wireless communications have risen to prominence because of the increasing popularity of wireless applications. Bandwidth problems, and the difficulty of modulating signals across carriers, represent significant challenges. Every modulation scheme used to date has had limitations, and the use of the Discrete Fourier Transform in OFDM (Orthogonal Frequency Division Multiplex) is no exception. The restriction on further development of OFDM lies primarily in the type of transform at the heart of its system, the Fourier transform. OFDM suffers from sensitivity to peak-to-average power ratio and carrier frequency offset, and wastes some bandwidth on guard intervals between successive OFDM symbols. The discovery of the wavelet transform has opened up a number of potential applications, from image compression to watermarking and encryption. Very recently, work has been done to investigate the potential of using wavelet transforms within the communication space. This research further investigates a recently proposed, innovative modulation technique, Orthogonal Wavelet Division Multiplex, which utilises the wavelet transform, opening a new avenue for an alternative modulation scheme with some interesting potential characteristics. The wavelet transform has many families, and each family has children that differ in filter length. This research comprehensively investigates the new modulation scheme and proposes multi-level dynamic sub-banding as a tool for adapting to variable signal bandwidths. Furthermore, all compactly supported wavelet families and their associated children are investigated, evaluated against each other, and compared with OFDM. The linear computational complexity of the wavelet transform is lower than the O(N log N) complexity of the Fourier transform in OFDM. More important still is the operational complexity that determines cost effectiveness, such as the time response of the system, the memory consumption, and the number of iterative operations required for data processing. These complexities are investigated for all available compactly supported wavelet families and their children and compared with OFDM. The evaluation reveals which wavelet families perform more effectively than OFDM and, for each wavelet family, which children perform best. Based on these results, it is concluded that the wavelet modulation scheme has some interesting advantages over OFDM, such as lower complexity and bandwidth savings of up to 25%, due to the elimination of guard intervals and dynamic bandwidth allocation, which result in better cost effectiveness.
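
    The synthesis idea behind the scheme can be sketched briefly. In the Python fragment below (an illustration of the principle, not the system built in this thesis; the db4 wavelet, three decomposition levels, and BPSK symbols are arbitrary assumptions), data symbols are loaded into the wavelet coefficient slots, synthesised into a transmit waveform with the inverse DWT, and recovered with the forward DWT. Because a compactly supported orthogonal wavelet in periodised form yields a perfect-reconstruction orthogonal transform, no guard interval is inserted between symbols.

    ```python
    import numpy as np
    import pywt

    wavelet, level, n = 'db4', 3, 64

    symbols = np.random.choice([-1.0, 1.0], size=n)  # BPSK data

    # Split the symbol vector into the coefficient layout wavedec produces
    # (periodization mode keeps the total coefficient count equal to n).
    template = pywt.wavedec(np.zeros(n), wavelet, level=level,
                            mode='periodization')
    layout, i = [], 0
    for t in template:
        layout.append(symbols[i:i + len(t)])
        i += len(t)

    tx = pywt.waverec(layout, wavelet, mode='periodization')  # IDWT: modulate
    rx = pywt.wavedec(tx, wavelet, level=level,
                      mode='periodization')                   # DWT: demodulate
    recovered = np.concatenate(rx)[:n]
    print(np.allclose(recovered, symbols))  # True: orthogonal transform pair
    ```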