2,800 research outputs found

    Data compression techniques applied to high resolution high frame rate video technology

    An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of video data compression methods described in the open literature was conducted, focusing on methods that employ digital computing. The survey results are presented and include a description of each method together with an assessment of image degradation and video data parameters. An assessment is also made of present and near-term technology for implementing video data compression in a high-speed imaging system, and its results are discussed and summarized. The results of a study of a baseline HHVT video system, along with approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.

    Semiannual status report

    The work performed in the previous six months can be divided into three main areas: (1) transmission of images over local area networks (LANs); (2) coding of color-mapped (pseudo-color) images; and (3) low-rate video coding. A brief overview of the work done in the first two areas is presented; the third is reported in somewhat more detail.

    Optimum Implementation of Compound Compression of a Computer Screen for Real-Time Transmission in Low Network Bandwidth Environments

    Remote working has become increasingly prevalent in recent times. A large part of remote working involves sharing computer screens between servers and clients. The image content presented when sharing computer screens consists of both natural, camera-captured image data and computer-generated graphics and text. The attributes of natural camera-captured image data differ greatly from those of computer-generated image data. An image containing a mixture of natural camera-captured and computer-generated image data is known as a compound image. The research presented in this thesis focuses on the challenge of constructing a compound compression strategy that applies the ‘best fit’ compression algorithm to the mixed content found in a compound image. The research also involves analysis and classification of the types of data a given compound image may contain. While researching optimal types of compression, consideration is given to the computational overhead of each algorithm, because the research is being developed for real-time systems such as cloud computing services, where latency has a detrimental impact on the end-user experience. Previous and current state-of-the-art video codecs have been researched, along with many of the most recent publications from academia, in order to design and implement a novel low-complexity compound compression algorithm suitable for real-time transmission. The compound compression algorithm utilises a mixture of lossless and lossy compression algorithms with parameters that can be used to control its performance. An objective image quality assessment is needed to determine whether the proposed algorithm can produce an acceptable-quality image after processing. Both a traditional metric, the Peak Signal to Noise Ratio, and a newer approach designed specifically for compound images, the Structural Similarity Index, are used to define the quality of the decompressed image. Finally, the compression strategy is tested on a set of generated compound images. Using open-source software, the same images are compressed with the previous and current state-of-the-art video codecs to compare the three main metrics: compression ratio, computational complexity, and objective image quality.
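
    As a rough illustration of the two quality metrics named in this abstract, the sketch below computes PSNR with NumPy and SSIM with scikit-image's structural_similarity. The synthetic arrays stand in for a reference screen capture and its decompressed version; the image sizes, noise model, and 8-bit grayscale assumption are placeholders, not details of the thesis.

    # Illustrative sketch: objective quality metrics for a decompressed compound image.
    # Assumes 8-bit grayscale arrays of equal shape; the data here is synthetic.
    import numpy as np
    from skimage.metrics import structural_similarity

    def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
        """Peak Signal-to-Noise Ratio in dB between a reference and a test image."""
        mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

    # Hypothetical data: a reference image and a lightly distorted "decoded" version.
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)
    decoded = np.clip(reference.astype(np.int16) + rng.integers(-3, 4, reference.shape),
                      0, 255).astype(np.uint8)

    print(f"PSNR: {psnr(reference, decoded):.2f} dB")
    print(f"SSIM: {structural_similarity(reference, decoded, data_range=255):.4f}")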

    Prediction error image coding using a modified stochastic vector quantization scheme

    The objective of this paper is to provide an efficient yet simple method to encode the prediction error image of video sequences, based on a stochastic vector quantization (SVQ) approach that has been modified to cope with the intrinsically decorrelated nature of the prediction error image of video signals. In the SVQ scheme, the codewords are generated by stochastic techniques instead of being derived from a training set representative of the expected input image, as is usual in VQ. The performance of the scheme is shown for the particular case of segmentation-based video coding, although the technique can also be applied to motion-compensated hybrid coding schemes.
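
    A minimal sketch of the stochastic-codebook idea described above: codewords are drawn from a random process (here, zero-mean Gaussian noise) rather than trained on example images, and each block of a prediction-error image is mapped to its nearest codeword. The block size, codebook size, and noise model are illustrative assumptions, not the parameters of the cited scheme.

    # Illustrative stochastic VQ of a prediction-error image (assumed parameters).
    import numpy as np

    rng = np.random.default_rng(1)
    block, n_codewords = 4, 256                 # assumed block size and codebook size

    # Stochastic codebook: codewords generated by a random process instead of training.
    codebook = rng.normal(0.0, 8.0, size=(n_codewords, block * block))

    def encode_decode(error_image: np.ndarray) -> np.ndarray:
        h, w = error_image.shape
        out = np.zeros_like(error_image, dtype=np.float64)
        for y in range(0, h - h % block, block):
            for x in range(0, w - w % block, block):
                vec = error_image[y:y + block, x:x + block].reshape(-1)
                idx = np.argmin(np.sum((codebook - vec) ** 2, axis=1))  # nearest codeword
                out[y:y + block, x:x + block] = codebook[idx].reshape(block, block)
        return out

    # Hypothetical prediction-error image: small, roughly decorrelated residuals.
    residual = rng.normal(0.0, 8.0, size=(64, 64))
    reconstructed = encode_decode(residual)
    print("MSE:", np.mean((residual - reconstructed) ** 2))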

    Approximate trigonometric expansions with applications to signal decomposition and coding

    Signal representation and data coding for multidimensional signals have recently received considerable attention due to their importance to several modern technologies. Many useful contributions have been reported that employ wavelets and transform methods. For signal representation, it is always desirable that a signal be represented using a minimum number of parameters. Transform efficiency and ease of implementation are, to a large extent, mutually incompatible. If a stationary process is not periodic, then the coefficients of its Fourier expansion are not uncorrelated. With the exception of periodic signals, the expansion of such a process as a superposition of exponentials, particularly in the study of linear systems, needs no elaboration. In this research, stationary and non-periodic signals are represented using approximate trigonometric expansions. These expansions have a user-defined parameter that can be used to turn the transformation into a signal decomposition tool. It is shown that fast implementation of these expansions is possible using wavelets. These approximate trigonometric expansions are applied to multidimensional signals in a constrained environment where the dominant coefficients of the expansion are retained and the insignificant ones are set to zero. The signal is then reconstructed using this limited set of coefficients, thus leading to compression. Sample results for representing multidimensional signals are given to illustrate the efficiency of the proposed method. It is verified that, for a given number of coefficients, the proposed technique yields a higher signal-to-noise ratio than conventional techniques employing the discrete cosine transform.
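
    The coefficient-retention experiment described above can be sketched as follows, using the 2-D DCT as a stand-in transform (the paper's approximate trigonometric expansion is not reproduced here): transform the signal, keep only the largest coefficients, reconstruct, and measure the signal-to-noise ratio. The test image and the 5% retention fraction are assumptions for illustration.

    # Sketch of coefficient retention and SNR measurement with a 2-D DCT stand-in.
    import numpy as np
    from scipy.fft import dctn, idctn

    def snr_db(original: np.ndarray, reconstructed: np.ndarray) -> float:
        noise = original - reconstructed
        return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

    rng = np.random.default_rng(2)
    image = rng.random((128, 128))              # placeholder for a real image

    coeffs = dctn(image, norm="ortho")
    keep = 0.05                                 # retain the top 5% of coefficients (assumed)
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)  # zero insignificant terms

    reconstruction = idctn(sparse, norm="ortho")
    print(f"SNR at {keep:.0%} coefficients: {snr_db(image, reconstruction):.2f} dB")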

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image and video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation.
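
    A minimal sketch of the graph-spectral view described above: the pixels of a small patch become graph nodes, edge weights come from a Gaussian of intensity differences on a 4-connected grid, and the eigenvectors of the graph Laplacian give a graph Fourier transform in which the patch can be low-pass filtered. The patch size, weight kernel, and spectral cutoff are illustrative assumptions, not taken from the article.

    # Illustrative graph spectral filtering of an image patch (assumed parameters).
    import numpy as np

    rng = np.random.default_rng(3)
    patch = rng.random((8, 8))                  # placeholder 8x8 image patch
    n = patch.size
    intensity = patch.reshape(-1)

    # Build a 4-connected grid graph with weights reflecting intensity similarity.
    W = np.zeros((n, n))
    sigma = 0.2
    for y in range(8):
        for x in range(8):
            i = y * 8 + x
            for dy, dx in ((0, 1), (1, 0)):
                yy, xx = y + dy, x + dx
                if yy < 8 and xx < 8:
                    j = yy * 8 + xx
                    w = np.exp(-((intensity[i] - intensity[j]) ** 2) / (2 * sigma ** 2))
                    W[i, j] = W[j, i] = w

    L = np.diag(W.sum(axis=1)) - W              # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)        # graph Fourier basis (eigenvectors of L)

    spectrum = eigvecs.T @ intensity            # graph Fourier transform of the patch
    spectrum[32:] = 0.0                         # crude low-pass: drop the high-frequency half
    smoothed = (eigvecs @ spectrum).reshape(8, 8)
    print("energy kept:", np.sum(smoothed ** 2) / np.sum(patch ** 2))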

    Design of a digital compression technique for shuttle television

    The performance and hardware complexity of data compression algorithms applicable to color television signals were studied to assess the feasibility of digital compression techniques for shuttle communications applications. For return-link communications, it is shown that a nonadaptive two-dimensional DPCM technique compresses the bandwidth of field-sequential color TV to about 13 Mbps and requires less than 60 watts of secondary power. For forward-link communications, a facsimile coding technique is recommended that provides high-resolution slow-scan television on a 144 kbps channel. The onboard decoder requires about 19 watts of secondary power.
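
    A toy sketch of nonadaptive two-dimensional DPCM of the kind named above: each pixel is predicted from its previously reconstructed left and upper neighbours, the prediction error is uniformly quantized, and the decoder mirrors the predictor. The predictor weights, quantizer step, and frame data are assumptions for illustration, not the shuttle design.

    # Toy nonadaptive 2-D DPCM codec: predict from causal neighbours, quantize the
    # prediction error, and reconstruct. Parameter values are assumed for illustration.
    import numpy as np

    STEP = 8                                    # assumed uniform quantizer step size

    def dpcm_2d(image: np.ndarray):
        h, w = image.shape
        recon = np.zeros((h, w), dtype=np.float64)
        symbols = np.zeros((h, w), dtype=np.int32)
        for y in range(h):
            for x in range(w):
                left = recon[y, x - 1] if x > 0 else 128.0
                up = recon[y - 1, x] if y > 0 else 128.0
                pred = 0.5 * (left + up)        # simple 2-D predictor
                symbols[y, x] = int(np.round((image[y, x] - pred) / STEP))
                recon[y, x] = np.clip(pred + symbols[y, x] * STEP, 0, 255)
        return symbols, recon

    rng = np.random.default_rng(4)
    frame = rng.integers(0, 256, size=(32, 32)).astype(np.float64)   # placeholder frame
    symbols, reconstruction = dpcm_2d(frame)
    print("max abs error:", np.max(np.abs(frame - reconstruction)))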

    Advanced Television Research Program

    Contains an introduction and reports on twelve research projects. Advanced Television Research Program. National Science Foundation Grant MIP 87-14969. National Science Foundation Fellowship. Kodak Fellowship.