Implementation issues in source coding
An edge-preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated
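The abstract above describes its scheme only as a DPCM modification; the details are not given, so the following is a minimal sketch of plain previous-sample DPCM, not the authors' algorithm. The function names and the identity `quantize` default are illustrative assumptions; replacing `quantize` with a real quantizer is what makes the scheme lossy rather than lossless.

```python
import numpy as np

def dpcm_encode(samples, quantize=lambda e: e):
    """Encode a 1-D signal with previous-sample DPCM.

    `quantize` is the identity for lossless operation; a real quantizer
    here would make the scheme lossy (hypothetical sketch, not the
    paper's edge-preserving variant).
    """
    residuals = np.empty_like(samples)
    prediction = 0  # initial predictor value agreed with the decoder
    for i, x in enumerate(samples):
        e = quantize(x - prediction)  # prediction residual
        residuals[i] = e
        prediction = prediction + e   # track the decoder's reconstruction
    return residuals

def dpcm_decode(residuals):
    """Invert dpcm_encode (exactly, in the lossless case)."""
    return np.cumsum(residuals)

signal = np.array([100, 102, 101, 105, 110])
res = dpcm_decode(dpcm_encode(signal))  # round-trips losslessly
```

Because the encoder predicts from its own reconstruction, quantization error cannot accumulate; that property carries over to lossy operation as well.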
Data compression techniques applied to high resolution high frame rate video technology
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of methods of video data compression, described in the open literature, was conducted. The survey examines compression methods employing digital computing, and its results are presented, including a description of each method and an assessment of image degradation and video data parameters. An assessment is made of present and near-term future technology for the implementation of video data compression in high-speed imaging systems. Results of the assessment are discussed and summarized. The results of a study of a baseline HHVT video system, and approaches for implementation of video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended
Motion-Compensated Coding and Frame-Rate Up-Conversion: Models and Analysis
Block-based motion estimation (ME) and compensation (MC) techniques are widely used in modern video processing algorithms and compression systems. The great variety of video applications and devices results in numerous compression specifications, in particular a diversity of frame-rates and bit-rates. In this paper, we study the effect of frame-rate and compression bit-rate on block-based ME and MC as commonly utilized in inter-frame coding and frame-rate up-conversion (FRUC). This joint examination yields a comprehensive foundation for comparing MC procedures in coding and FRUC. First, the video signal is modeled as a noisy translational motion of an image. Then, we theoretically model the motion-compensated prediction of available and absent frames, as in coding and FRUC applications, respectively. The theoretic MC-prediction error is further analyzed, and its autocorrelation function is calculated for coding and FRUC applications. We show a linear relation between the variance of the MC-prediction error and the temporal distance. While the relevant distance in MC-coding is between the predicted and reference frames, MC-FRUC is affected by the distance between the available frames used for the interpolation. Moreover, the dependency on temporal distance implies an inverse effect of the frame-rate. FRUC performance analysis considers the prediction-error variance, since it equals the mean-squared error of the interpolation. MC-coding analysis, however, requires the entire autocorrelation function of the error; hence, analytic simplicity is beneficial. Therefore, we propose two constructions of a separable autocorrelation function for the prediction error in MC-coding. We conclude by comparing our estimations with experimental results
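The block-based ME that the abstract analyzes can be illustrated with an exhaustive-search block-matching sketch: for a block in the current frame, search a window in the reference frame for the candidate minimizing the sum of absolute differences (SAD). The block size, search range, and synthetic test data below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def block_match(ref, cur, block_xy, block=8, search=4):
    """Exhaustive-search block matching: return the motion vector that
    minimizes SAD between a `block`x`block` patch of `cur` at `block_xy`
    and candidate patches of `ref` within +/- `search` pixels."""
    y0, x0 = block_xy
    target = cur[y0:y0 + block, x0:x0 + block].astype(int)
    best_mv, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate falls outside the reference frame
            cand = ref[y:y + block, x:x + block].astype(int)
            sad = np.abs(cand - target).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

# Synthetic check: shift a random frame and recover the motion vector.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(32, 32))
cur = np.roll(ref, shift=(-2, -1), axis=(0, 1))  # content displaced by (2, 1)
mv, sad = block_match(ref, cur, (8, 8))           # expect mv == (2, 1), sad == 0
```

In the paper's terms, the residual `cur[block] - ref[block + mv]` is the MC-prediction error whose variance grows with the temporal distance between the two frames.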
Weighted universal image compression
We describe a general coding strategy leading to a family of universal image compression systems designed to give good performance in applications where the statistics of the source to be compressed are not available at design time or vary over time or space. The basic approach considered uses a two-stage structure in which the single source code of traditional image compression systems is replaced with a family of codes designed to cover a large class of possible sources. To illustrate this approach, we consider the optimal design and use of two-stage codes containing collections of vector quantizers (weighted universal vector quantization), bit allocations for JPEG-style coding (weighted universal bit allocation), and transform codes (weighted universal transform coding). Further, we demonstrate the benefits to be gained from the inclusion of perceptual distortion measures and optimal parsing. The strategy yields two-stage codes that significantly outperform their single-stage predecessors. On a sequence of medical images, weighted universal vector quantization outperforms entropy-coded vector quantization by over 9 dB. On the same data sequence, weighted universal bit allocation outperforms a JPEG-style code by over 2.5 dB. On a collection of mixed text and image data, weighted universal transform coding outperforms a single, data-optimized transform code (which gives performance almost identical to that of JPEG) by over 6 dB
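The two-stage structure described above can be sketched in miniature: stage one selects which code from the family to use (and signals its index as side information), stage two encodes the block with the selected code. The two toy codebooks and the uniform block cost below are illustrative assumptions; the paper's designs are optimized jointly with weights, which this sketch omits.

```python
import numpy as np

def encode_block(block, codebooks):
    """Two-stage encoding sketch: stage 1 picks the codebook whose best
    codeword gives the lowest distortion (the codebook index is the side
    information), stage 2 vector-quantizes the block with it."""
    best = None
    for cb_idx, cb in enumerate(codebooks):
        d = ((cb - block) ** 2).sum(axis=1)  # squared error per codeword
        cw = int(d.argmin())                 # nearest codeword in this codebook
        if best is None or d[cw] < best[2]:
            best = (cb_idx, cw, float(d[cw]))
    return best  # (codebook index, codeword index, distortion)

# Two hypothetical codebooks tuned to different source classes:
smooth = np.array([[10.0, 10.0], [200.0, 200.0]])  # flat blocks
edges = np.array([[0.0, 255.0], [255.0, 0.0]])     # edge blocks
cb_idx, cw_idx, dist = encode_block(np.array([250.0, 5.0]), [smooth, edges])
# The edge-like block selects the edge codebook (cb_idx == 1).
```

Covering a class of sources with a family of codes in this way is what lets the system adapt when source statistics vary over time or space.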
Evaluation of diversity gains for DVB-T systems
Copyright © 2008 Eurescom GmbH. The requirements for future DVB-T/H networks demand that broadcasters design and deploy networks that provide ubiquitous reception in challenging indoor and other obstructed situations. It is essential that such networks are designed cost-effectively and with minimized environmental impact. The EC-funded project PLUTO has, since its start in 2006, explored the use of diversity to improve coverage in these difficult situations. The purpose of this paper is to investigate the performance of transmit and receive diversity gains with two antennas to improve the reception of DVB-T/H systems operating in different realistic propagation conditions, through a series of tests using the Spirent dual-channel emulator
Graph Spectral Image Processing
The recent advent of graph signal processing (GSP) has spurred intensive studies of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph, and apply GSP tools for processing and analysis of the signal in the graph spectral domain. In this article, we overview recent graph spectral techniques in GSP specifically for image/video processing. The topics covered include image compression, image restoration, image filtering, and image segmentation
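The core construction the abstract mentions, a graph whose edge weights reflect image structure and whose Laplacian eigenvectors serve as a graph Fourier transform (GFT), can be shown on a tiny 1-D "patch". The path-graph topology, the hand-picked weights, and the function name below are illustrative assumptions; real methods derive weights from pixel similarity.

```python
import numpy as np

def gft(signal, weights):
    """Graph Fourier transform of a signal on a path graph.

    weights[i] is the edge weight between nodes i and i+1; a small weight
    across an image edge decouples the two sides, adapting the basis to
    the discontinuity.
    """
    n = len(signal)
    W = np.zeros((n, n))
    for i, w in enumerate(weights):
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W        # combinatorial graph Laplacian L = D - W
    evals, evecs = np.linalg.eigh(L)      # GFT basis = Laplacian eigenvectors
    return evals, evecs.T @ signal        # graph frequencies, GFT coefficients

# A 1-D "patch" with a sharp edge between samples 1 and 2; the weak middle
# weight lets the transform compact the energy despite the discontinuity.
row = np.array([10.0, 12.0, 200.0, 202.0])
freqs, coeffs = gft(row, weights=[1.0, 0.1, 1.0])
```

Since the eigenvectors are orthonormal, the transform preserves signal energy, and the smallest graph frequency is zero (the constant eigenvector of any connected graph's Laplacian).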