A Novel Rate Control Algorithm for Onboard Predictive Coding of Multispectral and Hyperspectral Images
Predictive coding is attractive for onboard compression on spacecraft thanks to its low computational complexity, modest memory requirements, and ability to control quality accurately on a pixel-by-pixel basis. Traditionally, predictive compression has focused on the lossless and near-lossless modes of operation, where the maximum error can be bounded but the rate of the compressed image is variable. Rate control is considered a challenging problem for predictive encoders because of the dependencies between quantization and prediction in the feedback loop, and the lack of a signal representation that packs the signal's energy into a few coefficients. In this paper, we show that it is possible to design a rate control scheme suitable for onboard implementation. In particular, we propose a general framework for selecting quantizers in each spatial and spectral region of an image so as to achieve the desired target rate while minimizing distortion. The rate control algorithm supports lossy and near-lossless compression, as well as any in-between type of compression, e.g., lossy compression with a near-lossless constraint. While this framework is independent of the specific predictor used, to demonstrate its performance we tailor it to the predictor adopted by the CCSDS-123 lossless compression standard, obtaining an extension that performs lossless, near-lossless, and lossy compression in a single package. We show that the rate controller achieves excellent accuracy in the output rate and strong rate-distortion performance, and is highly competitive with state-of-the-art transform coding.
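The core idea of selecting a quantizer per region to meet a rate budget can be illustrated with a minimal sketch. This is not the paper's algorithm: it models the coded rate as the empirical entropy of uniformly quantized prediction residuals and greedily picks the smallest quantizer index meeting the target (smaller step means less distortion, so this minimizes distortion subject to the rate constraint). The function names `estimate_rate` and `select_step` are hypothetical.

```python
import numpy as np

def estimate_rate(residuals, q):
    """Estimate coded rate (bits/sample) as the entropy of the residuals
    uniformly quantized with step 2*q + 1 (a near-lossless-style quantizer;
    a simplified stand-in for the paper's rate model)."""
    idx = np.round(residuals / (2 * q + 1)).astype(int)
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def select_step(residuals, target_rate, q_max=64):
    """Smallest quantizer index q whose estimated rate meets the target.
    Since distortion grows with q, the first q that satisfies the rate
    constraint is also the least-distortion choice under this model."""
    for q in range(q_max + 1):
        if estimate_rate(residuals, q) <= target_rate:
            return q
    return q_max
```

In the paper's setting this selection would be made independently for each spatial/spectral region, with the feedback between quantization and prediction handled by the full framework.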
Data compression techniques applied to high resolution high frame rate video technology
An investigation is presented of video data compression applied to microgravity space experiments using High Resolution High Frame Rate Video Technology (HHVT). An extensive survey of digital video data compression methods described in the open literature was conducted, and its results are presented, including a description of each method and an assessment of image degradation and video data parameters. Present and near-term technology for implementing video data compression in a high-speed imaging system is assessed, and the results of that assessment are discussed and summarized. The results of a study of a baseline HHVT video system, together with approaches for implementing video data compression, are presented. Case studies of three microgravity experiments are presented, and specific compression techniques and implementations are recommended.
Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation
The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. The PRNU can therefore be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images, these challenges have been successfully addressed and the method for estimating a sensor's PRNU pattern is well established. However, various additional challenges related to the generation of videos remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate a filtering procedure applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames to the PRNU estimation process at the macroblock level. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric to more than five times that of the conventional estimation method tailored for photos.
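The weighted estimation step can be sketched in a few lines. This is a simplified, hypothetical version: it uses a crude moving-average denoiser in place of the wavelet denoiser standard in PRNU work, and per-frame scalar weights in place of the paper's macroblock-level weights derived from decoding parameters. The names `box_denoise` and `prnu_fingerprint` are assumptions, not the authors' API.

```python
import numpy as np

def box_denoise(img, k=3):
    """Crude denoiser: k x k moving average (stand-in for the wavelet
    denoiser normally used to extract the noise residual)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def prnu_fingerprint(frames, weights):
    """Weighted maximum-likelihood-style PRNU estimate:
        K = sum_i w_i * W_i * I_i / sum_i w_i * I_i**2,
    where W_i = I_i - denoise(I_i) is the noise residual of frame I_i.
    The scalar weights w_i stand in for the compression-quality weights
    the paper derives at the macroblock level."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, w in zip(frames, weights):
        img = img.astype(float)
        res = img - box_denoise(img)
        num += w * res * img
        den += w * img * img
    return num / np.maximum(den, 1e-8)
```

Down-weighting heavily compressed frames (or blocks) keeps their attenuated, artifact-ridden residuals from diluting the fingerprint.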
Removal Of Blocking Artifacts From JPEG-Compressed Images Using An Adaptive Filtering Algorithm
The aim of this research was to develop an algorithm that produces a considerable improvement in the quality of JPEG images by removing blocking and ringing artifacts, irrespective of the level of compression present in the image. We review several published related works and then present a computationally efficient algorithm for reducing the blocking and Gibbs oscillation artifacts commonly present in JPEG-compressed images. The algorithm alpha-blends a smoothed version of the image with the original image; the blending is controlled by a limit factor that accounts for the amount of compression present and by local edge information derived from the application of a Prewitt filter. In addition, the actual value of the blending coefficient (α) is derived from the local Mean Structural Similarity Index Measure (MSSIM), which is likewise adjusted by a factor that accounts for the amount of compression present. We present our results alongside those of a variety of other papers whose authors used other post-compression filtering methods.
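The edge-aware alpha-blending idea can be sketched as follows. This is a minimal illustration, not the published algorithm: a single global `strength` knob stands in for the compression-dependent limit factor and MSSIM-derived coefficient, and a naive filtering helper replaces an optimized implementation. The names `conv2` and `deblock` are hypothetical.

```python
import numpy as np

def conv2(img, k):
    """Naive 'same'-size 2-D filtering with edge padding (kernel is not
    flipped; for the symmetric kernels used here that is immaterial)."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def deblock(img, strength=0.8):
    """Alpha-blend a smoothed image with the original. The blend weight
    alpha shrinks near edges found by a Prewitt filter, so edges stay
    sharp while flat regions (where blocking is most visible) are
    smoothed. 'strength' is a hypothetical stand-in for the paper's
    compression-dependent limit factor."""
    img = img.astype(float)
    smooth = conv2(img, np.ones((3, 3)) / 9.0)
    px = conv2(img, np.array([[-1, 0, 1]] * 3, dtype=float))      # Prewitt x
    py = conv2(img, np.array([[-1, 0, 1]] * 3, dtype=float).T)    # Prewitt y
    edges = np.hypot(px, py)
    edges = edges / (edges.max() + 1e-8)      # 0 = flat, 1 = strongest edge
    alpha = strength * (1.0 - edges)          # smooth flat areas only
    return alpha * smooth + (1.0 - alpha) * img
```

In flat regions `smooth` equals `img`, so the blend is a no-op there except where blocking discontinuities exist; at genuine edges `alpha` drops toward zero and the original pixels pass through.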
Lossy Compression of Remote Sensing Images with Controllable Distortions
In this chapter, approaches to providing a desired quality of remote sensing images compressed in a lossy manner are considered. It is shown that, under certain conditions, this can be done automatically and quickly using prediction of coder performance parameters. The main parameters (metrics) are the mean square error (MSE) and peak signal-to-noise ratio (PSNR) of the introduced losses (distortions), although prediction of other important metrics is also possible. Given such a prediction, the quantization step of a coder can be set so as to keep distortions at or below a desired level without compression/decompression iterations for a single-channel image. It is shown that this approach can also be exploited in three-dimensional (3D) compression of multichannel images to produce a larger compression ratio (CR) with the same or smaller introduced distortions than component-wise compression of multichannel data. The proposed methods are verified on test and real-life images.
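The iteration-free setting of a quantization step can be illustrated with the simplest possible predictor, assuming the standard high-rate uniform-quantizer model MSE ≈ Δ²/12 together with the PSNR definition PSNR = 10·log₁₀(peak²/MSE). The chapter's actual predictors are more sophisticated; the function names below are hypothetical.

```python
import math

def target_mse(psnr_db, peak=255.0):
    """MSE implied by a desired PSNR: PSNR = 10 * log10(peak**2 / MSE)."""
    return peak ** 2 / (10.0 ** (psnr_db / 10.0))

def predict_quant_step(psnr_db, peak=255.0):
    """Quantization step predicted from the high-rate uniform-quantizer
    model MSE ~= step**2 / 12 -- a first-order stand-in for the learned
    coder-performance predictors discussed in the chapter."""
    return math.sqrt(12.0 * target_mse(psnr_db, peak))
```

For example, a 40 dB target on 8-bit data implies an MSE of about 6.5 and a step of roughly 8.8; a real predictor would correct this model per coder and per image.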
Removal Of Blocking Artifacts From JPEG-Compressed Images Using Neural Network
The goal of this research was to develop a neural network that produces a considerable improvement in the quality of JPEG-compressed images, irrespective of the compression level present. To obtain a computationally efficient algorithm for reducing blocking and Gibbs oscillation artifacts, an alpha-blend filter [7] was used to post-process JPEG-compressed images, reducing noise and artifacts without losing image detail. The alpha blending was controlled by a limit factor that accounts for the amount of compression present and for local information derived from applying a Prewitt filter to the input JPEG image. The outcome of the modified alpha blend was then improved by a trained neural network and compared with various other published works [7][9][11][14][20][23][30][32][33][35][37] whose authors used post-compression filtering methods.