7,098 research outputs found
Model for Estimation of Bounds in Digital Coding of Seabed Images
This paper proposes a novel model for estimating bounds in the digital coding of images. Entropy coding of images is exploited to measure the useful information content of the data. The bit rate achieved by reversible compression, analyzed via the rate-distortion theory approach, accounts for both the contribution of the observation noise and the intrinsic information of the hypothetical noise-free image. Assuming a Laplacian probability density function for the quantizer input signal, SQNR gains are calculated for an image predictive coding system with a non-adaptive quantizer, for white and correlated noise respectively. The proposed model is evaluated on seabed images; however, it can be applied to any signal with a Laplacian distribution.
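The Laplacian assumption above leads directly to a closed-form bit-rate bound. The following sketch (not the paper's code; the uniform-quantizer step and the high-rate approximation are assumptions for illustration) estimates the Laplacian scale from prediction residuals and evaluates the corresponding entropy bound:

```python
import numpy as np

def laplacian_rate_bound(residuals, step=1.0):
    """Bit-rate bound for predictive coding, assuming the quantizer
    input (prediction residuals) follows a Laplacian pdf.

    The differential entropy of a Laplacian with scale b is
    log2(2*e*b) bits; for a uniform quantizer with a small step D,
    the output entropy is approximately h(X) - log2(D) bits/sample
    (high-rate approximation)."""
    # ML estimate of the Laplacian scale: b = E|x - median(x)|
    b = np.mean(np.abs(residuals - np.median(residuals)))
    h = np.log2(2 * np.e * b)       # differential entropy, bits
    return h - np.log2(step)        # approximate bits per sample

# toy usage: synthetic Laplacian residuals with scale 2.0
rng = np.random.default_rng(0)
r = rng.laplace(loc=0.0, scale=2.0, size=100_000)
rate = laplacian_rate_bound(r, step=1.0)
# analytic value for this toy case: log2(4e) ≈ 3.44 bits/sample
```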
Quality Modeling Under A Relaxed Natural Scene Statistics Model
Information-theoretic image quality assessment (IQA) models such as Visual
Information Fidelity (VIF) and Spatio-temporal Reduced Reference Entropic
Differences (ST-RRED) have enjoyed great success by seamlessly integrating
natural scene statistics (NSS) with information theory. The Gaussian Scale
Mixture (GSM) model that governs the wavelet subband coefficients of natural
images forms the foundation for these algorithms. However, the explosion of
user-generated content on social media, which is typically distorted by one or
more of many possible unknown impairments, has revealed the limitations of
NSS-based IQA models that rely on the simple GSM model. Here, we seek to
elaborate the VIF index by deriving useful properties of the Multivariate
Generalized Gaussian Distribution (MGGD), and using them to study the behavior
of VIF under a Generalized GSM (GGSM) model.
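The information-fidelity idea underlying VIF can be illustrated with a heavily simplified scalar sketch (the actual VIF operates on wavelet subbands under the GSM model; the block size, channel model `dist = g*ref + v`, and "neural noise" variance used here are illustrative assumptions, not the published algorithm):

```python
import numpy as np

def vif_scalar(ref, dist, sigma_n2=0.1, win=8):
    """Toy scalar analogue of VIF: per block, fit a gain-plus-noise
    channel dist = g*ref + v, then compare the information the
    distorted signal carries about the source with the information
    the reference carries, under additive 'neural noise' sigma_n2."""
    num = den = 0.0
    for i in range(0, len(ref) - win + 1, win):
        r, d = ref[i:i + win], dist[i:i + win]
        s2 = max(r.var(), 1e-10)                       # local source variance
        crc = np.mean((r - r.mean()) * (d - d.mean())) # cross-covariance
        g = crc / s2                                   # channel gain
        sv2 = max(d.var() - g * crc, 1e-10)            # channel noise variance
        num += np.log2(1 + g * g * s2 / (sv2 + sigma_n2))
        den += np.log2(1 + s2 / sigma_n2)
    return num / den

# an undistorted image scores ~1; added noise drives the score below 1
rng = np.random.default_rng(3)
ref = rng.standard_normal(1024)
v_same = vif_scalar(ref, ref)
v_dist = vif_scalar(ref, ref + 0.5 * rng.standard_normal(1024))
```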
Towards a full-reference, information-theoretic quality assessment method for X-ray images
This work aims at defining an information-theoretic quality assessment technique for cardiovascular X-ray images, using a full-reference scheme (relying on averaging a sequence to obtain a noiseless reference). With the growth of advanced signal processing in medical imaging, such an approach will enable objective comparisons of the quality of processed images. A concept for describing the quality of an image is to express it in terms of its information capacity. Shannon derived this capacity for noisy channel coding. However, for X-ray images the noise is signal-dependent and non-additive, so Shannon's theorem is not directly applicable. To overcome this complication, we exploit the fact that any invertible mapping of a signal does not change its information content. We show that it is possible to transform the images in such a way that the Shannon theorem can be applied. A general method for calculating such a transformation is used, given a known relation between signal mean and noise standard deviation. After making the noise signal-independent, it is possible to assess the information content of an image and to calculate an overall quality metric (e.g. information capacity) which includes the effects of sharpness, contrast and noise. We have applied this method to phantom images under different acquisition conditions and computed the information capacity for those images. We aim to show that the results of this assessment are consistent with variations in noise, contrast and sharpness introduced by system settings and image processing.
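The transformation the abstract describes is a variance-stabilizing transform built from the known mean-to-noise-std relation, f(x) = ∫ dx/σ(x). A minimal numeric sketch, assuming Poisson-like X-ray noise with σ(mean) = √mean (the grid bounds and resolution are illustrative choices, not from the paper):

```python
import numpy as np

def make_vst(sigma_of_mean, lo, hi, n=4096):
    """Build a variance-stabilizing transform f with f'(x) = 1/sigma(x),
    i.e. f(x) = integral of dx/sigma(x) over [lo, hi].  After applying
    f, signal-dependent noise becomes approximately additive with unit
    variance, so Shannon's capacity result can be applied."""
    grid = np.linspace(lo, hi, n)
    f = np.concatenate(([0.0], np.cumsum(np.diff(grid) / sigma_of_mean(grid[1:]))))
    return lambda x: np.interp(x, grid, f)

# Poisson-like noise, sigma(mean) = sqrt(mean): the numeric transform
# then approximates the classical 2*sqrt(x) stabilizer.
rng = np.random.default_rng(1)
means = rng.uniform(50, 500, size=50_000)     # signal levels
x = rng.poisson(means).astype(float)          # signal-dependent noise
vst = make_vst(np.sqrt, lo=1.0, hi=700.0)
noise = vst(x) - vst(means)
# noise.std() is ~1 regardless of signal level
```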
A Detail Based Method for Linear Full Reference Image Quality Prediction
In this paper, a novel Full Reference method is proposed for image quality
assessment, using the combination of two separate metrics to measure the
perceptually distinct impact of detail losses and of spurious details. To this
purpose, the gradient of the impaired image is locally decomposed as a
predicted version of the original gradient, plus a gradient residual. It is
assumed that the detail attenuation identifies the detail loss, whereas the
gradient residuals describe the spurious details. It turns out that the
perceptual impact of detail losses is roughly linear with the loss of the
positional Fisher information, while the perceptual impact of the spurious
details is roughly proportional to a logarithmic measure of the signal to
residual ratio. The affine combination of these two metrics forms a new index
strongly correlated with the empirical Differential Mean Opinion Score (DMOS)
for a significant class of image impairments, as verified for three independent
popular databases. The method allowed alignment and merging of DMOS data coming
from these different databases to a common DMOS scale by affine
transformations. Unexpectedly, the DMOS scale setting is possible by the
analysis of a single image affected by additive noise.
Comment: 15 pages, 9 figures. Copyright notice: The paper has been accepted for publication in the IEEE Transactions on Image Processing on 19/09/2017 and the copyright has been transferred to the IEEE.
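The core decomposition of this abstract — the impaired gradient expressed as a predicted (scaled) version of the reference gradient plus a residual — can be sketched as follows. This is a global least-squares simplification of the paper's local decomposition; the variable names and toy data are illustrative assumptions:

```python
import numpy as np

def decompose_gradient(grad_ref, grad_imp, eps=1e-8):
    """Decompose the impaired image gradient as a scaled (predicted)
    version of the reference gradient plus a residual:
        grad_imp = g * grad_ref + residual.
    A gain g < 1 indicates detail attenuation (detail loss); the
    residual captures spurious details."""
    g = np.sum(grad_ref * grad_imp) / (np.sum(grad_ref ** 2) + eps)
    residual = grad_imp - g * grad_ref
    return g, residual

# toy usage: attenuation mimics blur, added noise mimics spurious detail
rng = np.random.default_rng(2)
ref = rng.standard_normal((64, 64))
gx_ref = np.diff(ref, axis=1)                          # horizontal gradient
gx_imp = 0.6 * gx_ref + 0.1 * rng.standard_normal(gx_ref.shape)
g, res = decompose_gradient(gx_ref, gx_imp)
# g recovers the attenuation factor; res carries the spurious detail
```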
Processing techniques development
There are no author-identified significant results in this report
Predicting blur visual discomfort for natural scenes by the loss of positional information
The perception of blur due to accommodation failures, insufficient optical correction or imperfect image reproduction is a common source of visual discomfort, usually attributed to an anomalous and annoying distribution of the image spectrum in the spatial frequency domain. In the present paper, this discomfort is related to a loss of localization accuracy of the observed patterns. It is assumed, as a starting perceptual principle, that the visual system is optimally adapted to pattern localization in a natural environment. Thus, since the best possible accuracy of image pattern localization is indicated by the positional Fisher information, it is argued that blur discomfort is strictly related to a loss of this information. Following this concept, a receptive-field functional model is adopted to predict the visual discomfort. It is a complex-valued operator, orientation-selective both in the space domain and in the spatial frequency domain. Starting from the case of Gaussian blur, the analysis is extended to a generic type of blur by applying a positional Fisher information equivalence criterion. Out-of-focus blur and astigmatic blur are presented as significant examples. The validity of the proposed model is verified by comparing its predictions with subjective ratings. The model fits linearly with the experiments reported in independent databases, based on different protocols and settings.
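The link between blur and positional Fisher information can be illustrated in one dimension: for a pattern observed in white noise, the localization Fisher information is proportional to the derivative energy of the pattern, which blur attenuates. A minimal sketch (the 1-D setting, test pattern and blur strengths are illustrative assumptions, not the paper's receptive-field model):

```python
import numpy as np

def positional_fisher_info(signal):
    """Positional Fisher information of a 1-D pattern in white noise:
    up to a noise-variance constant, I is the derivative energy
    sum |s'(x)|^2, equivalently the w^2-weighted spectral energy."""
    return np.sum(np.diff(signal) ** 2)

def gaussian_blur(signal, sigma):
    """Blur via frequency-domain multiplication by a Gaussian."""
    n = len(signal)
    w = np.fft.fftfreq(n) * 2 * np.pi          # rad/sample
    H = np.exp(-0.5 * (sigma * w) ** 2)
    return np.fft.ifft(np.fft.fft(signal) * H).real

# stronger blur -> larger loss of positional information
x = np.linspace(-1, 1, 512)
pattern = np.exp(-x ** 2 / 0.02)               # a sharp bump
I0 = positional_fisher_info(pattern)
I2 = positional_fisher_info(gaussian_blur(pattern, 2.0))
I8 = positional_fisher_info(gaussian_blur(pattern, 8.0))
```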