On color image quality assessment using natural image statistics
Color distortion can cause significant damage to perceived visual quality;
however, most existing reduced-reference quality measures are
designed for grayscale images. In this paper, we consider a basic extension of
well-known image-statistics-based quality assessment measures to color images.
To evaluate the impact of color information on the measures'
efficiency, two color spaces are investigated: RGB and CIELAB. The results of an
extensive evaluation on the TID2013 benchmark demonstrate that a significant
improvement can be achieved for a large number of distortion types when the
CIELAB color representation is used.
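As an illustration of the color-space choice the abstract compares, a pixelwise sRGB-to-CIELAB conversion can be sketched as follows. This uses the standard D65 conversion formulas, not code from the paper:

```python
def srgb_to_lab(r, g, b):
    """Convert one 8-bit sRGB pixel to CIELAB (D65 white point)."""
    # Undo the sRGB gamma to get linear light
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = lin(r), lin(g), lin(b)
    # Linear sRGB -> CIE XYZ (D65 primaries)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalize by the D65 reference white
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(xn), f(yn), f(zn)
    L = 116 * fy - 16          # lightness
    a = 500 * (fx - fy)        # green-red opponent axis
    b_out = 200 * (fy - fz)    # blue-yellow opponent axis
    return L, a, b_out
```

Perceptual measures computed on the L, a, b channels weight errors more uniformly with respect to human vision than the same measures on raw RGB, which is consistent with the improvement the abstract reports for CIELAB.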
Virtually Lossless Compression of Astrophysical Images
We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a considerable bandwidth reduction to be achieved. Unlike strictly lossless techniques, with which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on the user's requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is estimated beforehand to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Finally, the rationale of virtually lossless compression, that is, noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established in the astronomy community.
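The near-lossless principle described above, bounding the maximum absolute reconstruction error instead of the mean square error, can be illustrated with a minimal 1-D DPCM sketch. This is a hypothetical toy, not the authors' codec (which uses 2-D causal spatial prediction):

```python
def dpcm_near_lossless(samples, delta):
    """Near-lossless DPCM on a 1-D signal: the previous reconstructed sample
    is the predictor, and the prediction residual is uniformly quantized so
    that the reconstruction error never exceeds `delta` (delta=0 is lossless)."""
    step = 2 * delta + 1
    prev = 0                       # predictor seed, assumed known to the decoder
    codes, recon = [], []
    for s in samples:
        e = s - prev
        # symmetric quantization guarantees |e - q*step| <= delta
        q = (e + delta) // step if e >= 0 else -((-e + delta) // step)
        codes.append(q)            # q would then be entropy-coded
        prev = prev + q * step     # decoder-side reconstruction
        recon.append(prev)
    return codes, recon
```

Driving `delta` from an estimate of the background noise, as the abstract describes, keeps the quantization error below the noise floor, which is what makes the scheme "virtually lossless" for scientific purposes.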
Image Coding based Orthogonal Polynomials Multiresolution Analysis with Joint Probability Context Modeling and Modified Golomb-Rice Entropy Coding
This work proposes a JPEG2000-like compression technique based on multiresolution analysis of orthogonal polynomials transformation (OPT) coefficients, with bit modeling for Golomb-Rice entropy coding. Initially, the image under analysis is divided into blocks and the OPT is applied to each block. The transformed coefficients are then arranged in a subband-like (multiresolution) structure, and scalar quantization is applied to reduce their precision. The quantized coefficients are bit-modeled in the bit plane using a joint probability statistical model, and the significant bits in the bit plane are chosen. For the selected bits, a geometrically distributed set of contexts is modeled and encoded with modified Golomb-Rice coding to produce the compressed data. The decompression procedure is simply the reverse of the compression procedure. Experiments and analysis demonstrate the efficiency of the proposed compression scheme in terms of compression ratio and Peak Signal-to-Noise Ratio (PSNR), and the results are encouraging.
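Golomb-Rice coding itself is standard and well suited to the geometrically distributed symbols the abstract mentions. A minimal encoder/decoder for nonnegative integers with Rice parameter k (an illustrative sketch, not the modified variant the paper proposes) looks like:

```python
def rice_encode(values, k):
    """Golomb-Rice code each nonnegative integer as a unary quotient
    (q ones then a terminating zero) followed by the k low-order remainder bits."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                              # unary part
        bits.extend((r >> i) & 1 for i in reversed(range(k)))   # k remainder bits
    return bits

def rice_decode(bits, k, count):
    """Inverse of rice_encode: read `count` values back from the bit list."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:        # count the unary ones
            q, i = q + 1, i + 1
        i += 1                     # skip the terminating zero
        r = 0
        for _ in range(k):
            r = (r << 1) | bits[i]
            i += 1
        out.append((q << k) | r)
    return out
```

The code is optimal when symbol magnitudes follow a geometric distribution with a rate matched to k, which is why context modeling is used to pick k per context.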
Hybrid Region-based Image Compression Scheme for Mammograms and Ultrasound Images
The need for transmission and archiving of mammograms and ultrasound images has
dramatically increased in tele-healthcare applications. Such images require a large
amount of storage space, which affects transmission speed. Therefore, an effective
compression scheme is essential. Compression of these images, in general, faces the
great challenge of compromising between a higher compression ratio and the relevant
diagnostic information. Out of the many compression schemes studied, lossless
JPEG-LS and lossy SPIHT are found to be the most efficient ones. JPEG-LS and SPIHT were
chosen based on a comprehensive experimental study carried out on a large number of
mammograms and ultrasound images of different sizes and textures. The lossless
schemes are evaluated based on compression ratio and compression speed. The
distortion in image quality introduced by the lossy methods is evaluated based
on objective criteria using the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio
(PSNR). It is found that lossless compression can achieve only a modest compression ratio
of 2:1 to 4:1. Lossy compression schemes can achieve higher compression ratios than
lossless ones, but at the price of image quality, which may impede diagnostic
conclusions. In this work, a new compression approach called the Hybrid Region-based Image
Compression Scheme (HYRICS) is proposed for mammograms and
ultrasound images to achieve higher compression ratios without compromising
diagnostic quality. In HYRICS, a modification of JPEG-LS is introduced to encode
the arbitrarily shaped disease-affected regions. Shape-adaptive SPIHT is then applied
to the remaining non-region-of-interest areas. The results clearly show that this hybrid
strategy can yield high compression ratios with perfect reconstruction of the diagnostically
relevant regions, achieving high-speed transmission and lower storage requirements. For
the sample images considered in our experiments, the compression ratio increases
approximately tenfold, although this increase depends on the size of the region
of interest chosen. It is also found that pre-processing (contrast stretching) of the
region of interest improves compression ratios on mammograms but not on ultrasound
images.
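The region-based idea, exact coding of the diagnostically relevant region and coarser coding elsewhere, can be sketched with generic tools. Here zlib and plain quantization stand in for the JPEG-LS and shape-adaptive SPIHT stages; the function names and the `bg_step` parameter are hypothetical:

```python
import zlib

def hybrid_compress(image, roi_mask, bg_step=16):
    """Region-based split: ROI samples are kept exactly and entropy-coded
    losslessly; the non-ROI background is coarsely quantized first, trading
    its fidelity for a smaller payload. `image` is a flat list of 8-bit
    samples and `roi_mask` a parallel list of 0/1 flags."""
    roi = bytes(p for p, m in zip(image, roi_mask) if m)
    bg = bytes((p // bg_step) * bg_step for p, m in zip(image, roi_mask) if not m)
    return zlib.compress(roi, 9), zlib.compress(bg, 9)

def hybrid_decompress(roi_blob, bg_blob, roi_mask):
    """Reassemble the image: ROI is bit-exact, background carries at most
    bg_step - 1 of quantization error."""
    roi = iter(zlib.decompress(roi_blob))
    bg = iter(zlib.decompress(bg_blob))
    return [next(roi) if m else next(bg) for m in roi_mask]
```

The overall ratio is dominated by how aggressively the background is coded, which mirrors the abstract's observation that the gain depends on the size of the chosen region of interest.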
A bag of words description scheme for image quality assessment
Every day, millions of images are obtained, processed, compressed, stored, transmitted and reproduced.
All of these operations can introduce distortions that degrade their quality. The quality of
these images can be measured subjectively, but this has the disadvantage of requiring
a considerable number of tests with individuals in order to perform a statistical analysis of
an image's perceptual quality. Several objective metrics have been developed that try to model
the human perception of quality. However, in most applications the representation of human
quality perception given by these metrics falls short of what is desired. Therefore,
this work proposes the use of machine learning models that allow for a better approximation.
In this work, definitions of image and quality are given and some of the difficulties of the study
of image quality are discussed. Three metrics are then explained: one uses the
image's original quality as a reference (SSIM), while the other two are no-reference metrics (BRISQUE
and QAC). A comparison between them shows a large discrepancy in values between the two kinds
of metrics.
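For reference, the SSIM index mentioned above compares local means, variances and covariance of the two images. A single-window version can be written as follows; the real metric averages this statistic over sliding local windows, so one global window is only a shortened sketch:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM between two grayscale images given as float arrays.
    Stabilizing constants c1 and c2 follow the usual (0.01*L)^2, (0.03*L)^2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()            # luminance terms
    vx, vy = x.var(), y.var()              # contrast terms
    cxy = ((x - mx) * (y - my)).mean()     # structure (covariance) term
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1, and any distortion that shifts the mean, variance, or covariance pulls the score below 1, which is the behavior the dissertation's local descriptors build on.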
The database used for the tests is TID2013. This database was chosen for its size
and for the large number of distortions it considers. A study of each type of distortion
in this database is made.
Furthermore, some machine learning concepts are introduced along with algorithms relevant
to the context of this dissertation, notably K-means, KNN and SVM. Descriptor aggregation
algorithms such as "bag of words" and "Fisher vectors" are also mentioned.
This dissertation studies a new model that combines machine learning and a quality metric for
quality estimation. The model is based on the division of images into cells, in which a specific
metric is computed. With this division, it is possible to obtain local quality descriptors that are
then aggregated using "bag of words". An SVM with an RBF kernel is trained and tested on the same
database, and the results of the model are evaluated using cross-validation.
The results are analysed using the Pearson, Spearman and Kendall correlations and the RMSE to
evaluate how well the model matches the subjective results. The
learning for quality evaluation.No nosso dia-a-dia as imagens sĂŁo obtidas, processadas, comprimidas, guardadas, transmitidas
e reproduzidas. Em qualquer destas operações podem ocorrer distorções que prejudicam a sua
qualidade. A qualidade destas imagens pode ser medida de forma subjectiva, o que tem a
desvantagem de serem necessários vários testes, a um nĂşmero considerável de indivĂduos para
ser feita uma análise estatĂstica da qualidade perceptual de uma imagem. Foram desenvolvidas
várias métricas objectivas, que de alguma forma tentam modelar a percepção humana de
qualidade. Todavia, em muitas aplicações a representação de percepção de qualidade humana
dada por estas métricas fica aquém do desejável, razão porque se propõe neste trabalho usar
modelos de reconhecimento de padrões que permitam uma maior aproximação.
Neste trabalho, são dadas definições para imagem e qualidade e algumas das dificuldades do
estudo da qualidade de imagem são referidas. É referida a importância da qualidade de imagem
como ramo de estudo, e são estudadas diversas métricas de qualidade.
São explicadas três métricas, uma delas que usa a qualidade original como referência (SSIM) e
duas métricas sem referência (BRISQUE e QAC). Uma comparação é feita entre elas, mostrando-
– se uma grande discrepância de valores entre os dois tipos de métricas.
Para os testes feitos Ă© usada a base de dados TID2013, que Ă© muitas vezes considerada para
estudos de qualidade de métricas devido à sua dimensão e ao facto de considerar um grande
número de distorções. Neste trabalho também se fez um estudo dos tipos de distorção incluidos
nesta base de dados e como Ă© que eles sĂŁo simulados.
São introduzidos também alguns conceitos teóricos de reconhecimento de padrões e alguns
algoritmos relevantes no contexto da dissertação, são descritos como o K-means, KNN e as
SVMs. Algoritmos de agregação de descritores como o “bag of words” e o “fisher-vectors”
também são referidos.
Esta dissertação adiciona métodos de reconhecimento de padrões a métricas objectivas de qua–
lidade de imagem. Uma nova técnica é proposta, baseada na divisão de imagens em células, nas
quais uma métrica será calculada. Esta divisão permite obter descritores locais de qualidade
que serão agregados usando “bag of words”. Uma SVM com kernel RBF é treinada e testada na
mesma base de dados e os resultados do modelo sĂŁo mostrados usando cross-validation.
Os resultados são analisados usando as correlações de Pearson, Spearman e Kendall e o RMSE
que permitem avaliar a proximidade entre a métrica desenvolvida e os resultados subjectivos.
Este modelo melhora os resultados obtidos com a métrica usada e demonstra uma nova forma
de aplicar modelos de reconhecimento de padrões ao estudo de avaliação de qualidade
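The cell-division and bag-of-words pipeline described in this abstract can be sketched end to end. In this sketch, the local standard deviation stands in for the unspecified per-cell quality metric, a toy k-means builds the vocabulary, and the final SVM regression step is omitted; all names are hypothetical:

```python
import numpy as np

def cell_descriptors(img, cell=8):
    """One local descriptor per non-overlapping cell (here: the cell's std,
    a stand-in for the per-cell quality metric used in the dissertation)."""
    h, w = img.shape
    return np.array([img[i:i + cell, j:j + cell].std()
                     for i in range(0, h - cell + 1, cell)
                     for j in range(0, w - cell + 1, cell)]).reshape(-1, 1)

def kmeans_vocab(descs, k=4, iters=20, seed=0):
    """Tiny k-means to build the visual vocabulary from training descriptors."""
    rng = np.random.default_rng(seed)
    centers = descs[rng.choice(len(descs), k, replace=False)]
    for _ in range(iters):
        labels = np.abs(descs - centers.T).argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = descs[labels == c].mean(axis=0)
    return centers

def bag_of_words(descs, centers):
    """Normalized histogram of vocabulary assignments: the image-level
    feature vector that would then be fed to the SVM regressor."""
    labels = np.abs(descs - centers.T).argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```

The aggregation step is what turns a variable number of local quality measurements into a fixed-length vector, which is the property that makes SVM training straightforward.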