Identification and Lossy Reconstruction in Noisy Databases
A high-dimensional database system is studied where the noisy versions of the underlying feature vectors are observed in both the enrollment and query phases. The noisy observations are compressed before being stored in the database, and the user wishes to both identify the correct entry corresponding to the noisy query vector and reconstruct the original feature vector within a desired distortion level. A fundamental capacity-storage-distortion tradeoff is identified for this system in the form of single-letter information theoretic expressions. The relation of this problem to the classical Wyner-Ziv rate-distortion problem is shown, where the noisy query vector acts as the correlated side information available only in the lossy reconstruction of the feature vector. © 1963-2012 IEEE
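For context, the classical Wyner-Ziv rate-distortion function referenced above, for a source $X$, decoder-only side information $Y$, and distortion measure $d$, is

$$R_{WZ}(D) = \min_{p(u \mid x),\; g} \big[ I(X;U) - I(Y;U) \big] \quad \text{s.t.} \quad \mathbb{E}\, d\big(X, g(U,Y)\big) \le D,$$

where the auxiliary variable $U$ satisfies the Markov chain $U - X - Y$ and $g$ is the decoder's reconstruction function; under that Markov chain the objective equals $I(X;U \mid Y)$. This is the standard form of the result, stated here only to anchor the analogy drawn in the abstract.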
Learning to compress and search visual data in large-scale systems
The problem of high-dimensional and large-scale representation of visual data
is addressed from an unsupervised learning perspective. The emphasis is put on
discrete representations, where the description length can be measured in bits
and hence the model capacity can be controlled. The algorithmic infrastructure
is developed based on the synthesis and analysis prior models whose
rate-distortion properties, as well as capacity vs. sample complexity
trade-offs, are carefully optimized. These models are then extended to
multiple layers, namely the RRQ and the ML-STC frameworks, where the latter is
further evolved into a powerful deep neural network architecture with fast and
sample-efficient training and discrete representations. For the developed
algorithms, three important applications are presented. First, the problem of
large-scale similarity search in retrieval systems is addressed, where a
double-stage solution is proposed, leading to faster query times and reduced
database storage. Second, the problem of learned image compression is targeted,
where the proposed models can capture more redundancies from the training
images than the conventional compression codecs. Finally, the proposed
algorithms are used to solve ill-posed inverse problems. In particular, the
problems of image denoising and compressive sensing are addressed with
promising results. Comment: PhD thesis dissertation.
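As background for the multi-layer models mentioned above, the general family of multi-layer residual quantization (to which frameworks such as RRQ belong) can be sketched as follows. This is an illustrative sketch of the generic technique only, not the thesis's actual algorithms; all function names and parameters are assumptions.

```python
# Generic multi-layer residual quantization: each layer fits a K-means
# codebook on the residual left by the previous layer, so the discrete
# code costs n_layers * log2(n_codewords) bits per vector.
import numpy as np
from sklearn.cluster import KMeans

def train_residual_quantizer(X, n_layers=3, n_codewords=16, seed=0):
    """Fit one K-means codebook per layer on the residuals of the previous layer."""
    codebooks = []
    residual = X.copy()
    for _ in range(n_layers):
        km = KMeans(n_clusters=n_codewords, n_init=10, random_state=seed).fit(residual)
        codebooks.append(km.cluster_centers_)
        # subtract each point's nearest codeword to form the next layer's residual
        residual = residual - km.cluster_centers_[km.labels_]
    return codebooks

def encode(x, codebooks):
    """Greedy encoding: at each layer pick the codeword closest to the residual."""
    codes, residual = [], x.copy()
    for C in codebooks:
        idx = int(np.argmin(np.linalg.norm(C - residual, axis=1)))
        codes.append(idx)
        residual = residual - C[idx]
    return codes

def decode(codes, codebooks):
    """Reconstruction is the sum of the selected codewords across layers."""
    return sum(C[i] for i, C in zip(codes, codebooks))
```

Each additional layer refines the reconstruction (on the training set the average squared error is non-increasing across layers), which is the rate-distortion lever such discrete representations expose.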
A bag of words description scheme for image quality assessment
Every day millions of images are obtained, processed, compressed, saved, transmitted and reproduced.
All these operations can cause distortions that affect their quality. The quality of
these images can be measured subjectively. However, this has the disadvantage of requiring
a considerable number of tests with individuals in order to obtain a statistical analysis of
an image’s perceptual quality. Several objective metrics have been developed that try to model
the human perception of quality. However, in most applications the representation of human
quality perception given by these metrics falls short of what is desired. Therefore,
this work proposes the usage of machine learning models that allow for a better approximation.
In this work, definitions for image and quality are given and some of the difficulties of the study
of image quality are mentioned. Moreover, three metrics are initially explained. One uses the
image’s original quality as a reference (SSIM), while the other two are no-reference metrics
(BRISQUE and QAC). A comparison between them shows a large discrepancy of values between the
two kinds of metrics.
The database used for the tests is TID2013. This database was chosen due to its size
and the large number of distortion types it considers. A study of each type of distortion
in this database is made.
Furthermore, some machine learning concepts are introduced, along with algorithms relevant
in the context of this dissertation, notably K-means, KNN and SVM. Descriptor aggregation
algorithms such as “bag of words” and “Fisher vectors” are also mentioned.
This dissertation studies a new model that combines machine learning and a quality metric for
quality estimation. The model is based on dividing images into cells, in each of which a specific
metric is computed. This division makes it possible to obtain local quality descriptors that are
then aggregated using “bag of words”. An SVM with an RBF kernel is trained and tested on the same
database, and the results of the model are evaluated using cross-validation.
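The described pipeline can be sketched end to end in Python (scikit-learn/SciPy) on synthetic data. The per-cell descriptor, the codebook size, and the use of support-vector regression (SVR, since quality scores are continuous) are illustrative assumptions, not the dissertation's exact choices; there, the local descriptor would come from a quality metric such as SSIM computed on each cell.

```python
# Sketch: per-cell descriptors -> bag-of-words histogram -> RBF-kernel SVM,
# evaluated with cross-validation and rank correlations, on toy data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict
from scipy.stats import pearsonr, spearmanr, kendalltau

def cell_descriptors(image, cell=8):
    """Split an image into cells and compute a per-cell descriptor.
    (mean, std) of each cell stands in for a local quality metric."""
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            c = image[i:i + cell, j:j + cell]
            feats.append([c.mean(), c.std()])
    return np.array(feats)

def bow_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-count histogram."""
    labels = codebook.predict(descriptors)
    hist = np.bincount(labels, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# toy data: noise images whose "subjective score" is their noise level
rng = np.random.default_rng(0)
images = [rng.normal(scale=s, size=(32, 32)) for s in rng.uniform(0.5, 2.0, 60)]
mos = np.array([img.std() for img in images])  # stand-in for subjective scores

all_desc = np.vstack([cell_descriptors(im) for im in images])
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_desc)
X = np.array([bow_histogram(cell_descriptors(im), codebook) for im in images])

# RBF-kernel SVM evaluated with cross-validation, as in the text
pred = cross_val_predict(SVR(kernel="rbf", C=10.0), X, mos, cv=5)
rmse = np.sqrt(np.mean((pred - mos) ** 2))
print(pearsonr(mos, pred)[0], spearmanr(mos, pred)[0], kendalltau(mos, pred)[0], rmse)
```

The histogram normalization makes the representation independent of image size, which is what lets a fixed-length SVM input summarize a variable number of cells.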
The results are analysed using the Pearson, Spearman and Kendall correlations and the RMSE,
which evaluate how closely the model matches the subjective results. The
model improves on the results of the underlying metric and shows a new path for applying machine
learning for quality evaluation.