Effects of Image Degradations to CNN-based Image Classification
Just like many other topics in computer vision, image classification has
achieved significant progress recently by using deep-learning neural networks,
especially convolutional neural networks (CNNs). Most existing works focus on
classifying clear natural images, as evidenced by widely used image databases
such as Caltech-256, PASCAL VOC, and ImageNet. However, in
many real applications, the acquired images may contain certain degradations
that lead to various kinds of blurring, noise, and distortions. One important
and interesting problem is the effect of such degradations on the performance
of CNN-based image classification. More specifically, we wonder whether
image-classification performance drops with each kind of degradation, whether
this drop can be avoided by including degraded images into training, and
whether existing computer vision algorithms that attempt to remove such
degradations can help improve the image-classification performance. In this
paper, we empirically study this problem for four kinds of degraded images --
hazy images, underwater images, motion-blurred images and fish-eye images. For
this study, we synthesize a large number of such degraded images by applying
respective physical models to the clear natural images and collect a new hazy
image dataset from the Internet. We expect this work to draw more interest
from the community to the study of classifying degraded images.
Classifying degraded images over various levels of degradation
Classifying degraded images with various levels of degradation is very
important in practical applications. This paper proposes a convolutional
neural network that classifies degraded images by using a restoration network
and ensemble learning. The results demonstrate that the proposed network
classifies degraded images well over various levels of degradation. This paper
also reveals how the image quality of the training data for a classification
network affects the classification performance on degraded images.

Comment: Accepted by the 27th IEEE International Conference on Image
Processing (ICIP 2020).
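The restoration-plus-ensemble idea can be sketched as follows. This is not the paper's architecture: the "restoration network" is replaced by simple contrast stretching and the classifiers by random linear models, purely to show how predictions on the raw and restored inputs can be averaged into one ensemble output.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def restore(degraded):
    """Stand-in for a learned restoration network: simple contrast
    stretching. A real system would use a trained CNN."""
    lo, hi = degraded.min(), degraded.max()
    return (degraded - lo) / (hi - lo + 1e-8)

def classify(image, weights):
    """Stand-in linear classifier producing class probabilities."""
    return softmax(weights @ image.ravel())

def ensemble_predict(degraded, weights_raw, weights_restored):
    """Average the probabilities of one classifier that sees the raw
    degraded image and one that sees the restored image."""
    p_raw = classify(degraded, weights_raw)
    p_res = classify(restore(degraded), weights_restored)
    return (p_raw + p_res) / 2.0

rng = np.random.default_rng(0)
img = rng.random((4, 4))          # pretend degraded 4x4 image
w_raw = rng.standard_normal((3, 16))
w_res = rng.standard_normal((3, 16))
probs = ensemble_predict(img, w_raw, w_res)
```

Averaging probabilities keeps the output a valid distribution, which is one common way to combine ensemble members.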
AQuA: Analytical Quality Assessment for Optimizing Video Analytics Systems
Millions of cameras at the edge are being deployed to power a variety of
different deep learning applications. However, the frames captured by these
cameras are not always pristine - they can be distorted by lighting issues,
sensor noise, compression artifacts, etc. Such distortions not only degrade
visual quality but also impact the accuracy of deep learning applications that
process such video streams. In this work, we introduce AQuA to protect
application
accuracy against such distorted frames by scoring the level of distortion in
the frames. It accounts for the analytical quality of frames rather than their
visual quality by learning a novel metric, the classifier opinion score, and uses
a lightweight, CNN-based, object-independent feature extractor. AQuA accurately
scores distortion levels of frames and generalizes to multiple different deep
learning applications. When used to filter poor-quality frames at the edge, it
reduces high-confidence errors for analytics applications by 17%. Through
filtering, and due to its low overhead (14ms), AQuA can also reduce computation
time and average bandwidth usage by 25%.
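The filtering step described above can be sketched in a few lines. The scoring function here is a hypothetical stand-in for AQuA's learned classifier opinion score (the real system uses a lightweight CNN feature extractor); the threshold and the frame statistics are illustrative only.

```python
def distortion_score(frame_variance, noise_level):
    """Hypothetical stand-in for a learned distortion score:
    higher means more distorted (noisier relative to content)."""
    return noise_level / (frame_variance + 1e-8)

def filter_frames(frames, threshold=0.5):
    """Drop frames whose distortion score exceeds the threshold so the
    downstream analytics model only processes usable frames."""
    return [f for f in frames
            if distortion_score(f["var"], f["noise"]) <= threshold]

frames = [
    {"id": 0, "var": 1.0, "noise": 0.2},  # score 0.2 -> kept
    {"id": 1, "var": 0.5, "noise": 0.4},  # score 0.8 -> dropped
    {"id": 2, "var": 2.0, "noise": 0.1},  # score 0.05 -> kept
]
kept = filter_frames(frames)
```

Scoring each frame once and discarding the bad ones up front is what lets such a filter save both computation time and bandwidth downstream.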