Understanding How Image Quality Affects Deep Neural Networks
Image quality is an important practical challenge that is often overlooked in
the design of machine vision systems. Commonly, machine vision systems are
trained and tested on high-quality image datasets, yet in practical
applications the input images cannot be assumed to be of high quality.
Recently, deep neural networks have obtained state-of-the-art performance on
many machine vision tasks. In this paper we provide an evaluation of four
state-of-the-art deep neural network models for image classification under
quality distortions. We consider five types of quality distortions: blur,
noise, contrast, JPEG, and JPEG2000 compression. We show that the existing
networks are susceptible to these quality distortions, particularly to blur and
noise. These results enable future work in developing deep neural networks that
are more invariant to quality distortions.
Comment: Final version will appear in IEEE Xplore in the Proceedings of the Conference on the Quality of Multimedia Experience (QoMEX), June 6-8, 201
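Three of the five distortion families the paper evaluates can be sketched directly on a NumPy image array (a minimal illustration, assuming grayscale images with values in [0, 255]; JPEG and JPEG2000 compression require an actual codec such as Pillow and are omitted, and the function names here are ours, not the paper's):

```python
import numpy as np

def blur(img, k=5):
    """Box blur: a crude stand-in for the Gaussian blur studied in the paper."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def add_noise(img, sigma=10.0, rng=None):
    """Additive Gaussian noise, clipped back to the valid intensity range."""
    rng = rng or np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def reduce_contrast(img, factor=0.5):
    """Blend each pixel toward the mean intensity to lower contrast."""
    return img.mean() + factor * (img - img.mean())

img = np.random.default_rng(1).uniform(0, 255, (32, 32))
distorted = [blur(img), add_noise(img), reduce_contrast(img)]
```

Evaluating a classifier on such distorted copies of a test set, at several severity levels, reproduces the shape of the experiment the abstract describes.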
Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes
The success of deep learning in computer vision is based on availability of
large annotated datasets. To lower the need for hand labeled images, virtually
rendered 3D worlds have recently gained popularity. Creating realistic 3D
content is challenging on its own and requires significant human effort. In
this work, we propose an alternative paradigm which combines real and synthetic
data for learning semantic instance segmentation and object detection models.
Exploiting the fact that not all aspects of the scene are equally important for
this task, we propose to augment real-world imagery with virtual objects of the
target category. Capturing real-world images at large scale is easy and cheap,
and directly provides real background appearances without the need for creating
complex 3D models of the environment. We present an efficient procedure to
augment real images with virtual objects. This allows us to create realistic
composite images which exhibit both realistic background appearance and a large
number of complex object arrangements. In contrast to modeling complete 3D
environments, our augmentation approach requires only a few user interactions
in combination with 3D shapes of the target object. Through extensive
experimentation, we determine the set of parameters that produces augmented
data which maximally enhances the performance of instance segmentation
models. Further, we demonstrate the utility of our approach on training
standard deep models for semantic instance segmentation and object detection of
cars in outdoor driving scenes. We test the models trained on our augmented
data on the KITTI 2015 dataset, which we have annotated with pixel-accurate
ground truth, and on the Cityscapes dataset. Our experiments demonstrate that
models trained on augmented imagery generalize better than those trained on
synthetic data or models trained on a limited amount of annotated real data.
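The core operation of this augmentation paradigm, pasting a rendered virtual object over a real photograph, amounts to alpha compositing (a simplified sketch, not the authors' pipeline; the function and variable names are ours):

```python
import numpy as np

def composite(background, foreground, alpha, top, left):
    """Paste a rendered object, with a soft alpha matte, onto a real image.

    background: (H, W, 3) real photograph, values in [0, 1]
    foreground: (h, w, 3) rendered crop of the virtual object
    alpha:      (h, w) matte in [0, 1]; 1 means a fully opaque object pixel
    """
    out = background.copy()
    h, w = alpha.shape
    region = out[top:top + h, left:left + w]
    a = alpha[..., None]  # broadcast the matte over the color channels
    out[top:top + h, left:left + w] = a * foreground + (1 - a) * region
    return out

bg = np.zeros((64, 64, 3))        # stand-in for a real street image
fg = np.ones((16, 16, 3))         # stand-in for a rendered car crop
matte = np.ones((16, 16))
img = composite(bg, fg, matte, 10, 10)
```

Because only the inserted object is synthetic, the composite keeps the real background appearance, which is the property the abstract credits for the improved generalization.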
How Image Degradations Affect Deep CNN-based Face Recognition?
Face recognition approaches that are based on deep convolutional neural
networks (CNN) have been dominating the field. The performance improvements
they have provided on the so-called in-the-wild datasets are significant;
however, their performance under image quality degradations has not yet been
assessed. This is particularly important, since in real-world face
recognition applications, images may contain various kinds of degradations due
to motion blur, noise, compression artifacts, color distortions, and occlusion.
In this work, we have addressed this problem and analyzed the influence of
these image degradations on the performance of deep CNN-based face recognition
approaches using the standard LFW closed-set identification protocol. We have
evaluated three popular deep CNN models, namely, the AlexNet, VGG-Face, and
GoogLeNet. Results have indicated that blur, noise, and occlusion cause a
significant decrease in performance, while the deep CNN models are found to be
robust to color distortions and changes in color balance.
Comment: 8 pages, 3 figures
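Closed-set identification, as in the LFW protocol the paper uses, can be sketched as nearest-neighbor matching of probe embeddings against a gallery (a minimal illustration with cosine similarity; the random vectors below are stand-ins for CNN face embeddings, and the names are ours):

```python
import numpy as np

def rank1_accuracy(gallery, probes, probe_labels):
    """Closed-set identification: match each probe to its nearest gallery entry.

    gallery: (N, d) one embedding per enrolled identity (row i = identity i)
    probes:  (M, d) probe embeddings
    probe_labels: (M,) true gallery index of each probe
    """
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probes / np.linalg.norm(probes, axis=1, keepdims=True)
    sims = p @ g.T                   # (M, N) cosine similarity matrix
    predicted = sims.argmax(axis=1)  # rank-1 decision
    return (predicted == probe_labels).mean()

rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))
probes = gallery + 0.01 * rng.normal(size=(5, 128))  # lightly perturbed copies
acc = rank1_accuracy(gallery, probes, np.arange(5))
```

Degradation studies like the one described here rerun this evaluation with blurred, noisy, or occluded probe images and track how the rank-1 accuracy drops.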
A Style-Based Generator Architecture for Generative Adversarial Networks
We propose an alternative generator architecture for generative adversarial
networks, borrowing from style transfer literature. The new architecture leads
to an automatically learned, unsupervised separation of high-level attributes
(e.g., pose and identity when trained on human faces) and stochastic variation
in the generated images (e.g., freckles, hair), and it enables intuitive,
scale-specific control of the synthesis. The new generator improves the
state-of-the-art in terms of traditional distribution quality metrics, leads to
demonstrably better interpolation properties, and also better disentangles the
latent factors of variation. To quantify interpolation quality and
disentanglement, we propose two new, automated methods that are applicable to
any generator architecture. Finally, we introduce a new, highly varied and
high-quality dataset of human faces.
Comment: CVPR 2019 final version
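The scale-specific style control described above rests on modulating normalized feature maps with per-layer style parameters, in the spirit of adaptive instance normalization (a simplified sketch, not the paper's full generator; names and shapes are ours):

```python
import numpy as np

def adain(x, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization: normalize each feature map per
    instance, then scale and shift it with style parameters.

    x: (N, C, H, W) feature maps
    style_scale, style_bias: (N, C) per-channel style, typically produced by
    an affine projection of the intermediate latent code
    """
    mu = x.mean(axis=(2, 3), keepdims=True)
    sigma = x.std(axis=(2, 3), keepdims=True)
    normalized = (x - mu) / (sigma + eps)
    scale = style_scale[..., None, None]
    bias = style_bias[..., None, None]
    return scale * normalized + bias

x = np.random.default_rng(0).normal(size=(2, 4, 8, 8))
y = adain(x, np.full((2, 4), 2.0), np.full((2, 4), 1.0))
```

Applying a different style at each resolution is what gives the generator the scale-specific control the abstract refers to: coarse layers steer pose and identity, fine layers steer details such as hair.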