How is Gaze Influenced by Image Transformations? Dataset and Model
Data size is the bottleneck for developing deep saliency models, because
collecting eye-movement data is very time-consuming and expensive. Most
current studies on human attention and saliency modeling have used
high-quality, stereotyped stimuli. In the real world, however, captured
images undergo various types of transformations. Can we use these
transformations to augment existing
saliency datasets? Here, we first create a novel saliency dataset including
fixations of 10 observers over 1900 images degraded by 19 types of
transformations. Second, by analyzing eye movements, we find that observers
look at different locations over transformed versus original images. Third, we
use the new data on transformed images, termed data augmentation
transformations (DATs), to train deep saliency models. We find that
label-preserving DATs, which have negligible impact on human gaze, boost
saliency prediction, whereas DATs that severely alter human gaze degrade
performance. These label-preserving, valid augmentation transformations provide
a solution to enlarge existing saliency datasets. Finally, we introduce a novel
saliency model based on a generative adversarial network (dubbed GazeGAN). A
modified UNet is proposed as the generator of the GazeGAN, which combines
classic skip connections with a novel center-surround connection (CSC), in
order to leverage multi-level features. We also propose a histogram loss based
on the Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in
terms of luminance distribution. Extensive experiments and comparisons over 3
datasets indicate that GazeGAN achieves the best performance in terms of
popular saliency evaluation metrics, and is more robust to various
perturbations. Our code and data are available at:
https://github.com/CZHQuality/Sal-CFS-GAN
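The histogram loss above compares the luminance distributions of the predicted and ground-truth saliency maps. The paper's exact Alternative Chi-Square formulation is not reproduced here; a minimal sketch using the common symmetric chi-square histogram distance might look like:

```python
import numpy as np

def luminance_histogram(img, bins=256):
    """Normalized luminance histogram of a saliency map valued in [0, 1]."""
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    h = h.astype(np.float64)
    return h / max(h.sum(), 1e-12)

def acs_hist_loss(pred, gt, bins=256, eps=1e-12):
    # Symmetric chi-square-style histogram distance (an assumption, not
    # necessarily GazeGAN's exact ACS form):
    #   d(p, q) = 0.5 * sum_i (p_i - q_i)^2 / (p_i + q_i + eps)
    p = luminance_histogram(pred, bins)
    q = luminance_histogram(gt, bins)
    return float(0.5 * np.sum((p - q) ** 2 / (p + q + eps)))

# Toy usage on random "saliency maps"
pred = np.random.default_rng(0).random((64, 64))
gt = np.random.default_rng(1).random((64, 64))
loss = acs_hist_loss(pred, gt)
```

Because the distance is computed on histograms rather than pixels, it penalizes mismatched luminance distributions while staying insensitive to spatial rearrangement, which is why it is used as a refinement term rather than the main loss.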
Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
We present a deep neural network-based approach to image quality assessment
(IQA). The network is trained end-to-end and comprises ten convolutional layers
and five pooling layers for feature extraction, and two fully connected layers
for regression, which makes it significantly deeper than related IQA models.
Unique features of the proposed architecture are: 1) with slight
adaptations it can be used in a no-reference (NR) as well as in a
full-reference (FR) IQA setting and 2) it allows for joint learning of local
quality and local weights, i.e., relative importance of local quality to the
global quality estimate, in a unified framework. Our approach is purely
data-driven and does not rely on hand-crafted features or other types of prior
domain knowledge about the human visual system or image statistics. We evaluate
the proposed approach on the LIVE, CSIQ, and TID2013 databases as well as the
LIVE In the Wild Image Quality Challenge database, and show superior performance
to state-of-the-art NR and FR IQA methods. Finally, cross-database evaluation
shows a high ability to generalize between different databases, indicating a
high robustness of the learned features.
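The joint learning of local quality and local weights amounts to a weighted-average pooling of patchwise scores into one global estimate. A minimal sketch of that aggregation step (the shapes and the rectification of the weights are assumptions, not the paper's exact head design):

```python
import numpy as np

def global_quality(local_q, local_w, eps=1e-8):
    """Pool patchwise quality scores q_i with learned weights w_i:
        Q = sum_i(w_i * q_i) / sum_i(w_i)
    Weights are rectified so every patch keeps a tiny positive
    influence; eps also avoids division by zero."""
    w = np.maximum(np.asarray(local_w, dtype=np.float64), 0.0) + eps
    q = np.asarray(local_q, dtype=np.float64)
    return float(np.sum(w * q) / np.sum(w))

# With uniform weights the pooled score reduces to the plain mean.
score = global_quality([1.0, 2.0, 3.0], [1.0, 1.0, 1.0])
```

Letting the network predict the weights lets salient or distortion-sensitive patches dominate the global score instead of averaging all patches equally.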
Enhanced Augmented Reality Framework for Sports Entertainment Applications
Augmented Reality (AR) superimposes virtual information on real-world data, for example by overlaying useful annotations on videos or images of a scene. This dissertation presents an Enhanced AR (EAR) framework for displaying useful information on images of a sports game. The challenge in such applications is robust object detection and recognition, which becomes even harder under strong sunlight. We address the case where a captured image is degraded by strong sunlight.
The developed framework consists of an image enhancement technique that improves the accuracy of subsequent player and face detection. The enhancement is followed by player detection, face detection, player recognition, and display of players' personal information. First, an algorithm based on Multi-Scale Retinex (MSR) is proposed for image enhancement. For player and face detection, we use an adaptive boosting algorithm with Haar-like features for both feature selection and classification. The player face recognition algorithm uses adaptive boosting with linear discriminant analysis (LDA) for feature selection and a nearest-neighbor classifier for classification. The framework can be deployed in any sport where a viewer captures images, and the display of player-specific information enhances the end-user experience. Detailed experiments are performed on 2096 diverse images captured with a digital camera and a smartphone. The images contain players in different poses, expressions, and illuminations. The player face recognition module requires faces to be frontal or within about ±35° of pose variation. The work demonstrates the great potential of computer-vision-based approaches for the future development of AR applications.
COMSATS Institute of Information Technology
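The MSR enhancement step can be sketched as follows. This is the textbook single-channel MSR (the scale choices, gains, and any color handling used in the dissertation are assumptions):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in plain numpy (a stand-in for a library call)."""
    radius = max(int(3 * sigma), 1)
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")

def multi_scale_retinex(img, sigmas=(15, 80, 250), eps=1.0):
    """Textbook single-channel MSR: the average over scales of
    log(I) - log(G_sigma * I), which suppresses slowly varying
    illumination (e.g. strong sunlight) and keeps reflectance detail."""
    img = img.astype(np.float64)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img + eps) - np.log(gaussian_blur(img, s) + eps)
    return out / len(sigmas)

# A constant (shadow-free) image yields a near-zero response away from borders.
enhanced = multi_scale_retinex(np.full((32, 32), 0.5), sigmas=(2,))
```

In practice the MSR output is rescaled back to the displayable range before being fed to the Haar-cascade detectors.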
Domain Fingerprints for No-reference Image Quality Assessment
Human fingerprints are detailed and nearly unique markers of human identity.
Such a unique and stable fingerprint is also left on each acquired image. It
can reveal how an image was degraded during the image acquisition procedure and
thus is closely related to the quality of an image. In this work, we propose a
new no-reference image quality assessment (NR-IQA) approach called domain-aware
IQA (DA-IQA), which for the first time introduces the concept of domain
fingerprint to the NR-IQA field. The domain fingerprint of an image is learned
from image collections of different degradations and then used as the unique
characteristics to identify the degradation sources and assess the quality of
the image. To this end, we design a new domain-aware architecture, which
enables simultaneous determination of both the distortion sources and the
quality of an image. With the distortion in an image better characterized, the
image quality can be more accurately assessed, as verified by extensive
experiments, which show that the proposed DA-IQA performs better than almost
all the compared state-of-the-art NR-IQA methods.
Comment: accepted by IEEE Transactions on Circuits and Systems for Video Technology (TCSVT)
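The simultaneous determination of distortion source and quality suggests one shared "domain fingerprint" feature vector feeding two output heads. A toy sketch of that multi-task output structure (the feature size, linear heads, and random weights are illustrative assumptions, not the DA-IQA architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT, N_SOURCES = 128, 5  # illustrative sizes, not from the paper
W_src = 0.01 * rng.normal(size=(FEAT, N_SOURCES))  # degradation-source head
W_q = 0.01 * rng.normal(size=FEAT)                 # quality-regression head

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def da_iqa_heads(features):
    """Two heads on one shared fingerprint feature vector: a classifier
    over degradation sources and a scalar quality regressor. In DA-IQA
    the backbone is a deep CNN; the linear heads here only illustrate
    the multi-task shape of the output."""
    source_probs = softmax(features @ W_src)
    quality = float(features @ W_q)
    return source_probs, quality

probs, score = da_iqa_heads(np.random.default_rng(1).normal(size=FEAT))
```

Sharing the fingerprint features between the two heads is what lets the identified distortion type inform the quality estimate.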
Visual Speech-Aware Perceptual 3D Facial Expression Reconstruction from Videos
The recent state of the art on monocular 3D face reconstruction from image
data has made some impressive advancements, thanks to the advent of Deep
Learning. However, it has mostly focused on input coming from a single RGB
image, overlooking the following important factors: a) Nowadays, the vast
majority of facial image data of interest do not originate from single images
but rather from videos, which contain rich dynamic information. b) Furthermore,
these videos typically capture individuals in some form of verbal communication
(public talks, teleconferences, audiovisual human-computer interactions,
interviews, monologues/dialogues in movies, etc). When existing 3D face
reconstruction methods are applied in such videos, the artifacts in the
reconstruction of the shape and motion of the mouth area are often severe,
since they do not match well with the speech audio.
To overcome the aforementioned limitations, we present the first method for
visual speech-aware perceptual reconstruction of 3D mouth expressions. We do
this by proposing a "lipread" loss, which guides the fitting process so that
the elicited perception from the 3D reconstructed talking head resembles that
of the original video footage. We demonstrate that, interestingly, the lipread
loss is better suited for 3D reconstruction of mouth movements compared to
traditional landmark losses, and even direct 3D supervision. Furthermore, the
devised method does not rely on any text transcriptions or corresponding audio,
rendering it ideal for training in unlabeled datasets. We verify the efficiency
of our method through exhaustive objective evaluations on three large-scale
datasets, as well as a subjective evaluation via two web-based user studies.
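Conceptually, the lipread loss compares what a frozen lip-reading network perceives in the rendered 3D head versus in the original footage. A minimal sketch, assuming per-frame feature vectors from such a network and a cosine distance (the paper's actual network and distance are not specified here):

```python
import numpy as np

def lipread_loss(feat_render, feat_video, eps=1e-12):
    """Perceptual mouth-movement loss: per-frame cosine distance between
    features extracted from the rendered talking head and from the real
    video, averaged over the sequence. The (frames x dim) feature arrays
    stand in for the outputs of a frozen, pretrained lip-reading network."""
    a = feat_render / (np.linalg.norm(feat_render, axis=-1, keepdims=True) + eps)
    b = feat_video / (np.linalg.norm(feat_video, axis=-1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(a * b, axis=-1)))

feats = np.random.default_rng(0).normal(size=(10, 64))   # rendered-head features
other = np.random.default_rng(1).normal(size=(10, 64))   # original-video features
```

Because the loss is defined on features of the rendered output, it must be backpropagated through a differentiable renderer during fitting, and it needs no transcriptions or audio, matching the unlabeled-training claim above.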
- …