UG^2: a Video Benchmark for Assessing the Impact of Image Restoration and Enhancement on Automatic Visual Recognition
Advances in image restoration and enhancement techniques have led to
discussion about how such algorithms can be applied as a pre-processing step to
improve automatic visual recognition. In principle, techniques like deblurring
and super-resolution should yield improvements by de-emphasizing noise and
increasing signal in an input image. But the historically divergent goals of
the computational photography and visual recognition communities have created a
significant need for more work in this direction. To facilitate new research,
we introduce a new benchmark dataset called UG^2, which contains three
difficult real-world scenarios: uncontrolled videos taken by UAVs and manned
gliders, as well as controlled videos taken on the ground. Over 160,000
annotated frames for hundreds of ImageNet classes are available, which are used
for baseline experiments that assess the impact of known and unknown image
artifacts and other conditions on common deep learning-based object
classification approaches. Further, current image restoration and enhancement
techniques are evaluated by determining whether or not they improve baseline
classification performance. Results show that there is plenty of room for
algorithmic innovation, making this dataset a useful tool going forward.
Comment: Supplemental material: https://goo.gl/vVM1xe, Dataset:
https://goo.gl/AjA6En, CVPR 2018 Prize Challenge: ug2challenge.or
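The evaluation described above — checking whether a restoration or enhancement step helps or hurts a baseline classifier — reduces to comparing top-1 accuracy on raw versus pre-processed frames. A minimal sketch of that comparison follows; the function names and the toy label/prediction lists are illustrative, not the benchmark's actual API:

```python
def top1_accuracy(predictions, labels):
    """Fraction of frames whose top-1 predicted class matches the label."""
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    return correct / len(labels)

def enhancement_gain(raw_preds, enhanced_preds, labels):
    """Change in top-1 accuracy caused by enhancement pre-processing.

    Positive means the restoration/enhancement step helped the classifier;
    negative means it hurt.
    """
    return top1_accuracy(enhanced_preds, labels) - top1_accuracy(raw_preds, labels)

# Toy example: labels for five frames, classifier output on the raw frames
# and on frames that were deblurred/super-resolved first.
labels = ["car", "dog", "car", "bird", "dog"]
raw_preds = ["car", "cat", "truck", "bird", "cat"]       # 2/5 correct
enhanced_preds = ["car", "dog", "truck", "bird", "cat"]  # 3/5 correct

print(round(enhancement_gain(raw_preds, enhanced_preds, labels), 2))  # 0.2
```

Note that this delta can be negative: a key observation motivating the benchmark is that enhancement tuned for human viewing does not automatically improve machine recognition.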
Stylizing Face Images via Multiple Exemplars
We address the problem of transferring the style of a headshot photo to face
images. Existing methods using a single exemplar lead to inaccurate results
when the exemplar does not contain sufficient stylized facial components for a
given photo. In this work, we propose an algorithm to stylize face images using
multiple exemplars containing different subjects in the same style. Patch
correspondences between an input photo and multiple exemplars are established
using a Markov Random Field (MRF), which enables accurate local energy transfer
via Laplacian stacks. As image patches from multiple exemplars are used, the
boundaries of facial components on the target image are inevitably
inconsistent. The artifacts are removed by a post-processing step using an
edge-preserving filter. Experimental results show that the proposed algorithm
consistently produces visually pleasing results.
Comment: In CVIU 2017. Project Page:
http://www.cs.cityu.edu.hk/~yibisong/cviu17/index.htm
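The core matching step above — finding, for each patch of the input photo, its best counterpart among patches pooled from several same-style exemplars — can be sketched with just the unary (data) cost; the paper's MRF formulation additionally imposes a pairwise smoothness term between neighboring patches, which is omitted here for brevity. All names and the toy patches are illustrative:

```python
def patch_cost(patch_a, patch_b):
    """Sum of squared differences between two equally sized patches."""
    return sum((a - b) ** 2 for a, b in zip(patch_a, patch_b))

def best_correspondences(photo_patches, exemplar_patches):
    """For each input patch, pick the closest patch over all exemplars.

    photo_patches: list of flattened patches from the input photo.
    exemplar_patches: dict mapping (exemplar_id, position) -> patch,
    pooled from multiple exemplars of the same style.
    Returns one (exemplar_id, position) key per input patch.
    """
    matches = []
    for patch in photo_patches:
        best = min(exemplar_patches,
                   key=lambda k: patch_cost(patch, exemplar_patches[k]))
        matches.append(best)
    return matches

# Toy example: 2-pixel "patches" pooled from two exemplars.
photo = [[10, 12], [200, 205]]
pool = {
    ("ex1", 0): [11, 12],    # close to the first photo patch
    ("ex1", 1): [90, 95],
    ("ex2", 0): [198, 204],  # close to the second photo patch
}
print(best_correspondences(photo, pool))  # [('ex1', 0), ('ex2', 0)]
```

Because matches may come from different exemplars, adjacent output patches can disagree at facial-component boundaries, which is exactly why the paper follows this step with edge-preserving post-filtering.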
Joint Face Hallucination and Deblurring via Structure Generation and Detail Enhancement
We address the problem of restoring a high-resolution face image from a
blurry low-resolution input. This problem is difficult as super-resolution and
deblurring need to be tackled simultaneously. Moreover, existing algorithms
cannot handle face images well, as low-resolution face images do not contain
much texture, which is especially critical for deblurring. In this paper, we propose
an effective algorithm by utilizing the domain-specific knowledge of human
faces to recover high-quality faces. We first propose a facial-component-guided
deep Convolutional Neural Network (CNN) to restore a coarse face image, denoted
as the base image; the facial component guidance is automatically generated
from the input face image. However, the CNN-based method cannot
handle image details well. We further develop a novel exemplar-based detail
enhancement algorithm via facial component matching. Extensive experiments show
that the proposed method outperforms the state-of-the-art algorithms both
quantitatively and qualitatively.
Comment: In IJCV 201
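The two-stage idea — restore a coarse base image first, then transfer high-frequency detail from a matched exemplar — can be illustrated with a 1-D toy decomposition. Here a box blur stands in for the structure-generation stage, and the transferred "detail" is a matched exemplar's high-frequency residual; this is a hedged simplification of the paper's pipeline, not its actual implementation:

```python
def box_blur(signal, radius=1):
    """Simple box blur; stands in for the coarse structure stage."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def detail_enhance(base, exemplar):
    """Add the exemplar's high-frequency residual onto the base image.

    base: coarse restored signal (output of the structure stage).
    exemplar: sharp signal whose fine detail is transferred, mimicking
    the facial-component matching step at a single matched location.
    """
    residual = [e - s for e, s in zip(exemplar, box_blur(exemplar))]
    return [b + r for b, r in zip(base, residual)]

# Toy example: a blurred "face edge" and a sharp exemplar of the same edge.
blurry = box_blur([0, 0, 10, 10, 10])  # coarse base; the edge is smoothed
sharp_exemplar = [0, 0, 10, 10, 10]    # matched exemplar with a crisp edge
restored = detail_enhance(blurry, sharp_exemplar)
print([round(v, 2) for v in restored])  # the crisp edge is recovered
```

When the exemplar matches the underlying content, adding its residual back restores the sharp transition that the coarse stage smoothed away, which is the intuition behind enhancing the CNN's base image with exemplar details.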