Separating Reflection and Transmission Images in the Wild
The reflections caused by common semi-reflectors, such as glass windows, can
impact the performance of computer vision algorithms. State-of-the-art methods
can remove reflections on synthetic data and in controlled scenarios. However,
they are based on strong assumptions and do not generalize well to real-world
images. Contrary to a common misconception, real-world images are challenging
even when polarization information is used. We present a deep learning approach
to separate the reflected and the transmitted components of the recorded
irradiance, which explicitly uses the polarization properties of light. To
train it, we introduce an accurate synthetic data generation pipeline, which
simulates realistic reflections, including those generated by curved and
non-ideal surfaces, non-static scenes, and high-dynamic-range scenes.
Comment: accepted at ECCV 201
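The polarization cue this abstract relies on can be made concrete with a textbook model (not the paper's network): if the reflected layer is treated as fully linearly polarized and the transmitted layer as unpolarized, three captures through a linear polarizer at 0°, 45°, and 90° determine both layers per pixel in closed form. The Malus-law model and all names below are illustrative assumptions, a minimal sketch rather than the method described in the paper.

```python
import math

def separate_polarized(i0, i45, i90):
    """Per-pixel closed-form separation under a simplified model:
    I(phi) = T/2 + R * cos^2(phi - theta), where the transmitted layer T
    is unpolarized and the reflected layer R is linearly polarized at
    angle theta. Three polarizer angles give three equations for the
    three unknowns (T, R, theta)."""
    a = (i0 + i90) / 2.0          # DC term: T/2 + R/2
    b = (i0 - i90) / 2.0          # (R/2) * cos(2*theta)
    c = i45 - a                   # (R/2) * sin(2*theta)
    r = 2.0 * math.hypot(b, c)    # reflected intensity
    t = 2.0 * a - r               # transmitted intensity
    theta = 0.5 * math.atan2(c, b)
    return t, r, theta

# Synthesize three measurements from known T, R, theta, then recover them.
T, R, theta = 0.6, 0.3, 0.4
meas = [T / 2 + R * math.cos(phi - theta) ** 2
        for phi in (0.0, math.pi / 4, math.pi / 2)]
t_rec, r_rec, th_rec = separate_polarized(*meas)
```

Real semi-reflectors violate these assumptions (partial polarization, curved surfaces, moving scenes), which is precisely why the abstract argues a learned model is needed on top of the physics.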
A CMOS-Based Lab-on-Chip Array for Combined Magnetic Manipulation and Opto-Chemical Sensing
Accepted version
Learning to Generate Time-Lapse Videos Using Multi-Stage Dynamic Generative Adversarial Networks
Taking a photo outside, can we predict the immediate future, e.g., how
the clouds will move across the sky? We address this problem by presenting a generative
adversarial network (GAN) based two-stage approach to generating realistic
time-lapse videos of high resolution. Given the first frame, our model learns
to generate long-term future frames. The first stage generates videos of
realistic contents for each frame. The second stage refines the generated video
from the first stage by enforcing it to be closer to real videos with regard to
motion dynamics. To further encourage vivid motion in the final generated
video, a Gram matrix is employed to model the motion more precisely. We build a
large-scale time-lapse dataset and test our approach on it.
Using our model, we are able to generate realistic videos of 32 frames. Quantitative and qualitative experimental results
have demonstrated the superiority of our model over the state-of-the-art
models.
Comment: To appear in Proceedings of CVPR 201
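The Gram-matrix motion term can be illustrated with a minimal version of the style-loss computation it borrows from: Gram matrices of flattened per-frame features are compared between generated and real clips, matching second-order feature statistics rather than pixels. The feature shapes and the exact loss form below are illustrative assumptions, not the paper's formulation.

```python
def gram(features):
    """Gram matrix G[i][j] = <f_i, f_j> / N for a list of C feature
    vectors of length N (e.g. flattened per-frame activations)."""
    n = len(features[0])
    return [[sum(fi[k] * fj[k] for k in range(n)) / n for fj in features]
            for fi in features]

def gram_loss(feats_fake, feats_real):
    """Mean squared difference between the two Gram matrices, used as a
    statistic-matching term to encourage realistic motion/texture."""
    gf, gr = gram(feats_fake), gram(feats_real)
    c = len(gf)
    return sum((gf[i][j] - gr[i][j]) ** 2
               for i in range(c) for j in range(c)) / (c * c)

# Two toy feature sets: identical features give zero loss,
# differing features give a positive loss.
f = [[1.0, 2.0, 3.0], [0.5, 0.0, 1.5]]
g = [[1.0, 2.0, 2.0], [0.5, 0.5, 1.5]]
```

Because the Gram matrix discards spatial arrangement and keeps only channel correlations, it penalizes implausible motion statistics without forcing the generated frames to match any single real video exactly.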
A Decoupled 3D Facial Shape Model by Adversarial Training
Data-driven generative 3D face models are used to compactly encode facial
shape data into meaningful parametric representations. A desirable property of
these models is their ability to effectively decouple natural sources of
variation, in particular identity and expression. While factorized
representations have been proposed for that purpose, they are still limited in
the variability they can capture and may present modeling artifacts when
applied to tasks such as expression transfer. In this work, we explore a new
direction with Generative Adversarial Networks and show that they contribute to
better face modeling performances, especially in decoupling natural factors,
while also achieving more diverse samples. To train the model we introduce a
novel architecture that combines a 3D generator with a 2D discriminator that
leverages conventional CNNs, where the two components are bridged by a geometry
mapping layer. We further present a training scheme, based on auxiliary
classifiers, to explicitly disentangle identity and expression attributes.
Through quantitative and qualitative results on standard face datasets, we
illustrate the benefits of our model and demonstrate that it outperforms
competing state-of-the-art methods in terms of decoupling and diversity.
Comment: camera-ready version for ICCV'1
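The "geometry mapping layer" that bridges the 3D generator and the 2D CNN discriminator can be pictured as rasterizing vertex coordinates into a fixed-topology "geometry image" whose three channels hold x, y, z. The precomputed pixel-to-vertex index grid below is a hypothetical stand-in for the paper's actual mapping, sketched only to show why a conventional 2D CNN can then consume a 3D shape.

```python
def geometry_image(vertices, uv_index):
    """Turn a list of 3D vertices into an H x W x 3 'geometry image' by
    looking up, for each pixel, a precomputed vertex index. A 2D CNN
    discriminator can then treat the result like an ordinary image."""
    return [[list(vertices[idx]) for idx in row] for row in uv_index]

# A toy 4-vertex shape mapped onto a 2x2 geometry image.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
         (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
uv = [[0, 1], [2, 3]]
img = geometry_image(verts, uv)
```

Since the index grid is fixed across all faces (shared mesh topology), gradients from the 2D discriminator flow back through the lookup to the generated vertex positions.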