Discriminative Transfer Learning for General Image Restoration
Recently, several discriminative learning approaches have been proposed for
effective image restoration, achieving a convincing trade-off between image
quality and computational efficiency. However, these methods require separate
training for each restoration task (e.g., denoising, deblurring, demosaicing)
and problem condition (e.g., noise level of input images). This makes it
time-consuming and difficult to encompass all tasks and conditions during
training. In this paper, we propose a discriminative transfer learning method
that incorporates formal proximal optimization and discriminative learning for
general image restoration. The method requires only a single training pass and
allows for reuse across various problems and conditions while achieving an
efficiency comparable to previous discriminative approaches. Furthermore, once
trained, our model can be easily transferred to new likelihood terms to
solve untrained tasks, or be combined with existing priors to further improve
image restoration quality.
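To make the idea concrete, here is a minimal sketch of the kind of proximal splitting the abstract describes: a task-specific data (likelihood) term is handled through its gradient, while the image prior enters only through a proximal operator, so the same prior can be reused across tasks and conditions. The function names, the quadratic data term, and the soft-threshold stand-in for a learned prior are illustrative assumptions, not the paper's actual components.

```python
import numpy as np

def restore(y, A, At, prior_prox, step=0.1, iters=50):
    """Proximal-gradient restoration sketch.

    Minimizes 0.5 * ||A(x) - y||^2 + R(x), where the regularizer R
    enters only through its proximal operator `prior_prox`. Swapping
    A / At changes the task (denoising, deblurring, demosaicing)
    without retraining the prior.
    """
    x = At(y)                                  # simple initialization
    for _ in range(iters):
        grad = At(A(x) - y)                    # gradient of the data term
        x = prior_prox(x - step * grad, step)  # proximal step on the prior
    return x

# Toy usage: denoising, where the forward operator A is the identity.
identity = lambda v: v
noisy = np.random.rand(64, 64)
# A soft-threshold stands in for a learned proximal operator here.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - 0.05 * t, 0.0)
clean = restore(noisy, identity, identity, soft)
```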
A Compressive Multi-Mode Superresolution Display
Compressive displays are an emerging technology exploring the co-design of
new optical device configurations and compressive computation. Previous work
has shown how to improve the dynamic range of displays and to facilitate
high-quality light field or glasses-free 3D image synthesis. In this paper, we
introduce a new multi-mode compressive display architecture that supports
switching between 3D and high dynamic range (HDR) modes as well as a new
super-resolution mode. The proposed hardware consists of readily-available
components and is driven by a novel splitting algorithm that computes the pixel
states from a target high-resolution image. In effect, the display pixels
present a compressed representation of the target image that is perceived as a
single, high-resolution image.
Comment: Technical report
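As a rough illustration of what such a splitting algorithm computes, the toy sketch below factorizes a target image into two multiplicative panel layers using standard rank-1 nonnegative multiplicative updates. This is a generic stand-in under assumed notation, not the paper's algorithm or its display model.

```python
import numpy as np

def factor_two_layers(target, iters=100, eps=1e-9):
    """Toy rank-1 factorization: target[i, j] ~= front[i] * rear[j].

    Each viewing ray passes through one front pixel and one rear
    pixel, so the perceived intensity is their product; the updates
    are the classic NMF multiplicative rules restricted to rank one.
    """
    h, w = target.shape
    front = np.random.rand(h)
    rear = np.random.rand(w)
    for _ in range(iters):
        front *= (target @ rear) / (front * (rear @ rear) + eps)
        rear *= (target.T @ front) / (rear * (front @ front) + eps)
    return front, rear

target = np.random.rand(32, 32)          # stand-in target image
front, rear = factor_two_layers(target)
reconstruction = np.outer(front, rear)   # what the viewer would perceive
```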
DeepVoxels: Learning Persistent 3D Feature Embeddings
In this work, we address the lack of 3D understanding of generative neural
networks by introducing a persistent 3D feature embedding for view synthesis.
To this end, we propose DeepVoxels, a learned representation that encodes the
view-dependent appearance of a 3D scene without having to explicitly model its
geometry. At its core, our approach is based on a Cartesian 3D grid of
persistent embedded features that learn to make use of the underlying 3D scene
structure. Our approach combines insights from 3D geometric computer vision
with recent advances in learning image-to-image mappings based on adversarial
loss functions. DeepVoxels is supervised using a 2D re-rendering loss, without
requiring a 3D reconstruction of the scene, and enforces perspective and
multi-view geometry in a principled manner. We apply our persistent 3D scene
representation to the problem of novel view synthesis, demonstrating
high-quality results for a variety of challenging scenes.
Comment: Video: https://www.youtube.com/watch?v=HM_WsZhoGXw
Supplemental material: https://drive.google.com/file/d/1BnZRyNcVUty6-LxAstN83H79ktUq8Cjp/view?usp=sharing
Code: https://github.com/vsitzmann/deepvoxels
Project page: https://vsitzmann.github.io/deepvoxels
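For intuition, the sketch below shows the core data structure the abstract describes: a learnable Cartesian grid of features, sampled trilinearly at 3D query points and decoded to color, so it can be trained purely against 2D images. The grid size, feature width, and two-layer decoder are illustrative assumptions, not the paper's full architecture.

```python
import torch
import torch.nn.functional as F

class VoxelFeatureGrid(torch.nn.Module):
    """Minimal persistent 3D feature grid in the spirit of DeepVoxels.

    A learnable C-channel D x H x W grid is trilinearly sampled at 3D
    points; a small decoder maps sampled features to RGB. Supervision
    can then come from a 2D re-rendering loss on projected pixels.
    """
    def __init__(self, channels=16, size=32):
        super().__init__()
        self.grid = torch.nn.Parameter(
            0.01 * torch.randn(1, channels, size, size, size))
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(channels, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3),  # RGB output
        )

    def forward(self, points):
        # points: (N, 3) in [-1, 1]^3, e.g. samples along camera rays.
        p = points.view(1, -1, 1, 1, 3)
        feats = F.grid_sample(self.grid, p, align_corners=True)  # (1, C, N, 1, 1)
        feats = feats.view(self.grid.shape[1], -1).t()           # (N, C)
        return self.decoder(feats)

model = VoxelFeatureGrid()
rgb = model(torch.rand(128, 3) * 2 - 1)  # compare against observed 2D pixels
```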