Convolutional Color Constancy
Color constancy is the problem of inferring the color of the light that
illuminated a scene, usually so that the illumination color can be removed.
Because this problem is underconstrained, it is often solved by modeling the
statistical regularities of the colors of natural objects and illumination. In
contrast, in this paper we reformulate the problem of color constancy as a 2D
spatial localization task in a log-chrominance space, thereby allowing us to
apply techniques from object detection and structured prediction to the color
constancy problem. By directly learning how to discriminate between correctly
white-balanced images and poorly white-balanced images, our model is able to
improve performance on standard benchmarks by nearly 40%.
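The key reformulation above can be made concrete with a small sketch. In the log-chrominance space the paper uses, each RGB pixel maps to u = log(g/r), v = log(g/b), and tinting the whole image by an illuminant simply translates this 2D histogram, which is why illuminant estimation becomes a localization task. The function name and bin settings below are illustrative, not the paper's:

```python
import numpy as np

def log_chroma_histogram(rgb, n_bins=64, lo=-2.0, hi=2.0):
    """Build a normalized 2D histogram of log-chrominance values.

    Each pixel (r, g, b) maps to u = log(g/r), v = log(g/b).
    Scaling the image by an illuminant color shifts every (u, v)
    by the same offset, so estimating the illuminant reduces to
    locating a single translation in this 2D space.
    """
    rgb = rgb.reshape(-1, 3).astype(np.float64)
    valid = np.all(rgb > 0, axis=1)        # log is undefined at zero
    r, g, b = rgb[valid].T
    u = np.log(g / r)
    v = np.log(g / b)
    hist, _, _ = np.histogram2d(u, v, bins=n_bins,
                                range=[[lo, hi], [lo, hi]])
    total = hist.sum()
    return hist / total if total > 0 else hist

# A flat image under a reddish illuminant: every pixel shares the
# same chrominance, so all probability mass lands in one bin.
img = np.ones((8, 8, 3)) * np.array([1.5, 1.0, 1.0])
h = log_chroma_histogram(img)
print(h.max())  # all mass in a single bin -> 1.0
```

The translation-invariance of this representation is what lets sliding-window techniques from object detection (here, effectively a learned filter convolved over the histogram) score candidate illuminants.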
Aperture Supervision for Monocular Depth Estimation
We present a novel method to train machine learning algorithms to estimate
scene depths from a single image, by using the information provided by a
camera's aperture as supervision. Prior works use a depth sensor's outputs or
images of the same scene from alternate viewpoints as supervision, while our
method instead uses images from the same viewpoint taken with a varying camera
aperture. To enable learning algorithms to use aperture effects as supervision,
we introduce two differentiable aperture rendering functions that use the input
image and predicted depths to simulate the depth-of-field effects caused by
real camera apertures. We train a monocular depth estimation network end-to-end
to predict the scene depths that best explain these finite aperture images as
defocus-blurred renderings of the input all-in-focus image.Comment: To appear at CVPR 2018 (updated to camera ready version
Burst Denoising with Kernel Prediction Networks
We present a technique for jointly denoising bursts of images taken from a
handheld camera. In particular, we propose a convolutional neural network
architecture for predicting spatially varying kernels that can both align and
denoise frames, a synthetic data generation approach based on a realistic noise
formation model, and an optimization guided by an annealed loss function to
avoid undesirable local minima. Our model matches or outperforms the
state-of-the-art across a wide range of noise levels on both real and synthetic
data.
Comment: To appear in CVPR 2018 (spotlight). Project page:
http://people.eecs.berkeley.edu/~bmild/kpn
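The core operation a kernel prediction network feeds into can be sketched directly. In this hedged illustration (the function name and tensor layout are assumptions, and the kernels here are given rather than predicted by a CNN), each output pixel is a weighted sum over a KxK window of every burst frame; because the per-pixel, per-frame kernels can place their mass off-center, the same operation both aligns frames and averages away noise:

```python
import numpy as np

def apply_predicted_kernels(burst, kernels):
    """Merge a burst with spatially varying per-frame kernels.

    burst:   (N, H, W) noisy frames from one handheld burst
    kernels: (N, H, W, K, K) one KxK kernel per pixel per frame,
             expected to sum (roughly) to one across frames and taps
             so the output stays at the input brightness.
    Shifted kernel mass aligns a frame; spread mass denoises it.
    """
    n, h, w = burst.shape
    k = kernels.shape[-1]
    r = k // 2
    pad = np.pad(burst, ((0, 0), (r, r), (r, r)), mode="edge")
    out = np.zeros((h, w), dtype=np.float64)
    for i in range(n):
        for dy in range(k):
            for dx in range(k):
                out += kernels[i, :, :, dy, dx] * pad[i, dy:dy + h, dx:dx + w]
    return out

# Uniform kernels over a constant burst reproduce the constant:
# a plain temporal-and-spatial average.
burst = np.full((4, 6, 6), 2.0)
kernels = np.full((4, 6, 6, 3, 3), 1.0 / (4 * 9))
merged = apply_predicted_kernels(burst, kernels)
```

In the paper the network predicts these kernels from the burst itself, so the merge adapts per pixel to local motion and noise level rather than using fixed weights as above.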