Examining Autoexposure for Challenging Scenes
Autoexposure (AE) is a critical step applied by camera systems to ensure
properly exposed images. While current AE algorithms are effective in well-lit
environments with constant illumination, these algorithms still struggle in
environments with bright light sources or scenes with abrupt changes in
lighting. A significant hurdle in developing new AE algorithms for challenging
environments, especially those with time-varying lighting, is the lack of
suitable image datasets. To address this issue, we have captured a new 4D
exposure dataset that provides a large solution space (i.e., shutter speeds
ranging from 1/500 to 15 seconds) over a temporal sequence with moving objects,
bright lights, and varying lighting. In addition, we have designed a software
bright lights, and varying lighting. In addition, we have designed a software
platform to allow AE algorithms to be used in a plug-and-play manner with the
dataset. Our dataset and associated platform enable repeatable evaluation of
different AE algorithms and provide a much-needed starting point to develop
better AE methods. We examine several existing AE strategies using our dataset
and show that most users prefer a simple saliency method for challenging
lighting conditions.
Comment: ICCV 202
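The saliency-driven metering mentioned above can be sketched as a single autoexposure update step. This is a minimal illustration, not the paper's actual algorithm: the saliency map here is an assumed contrast-based stand-in (absolute deviation from mean luminance), and the mid-grey target and smoothing gain are illustrative parameters.

```python
import numpy as np

def saliency_weighted_exposure(img, target=0.18, gain=0.5):
    """One AE update step: weight mean luminance by a crude saliency map
    and return a multiplicative exposure correction toward a mid-grey
    target. Illustrative sketch only.

    img: H x W x 3 array with values in [0, 1].
    Returns a factor > 1 for underexposed frames, < 1 for overexposed.
    """
    # Per-pixel luminance approximated as the channel mean.
    lum = img.mean(axis=-1)
    # Assumed saliency proxy: deviation from global mean luminance,
    # with a small epsilon so uniform frames fall back to uniform weights.
    sal = np.abs(lum - lum.mean()) + 1e-8
    sal = sal / sal.sum()
    # Saliency-weighted mean luminance of the frame.
    weighted_mean = float((sal * lum).sum())
    # Multiplicative correction, damped by `gain` to avoid oscillation
    # under abrupt lighting changes.
    return (target / max(weighted_mean, 1e-8)) ** gain
```

A plug-and-play platform like the one described could call such a function once per frame and map the returned factor onto the nearest available shutter speed in the dataset's solution space.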
DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks
Despite a rapid rise in the quality of built-in smartphone cameras, their
physical limitations - small sensor size, compact lenses, and the lack of
specialized hardware - prevent them from achieving the quality of DSLR
cameras. In this work we present an end-to-end deep learning approach that
bridges this gap by translating ordinary photos into DSLR-quality images. We
propose learning the translation function using a residual convolutional neural
network that improves both color rendition and image sharpness. Since the
standard mean squared loss is not well suited for measuring perceptual image
quality, we introduce a composite perceptual error function that combines
content, color and texture losses. The first two losses are defined
analytically, while the texture loss is learned in an adversarial fashion. We
also present DPED, a large-scale dataset that consists of real photos captured
from three different phones and one high-end reflex camera. Our quantitative
and qualitative assessments reveal that the enhanced image quality is
comparable to that of DSLR-taken photos, while the methodology is generalized
to any type of digital camera.
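The composite perceptual error described above can be sketched as a weighted sum of its three terms. This is a hedged illustration, not the paper's implementation: the content term is plain MSE over caller-supplied feature maps (the method itself uses deep-network activations), the color term compares blurred images so fine texture is ignored, and the adversarial texture term is passed in as a precomputed scalar since training a discriminator is out of scope here. The weights are placeholders.

```python
import numpy as np

def blur(img, k=2):
    """Crude k x k average-pool blur, standing in for a Gaussian blur."""
    h, w = img.shape[:2]
    h2, w2 = h - h % k, w - w % k
    x = img[:h2, :w2]
    return x.reshape(h2 // k, k, w2 // k, k, -1).mean(axis=(1, 3))

def content_loss(feat_a, feat_b):
    """MSE in a feature space supplied by the caller."""
    return float(np.mean((feat_a - feat_b) ** 2))

def color_loss(a, b):
    """MSE between blurred images: compares color rendition while
    discounting high-frequency texture."""
    return float(np.mean((blur(a) - blur(b)) ** 2))

def composite_loss(a, b, feat_a, feat_b, texture_term, w=(1.0, 0.1, 0.4)):
    """Weighted sum of content, color, and texture terms. In the real
    method the texture term is learned adversarially; here it is a
    scalar argument. Weights `w` are illustrative, not the paper's."""
    wc, wcol, wt = w
    return (wc * content_loss(feat_a, feat_b)
            + wcol * color_loss(a, b)
            + wt * texture_term)
```

In this structure the analytically defined terms (content, color) can be checked in isolation, while the adversarial texture signal is swapped in from whatever discriminator the training loop provides.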