Location of Repository: Harvard University

By Trevor Owens, Kate Saenko, Ayan Chakrabarti, Ying Xiong, Todd Zickler and Trevor Darrell

Abstract

Color is known to be highly discriminative for many object recognition tasks, but is difficult to infer from uncontrolled images in which the illuminant is not known. Traditional methods for color constancy can improve surface reflectance estimates from such uncalibrated images, but their output depends significantly on the background scene. In many recognition and retrieval applications, we have access to image sets that contain multiple views of the same object in different environments; we show in this paper that correspondences between these images provide important constraints that can improve color constancy. We introduce the multi-view color constancy problem, and present a method to recover estimates of underlying surface reflectance based on joint estimation of these surface properties and the illuminants present in multiple images. The method can exploit image correspondences obtained by various alignment techniques, and we show examples based on matching local region features. Our results show that multi-view constraints can significantly improve estimates of both scene illuminants and object color (surface reflectance) when compared to a baseline single-view method.
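To illustrate how cross-image correspondences constrain the joint estimation, the sketch below assumes a simple diagonal (von Kries) illuminant model, in which each observed color is the elementwise product of a per-image illuminant and a per-surface reflectance. In log space this becomes a linear system that can be solved jointly for all illuminants and reflectances once corresponding surfaces are shared across views. The function name, data layout, gauge-fixing choice, and least-squares solver are illustrative assumptions for this sketch, not the estimator described in the paper.

```python
import numpy as np

def joint_estimate(observations, n_images, n_surfaces):
    """Jointly estimate per-image illuminants and per-surface reflectances.

    observations: list of (image_idx, surface_idx, rgb) tuples with rgb > 0,
    where the same surface index may appear in several images (the
    multi-view correspondences). Returns (illuminants, reflectances),
    defined up to a per-channel scale that is fixed here by anchoring the
    first illuminant to white.
    """
    # Diagonal model: c[i, s] = L[i] * r[s] elementwise, so
    # log c[i, s] = log L[i] + log r[s] is linear in the unknowns,
    # and each color channel is an independent least-squares problem.
    n_obs = len(observations)
    log_L = np.zeros((n_images, 3))
    log_r = np.zeros((n_surfaces, 3))
    for ch in range(3):
        A = np.zeros((n_obs + 1, n_images + n_surfaces))
        b = np.zeros(n_obs + 1)
        for row, (i, s, rgb) in enumerate(observations):
            A[row, i] = 1.0                 # coefficient of log L[i]
            A[row, n_images + s] = 1.0      # coefficient of log r[s]
            b[row] = np.log(rgb[ch])
        A[n_obs, 0] = 1.0                   # gauge: log L[0] = 0 (white illuminant)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        log_L[:, ch] = x[:n_images]
        log_r[:, ch] = x[n_images:]
    return np.exp(log_L), np.exp(log_r)

# Hypothetical usage: two views of two shared surfaces under unknown illuminants.
obs = [(0, 0, [0.40, 0.50, 0.60]), (0, 1, [0.20, 0.30, 0.10]),
       (1, 0, [0.80, 0.50, 0.30]), (1, 1, [0.40, 0.30, 0.05])]
illuminants, reflectances = joint_estimate(obs, n_images=2, n_surfaces=2)
```

With only a single view this system is underdetermined (any illuminant scaling can be absorbed into the reflectances); the shared surfaces across views are what tie the per-image illuminants together and make the joint estimate well posed, which is the intuition behind the multi-view constraints above.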

Year: 2013
OAI identifier: oai:CiteSeerX.psu:10.1.1.353.2042
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text, but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v...
  • http://vision.seas.harvard.edu...
