
Interactive light source position estimation for augmented reality with an RGB-D camera

By Bastiaan J. Boom, Sergio Orts-Escolano, Xin X. Ning, Steven McDonagh, Peter Sandilands and Robert B. Fisher


The first hybrid CPU-GPU method for estimating the position of a point light source in a scene recorded by an RGB-D camera is presented. The image and depth information from the Kinect is sufficient to estimate the light position, which allows synthetic objects to be rendered into the scene realistically enough for augmented reality purposes. The method requires no light probe or other physical device. To make it suitable for augmented reality, we developed a hybrid implementation that performs light estimation in under 1 second; this is sufficient for most augmented reality scenarios, because both the light source and the Kinect are typically fixed in position. The method estimates the angle of the light source with an average error of 20°. By rendering synthetic objects into the recorded scene, we illustrate that this accuracy is good enough for the rendered objects to look realistic.

This work is partially supported by the Fish4Knowledge project, funded by the European Union 7th Framework Programme [FP7/2007-2013], by the HiPEAC Network of Excellence, by the Valencian Government grant BEFPI/2012/056, and by EPSRC (EP/P504902/1, EP/H012338/1).
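The abstract does not detail the estimation algorithm itself, but the underlying idea of recovering a light from image and depth data can be illustrated with a minimal sketch. Assuming a simple Lambertian shading model (intensity proportional to the dot product of surface normal and light direction), a light direction can be fit by least squares from per-pixel normals (computable from the depth map) and observed intensities. This is only an illustrative assumption, not the paper's actual hybrid CPU-GPU method:

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares fit of a light vector from N surface normals (N x 3)
    and N observed intensities (N,), under a Lambertian shading model
    I = max(n . l, 0). Returns (unit direction, strength).
    Illustrative sketch only; not the paper's method."""
    l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    strength = np.linalg.norm(l)
    return l / strength, strength

# Synthetic check with a known light direction (hypothetical data).
rng = np.random.default_rng(0)
true_l = np.array([0.3, 0.5, 0.81])
true_l /= np.linalg.norm(true_l)
n = rng.normal(size=(500, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)       # random unit normals
shading = np.clip(n @ true_l, 0.0, None)            # Lambertian intensities
lit = shading > 0                                   # the linear model only holds for lit pixels
direction, strength = estimate_light_direction(n[lit], shading[lit])
```

A real pipeline would first derive the normals from the Kinect depth map and would estimate a light *position* rather than only a direction, which is what makes the problem expensive enough to motivate a GPU implementation.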

Topics: Light source estimation, Augmented reality, GPU implementation, RGB-D camera, Computer Science and Artificial Intelligence
Publisher: Wiley
Year: 2017
DOI identifier: 10.1002/cav.1686