
Depth Acquisition from Digital Images



Introduction: Depth acquisition from digital images captured with a conventional camera, by analysing focus/defocus cues that are related to depth through an optical model of the camera, is a popular approach to depth-mapping a 3D scene. Most methods analyse the neighbourhood of a point in an image to infer its depth, which has disadvantages. A more elegant, but more difficult, solution is to evaluate only the single pixel displaying a point in order to infer its depth. This thesis investigates whether a per-pixel method can be implemented without compromising accuracy and generality compared to window-based methods, whilst minimising the number of input images.

Method: A geometric optical model of the camera was used to predict the relationship between focus/defocus and intensity at a pixel. Using input images with different focus settings, this relationship was used to identify the focal-plane depth (i.e. focus setting) at which a point is in best focus, from which the depth of the point can be resolved if the camera parameters are known. Two metrics were implemented: one to identify the best focus setting for a point from the discrete input set, and one to fit a model to the input data in order to estimate the depth of perfect focus of the point on a continuous scale.

Results: The method gave generally accurate results for a simple synthetic test scene, with a relatively low number of input images compared to similar methods. When tested on a more complex scene, the method achieved its objectives of separating complex objects from the background by depth, and resolved a complex 3D surface at a similar resolution to a comparable method that used significantly more input data.

Conclusions: The method demonstrates that it is possible to resolve depth on a per-pixel basis without compromising accuracy and generality, and using a similar amount of input data, compared to more traditional window-based methods.
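The link between defocus and depth that such methods exploit is commonly expressed with a thin-lens blur-circle relation (the form below follows Pentland's classic model; the thesis's exact geometric optical model may differ):

```latex
% Thin-lens blur-circle diameter (a standard model, assumed here for
% illustration). A: aperture diameter, f: focal length,
% v: lens-to-sensor distance, u: object depth.
% The blur vanishes (best focus) when 1/f = 1/u + 1/v.
c = A \, v \left| \frac{1}{f} - \frac{1}{u} - \frac{1}{v} \right|
```

Sweeping the focus setting (i.e. varying v) moves the focal plane through the scene, so the setting that minimises the blur at a pixel reveals the depth u of the point it images, given the camera parameters.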
In practice, the presented method offers a convenient new option for depth-based image-processing applications: the depth map is per-pixel, yet the process of capturing and preparing images for the method is not unduly cumbersome and could easily be automated, unlike the other per-pixel methods reviewed. However, the method still suffers from the general limitations of depth acquisition using images from a conventional camera, which limits its use as a general depth acquisition solution beyond specifically depth-based image-processing applications.
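The two per-pixel metrics described in the Method section can be sketched as follows. This is an illustrative sketch only, not the thesis's implementation: the function name, the use of a three-point Gaussian (log-parabola) fit for the continuous metric, and the assumption of positive responses at equally spaced focus settings are all assumptions introduced here.

```python
import numpy as np

def depth_from_focus(responses, focus_depths):
    """Per-pixel depth from a focus stack (illustrative sketch).

    responses    : (K, H, W) array of positive per-pixel focus responses,
                   one slice per focus setting
    focus_depths : (K,) equally spaced focal-plane depths
    Returns (discrete_depth, continuous_depth), both of shape (H, W).
    """
    responses = np.asarray(responses, dtype=float)
    focus_depths = np.asarray(focus_depths, dtype=float)
    K = len(focus_depths)
    step = focus_depths[1] - focus_depths[0]

    # Metric 1 (discrete): pick the best focus setting per pixel.
    k = np.argmax(responses, axis=0)
    discrete = focus_depths[k]

    # Metric 2 (continuous): fit a Gaussian through the peak response and
    # its two neighbours (a log-parabola), interpolating the depth of
    # perfect focus between the discrete settings.
    kc = np.clip(k, 1, K - 2)           # keep both neighbours in range
    rows, cols = np.indices(k.shape)
    f0 = np.log(responses[kc - 1, rows, cols])
    f1 = np.log(responses[kc, rows, cols])
    f2 = np.log(responses[kc + 1, rows, cols])
    denom = f0 - 2.0 * f1 + f2
    safe = np.where(np.abs(denom) > 1e-12, denom, 1.0)
    offset = np.where(np.abs(denom) > 1e-12,
                      0.5 * step * (f0 - f2) / safe, 0.0)
    continuous = focus_depths[kc] + offset
    return discrete, continuous
```

For a response curve that is exactly Gaussian in depth, the continuous metric recovers the true peak even when it falls between two focus settings, which is the advantage over the discrete metric alone.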

Topics: Depth, Digital Image, Camera, Depth from Focus, Depth from Defocus, Geometric Optical Model, Per-pixel, Focus Settings
Year: 2011
Provided by: Durham e-Theses

