
Accurate depth from defocus estimation with video-rate implementation

By Alex Noel Joseph Raj

Abstract

The science of measuring depth from images at video rate using 'defocus' has been investigated. The method required two differently focused images acquired from a single viewpoint with a single camera. The relative blur between the images was used to determine the in-focus axial point of each pixel and hence its depth.

The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as Rational filters, were designed using a new procedure: the Two-Step Polynomial Approach. The filters designed by the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared with 1.54% for the previous design.

The algorithm required five 2D convolutions to be processed in parallel, and these were implemented on an FPGA using a two-channel, five-stage pipelined architecture, although the precision of the filter coefficients and intermediate variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggested that the pipelined processor provided depth estimates comparable in accuracy to the full-precision Matlab output, and generated 400 x 400 pixel depth maps in 13.06 ms, faster than video rate.

The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to position the external aperture optimally. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and thus provided more accurate depth estimates.
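The quoted reduction from 49 to 10 multipliers per convolution is consistent with a 7 x 7 kernel that is symmetric about both axes and both diagonals (eight-fold, or octant, symmetry): samples that share a coefficient can be pre-added so that only one multiply per distinct coefficient remains. The sketch below, in Python, illustrates that folding under those assumptions; it is not the thesis's FPGA pipeline, and the 7 x 7 size and function names are illustrative.

```python
import numpy as np

def fold_octant_symmetric(window):
    """Pre-add the samples of a 7x7 window that share a kernel coefficient
    under eight-fold (octant) symmetry, leaving 10 distinct groups.
    Assumes the kernel is symmetric about both axes and both diagonals."""
    assert window.shape == (7, 7)
    c = 3  # centre index of the window
    groups = {}
    for y in range(7):
        for x in range(7):
            dy, dx = abs(y - c), abs(x - c)
            key = (max(dy, dx), min(dy, dx))   # canonical octant position
            groups[key] = groups.get(key, 0.0) + float(window[y, x])
    return groups  # exactly 10 entries for a 7x7 window

def filter_pixel(window, coeffs):
    """One output sample using 10 multiplications instead of 49.
    `coeffs` maps each canonical octant position to its kernel value."""
    return sum(coeffs[k] * v for k, v in fold_octant_symmetric(window).items())
```

In hardware the same idea becomes adders placed ahead of the multipliers, which is presumably how the pipelined convolver achieves the stated saving.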
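Phase correlation recovers the shift between two images from the peak of the inverse FFT of their normalised cross-power spectrum. A minimal translational sketch in Python/NumPy follows; the thesis applies the same principle to radial shifts caused by magnification, which this illustration does not reproduce, and the function name is an assumption.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer-pixel translation between two equally sized
    grayscale images from the peak of the phase-correlation surface."""
    F_a = np.fft.fft2(img_a)
    F_b = np.fft.fft2(img_b)
    cross = F_a * np.conj(F_b)
    cross /= np.abs(cross) + 1e-12           # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap peak coordinates that correspond to negative shifts.
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

# Example with hypothetical image arrays:
# dy, dx = phase_correlation_shift(near_focused, far_focused)
```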

Topics: TA, TR
OAI identifier: oai:wrap.warwick.ac.uk:2791


Citations

  1. (2006). 3-d shape estimation and image restoration: Exploiting defocus and motion-blur, doi
  2. (2003). 3d shape from anisotropic diffusion, doi
  3. (2003). A bin picking system based on depth from doi
  4. (1972). A class of algorithms for fast digital image registration, doi
  5. (2005). A geometric approach to shape from defocus, doi
  6. (2008). A New Approach to 3D Shape Recovery of Local Planar Surface Patches from Shift-Variant Blurred Images, doi
  7. (1987). A new sense for depth of field, doi
  8. A Perspective on Range Finding Techniques for Computer Vision, doi
  9. (1989). A simple, real-time range camera, doi
  10. (2004). A sub-pixel correspondence search technique for computer vision applications,
  11. (1997). A Variational Approach to Recovering Depth From Defocused Images, doi
  12. (2002). A Variational Approach to Shape From Defocus, doi
  13. (2001). A video-rate sensor based on depth from defocus, doi
  14. (2008). A.Duci, A Theory of Defocus via Fourier Analysis, doi
  15. (1995). An accurate recovery of 3D shape from image focus, doi
  16. (1996). An FFT based technique for translation, rotation and scale - invariant image registration, doi
  17. (1993). An investigation of methods for determining depth from defocus, doi
  18. (1999). An MRF model-based approach to simultaneous recovery of depth and restoration from defocused images, doi
  19. (1975). and E.Wolf, Principles of Optics, doi
  20. (1988). Applied Photographic Optics, Focal Press, London and
  21. (1994). Automatic modelling of 3d natural objects from multiple images, doi
  22. (1964). Binocular depth perception without familiarity cues, doi
  23. (1979). Computational approaches to image understanding,
  24. (1976). Cooperative Computation of Stereo Disparity, doi
  25. (1992). Deblurring subject to nonnegativity constraints, doi
  26. (2004). Depth estimation based on thick oriented edges in images, doi
  27. (1994). Depth estimation from image defocus using fuzzy logic, doi
  28. (1980). Depth from camera motion in a real world scene, doi
  29. (1992). Depth from defocus and rapid auto focussing: a practical approach, doi
  30. (2006). Depth from defocus by zooming using thin lens-based zoom model, doi
  31. Depth from Defocus using Radial Basis Function Networks, doi
  32. (2001). Depth from defocus-estimation in spatial domain, doi
  33. (1999). Depth from defocus: a real aperture imaging approach, doi
  34. (1994). Depth from defocus: spatial domain approach, doi
  35. (1987). Depth from Focus,
  36. (2006). Depth Perception from three blurred images, doi
  37. (1988). Depth recovery from blurred edges, doi
  38. (2008). Depth recovery using defocus blur at infinity, doi
  39. (1980). Edge-based Stereo Correlation,
  40. (2007). Estimation of Image Magnification using Phase doi
  41. (2002). Extension of Phase Correlation to sub-pixel registration, doi
  42. (2007). Fast implementation of generalized median filter, doi
  43. (1976). Focus optimization criteria for computer image processing,
  44. (1995). G.Surya, Focused image recovery from two defocused images recorded with different camera setting, doi
  45. (2002). Homotopy-based estimation of depth cues in spatial domain, doi
  46. (1992). I.Weiss, Smoothed differentiation filters for images, doi
  47. (2006). Image based calibration of spatial domain depth from defocus and application to automatic focus tracking, doi
  48. (2003). Improved estimation of defocus blur and spatial shifts in spatial domain: homotopy-based approach, doi
  49. (1997). Integration of defocus and focus analysis with stereo for 3d shape recovery, doi
  50. (2005). Integration of multiresolution image segmentation and neural networks for object depth recovery, doi
  51. (2002). Learning Shape from Defocus, doi
  52. (2008). Local Hull-Based Surface Construction of Volumetric Data from Silhouettes, doi
  53. (2008). Localized and Computationally Efficient Approach to Shift-variant Image Deblurring, doi
  54. (1987). Marching cubes: A high resolution 3D surface reconstruction algorithm, doi
  55. (2008). Measurement of point spread function of a noisy imaging system, doi
  56. (1994). Microscopic Shape from Focus Using Active Illumination, doi
  57. (1995). Minimal operator set for texture invariant depth from defocus, doi
  58. (1994). Modelling and Calibration of automated zoom lens, doi
  59. (1997). Moment and Hypergeometric Filters for High Precision Computation of Focus, Stereo and Optical Flow, doi
  60. (1995). Moment filters for high precision computation of focus and stereo, doi
  61. (1979). Motion and structure from optical flow, doi
  62. (2000). N.Kiryati, Depth from Defocus vs. Stereo: How Different Really are They?, doi
  63. (2005). Novel fpga-based implementation of median and weighted median filters for image processing, doi
  64. (2003). Observing shape from defocused images, doi
  65. (2007). On defocus, diffusion and depth estimation, doi
  66. (1995). On the behaviour of the Laplacian of Gaussian for junction models,
  67. (1988). Parallel depth recovery by changing camera parameters, doi
  68. (1998). Passive depth from defocus using spatial domain approach, doi
  69. (1980). Point pattern matching by relaxation, doi
  70. (1988). Pyramid based depth from focus, doi
  71. (1993). Rapid octree construction from image sequence, doi
  72. (1998). Rational filters for passive depth from defocus, doi
  73. (1999). Real time 3D estimation using Depth from Defocus,
  74. (2008). Real time monocular Depth from Defocus, doi
  75. (1996). Real-time focus range sensor, doi
  76. (2008). Recent Trends doi
  77. (2008). Recovery of relative depth from single observation using uncalibrated (real- aperture) doi
  78. (1989). Registering Landsat images by point matching, doi
  79. (1987). Registration of translated and rotated images using Finite Fourier Transforms, doi
  80. (2008). Regularized depth from defocus doi
  81. (1986). Robot Vision, doi
  82. (1994). Scherock and B.Girod, Simple range cameras based on focal error, doi
  83. (2000). Shape and Radiance Estimation from Informationdivergence of Blurred Images, doi
  84. (1994). Shape from Focus, doi
  85. (1999). Shape from Shading: A Survey, doi
  86. (2005). Shape from silhouette: Image pixels for marching cubes,
  87. (2007). Shape recovery using stochastic heat flow, doi
  88. (2003). Simultaneous estimation of super-resolved scene and depth map from low resolution defocused observations, doi
  89. (1988). Surface reconstruction by dynamic integration of focus, camera vergence, and stereo, in doi
  90. (1992). Survey of image registration techniques, doi
  91. (2005). Tae-Sun Choi, 3D shape recovery from image defocus using Wavelet analysis, doi
  92. (2004). Tae-Sun Choi, Depth from defocus using Wavelet Transforms doi
  93. (1997). Telecentric Optics for focus, doi
  94. (1965). The Fourier transform and its applications, McGraw-Hill Inc,
  95. (1975). The phase correlation image alignment method,
  96. (2006). The Reverse Projection Correlation Principle for Depth from Defocus, doi
  97. (1994). The visual hull concept for silhouettes-based image understanding, doi
  98. (1980). Theory of Edge Detection, doi
  99. (1990). Two Dimensional Imaging Signal and Image Processing,
  100. (1995). Two Dimensional Imaging, doi
  101. (2007). V.Kreinovich and V.Sinyansky, Images with Uncertainty: Efficient Algorithms for Shift, Rotation, Scaling and Registration, and their Application to Geosciences, Soft Computing in Image Processing, Recent Advances, doi
  102. Wavelet Transform in Depth Recovery doi
  103. (1991). Why least squares and maximum entropy? An axiomatic approach to linear inverse problems, doi
