
Colour depth-from-defocus incorporating experimental point spread function measurements

By Christopher David Claxton

Abstract

Depth-From-Defocus (DFD) is a monocular computer vision technique for creating depth maps from two images taken on the same optical axis with different intrinsic camera parameters. A pre-processing stage for optimally converting colour images to monochrome using a linear combination of the colour planes has been shown to improve the accuracy of the depth map. It was found that the first component formed using Principal Component Analysis (PCA) and a technique that maximises the signal-to-noise ratio (SNR) performed better than an equal weighting of the colour planes under an additive noise model. When the noise is non-isotropic, maximising the SNR improved the Mean Square Error (MSE) of the depth map by a factor of 7.8 compared to an equal weighting and 1.9 compared to PCA. The fractal dimension (FD) of a monochrome image gives a measure of its roughness, and an algorithm was devised to maximise its FD through colour mixing. The formulation using a fractional Brownian motion (fBm) model reduced the SNR and thus produced depth maps that were less accurate than those from PCA or an equal weighting. An active DFD algorithm, called Localisation through Colour Mixing (LCM), has been developed to reduce the image overlap problem using a projected colour pattern. Simulation results showed that LCM produces an MSE 9.4 times lower than equal weighting and 2.2 times lower than PCA.

The Point Spread Function (PSF) of a camera system models how a point source of light is imaged. For depth maps to be created accurately using DFD, a high-precision PSF must be known. Improvements to a sub-sampled, knife-edge based technique are presented that account for non-uniform illumination of the light box; this reduced the MSE by 25%. The Generalised Gaussian is presented as a model of the PSF and shown to be up to 16 times better than the conventional Gaussian and pillbox models.
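The colour-to-monochrome pre-processing stage described above mixes the colour planes with the weights of the first principal component. A minimal sketch of that idea, assuming an H x W x 3 image and treating the three colour planes as variables for PCA (function names here are illustrative, not from the thesis):

```python
import numpy as np

def pca_first_component_weights(image_rgb):
    """Weights of the first principal component of the colour planes.

    The three colour planes are treated as variables; the eigenvector of
    their covariance matrix with the largest eigenvalue gives the mixing
    weights for the linear combination.
    """
    pixels = image_rgb.reshape(-1, 3).astype(float)
    pixels -= pixels.mean(axis=0)          # centre each colour plane
    cov = np.cov(pixels, rowvar=False)     # 3x3 covariance of the planes
    eigvals, eigvecs = np.linalg.eigh(cov)
    w = eigvecs[:, np.argmax(eigvals)]     # first principal component
    return w / np.sum(np.abs(w))           # normalise the weights

def to_monochrome(image_rgb, weights):
    """Linear combination of the colour planes (H x W x 3 -> H x W)."""
    return image_rgb.astype(float) @ weights
```

This captures only the PCA variant; the SNR-maximising weighting discussed in the abstract would additionally require a noise-covariance estimate.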

Topics: TA
OAI identifier: oai:wrap.warwick.ac.uk:4461

Citations

  1. (1998). Is the geometry of nature fractal?
  2. (2003). Computer Vision: A Modern Approach. Upper Saddle River:
  3. (1989). A depth recovery algorithm using defocus information.
  4. (1992). A generalized depth estimation algorithm with a single image.
  5. (2005). A geometric approach to shape from defocus.
  6. (2000). A guide to the use and calibration of detector array equipment. Sira: Sira Electro-Optics Limited,
  7. (1991). A matrix based method for determining depth from focus.
  8. (1998). A moment-preserving approach for depth from defocus. Pattern Recognition,
  9. (1987). A new sense for depth of field.
  10. (1994). A review of methods used to determine the fractal dimension of linear features.
  11. (1995). A simple general and mathematically tractable way to sense depth in a single image.
  12. (1989). A simple, real-time range camera.
  13. (1988). A transformation for ordering multispectral data in terms of image quality with implications for noise removal.
  14. (2002). A variational approach to shape from defocus. In
  15. (2001). A video-rate range sensor based on depthfrom defocus.
  16. (1991). Active lens control for high precision computer imaging.
  17. (1993). An investigation of methods for determining depth from focus.
  18. Analysis of a complex of statistical variables into principal components.
  19. (1992). Application of spatial-domain convolution / deconvolution transform for determining distance from image defocus.
  20. (2005). Area detectors technology and optics -- relations to nature.
  21. Axial behaviour of Cantor ring diffractals.
  22. (2005). CCD characterization for a range of color cameras.
  23. CCD versus CMOS -- has CCD imaging come to an end? In
  24. (2001). CCD vs. CMOS: Facts and fiction. Photonics Spectra,
  25. (1991). Characterizing digital image acquisition devices.
  26. (1970). Charge coupled semiconductor devices.
  27. (2000). Chromatic aberration and depth extraction.
  28. (2000). Color Image Processing and Applications.
  29. (1990). Color reproduction test for CCD image sensors.
  30. (2006). Color separation in forensic image processing.
  31. (2002). Common principles of image acquisition systems and biological vision.
  32. (1992). Computer and Robot Vision.
  33. (2003). Computer derivations of numerical differentiation formulae.
  34. Condition numbers of Gaussian random matrices.
  35. Datasheet for the ICX267AK CCD.
  36. (1999). Demosaicing: image reconstruction from color CCD samples.
  37. (1994). Depth estimation from image defocus using fuzzy logic.
  38. (2002). Depth estimation from image structure.
  39. (1998). Depth from defocus vs. stereo: how different really are they?
  40. (1993). Depth from focus and defocusing.
  41. (1998). Depth measurement by the multi-focus camera.
  42. (1982). Depth of scene from depth of field.
  43. (1988). Depth recovery from blurred edges.
  44. (1990). Depth restoration from defocused images using simulated annealing.
  45. (1992). Depth from defocus and rapid autofocusing: a practical approach.
  46. (1993). Depth from defocus by changing camera aperture: a spatial domain approach.
  47. Depth from defocus estimation in spatial domain.
  48. (1998). Depth from Defocus: A Real Aperture Approach.
  49. (1994). Depth from defocus: a spatial domain approach.
  50. (1992). Depth from defocusing.
  51. (2003). Digital Color Imaging Handbook. Boca Raton:
  52. (2002). Digital Image Processing. Upper Saddle River:
  53. (2004). Digital Signal and Image Processing.
  54. (1987). Direct recovery of depth-map I: differential methods.
  55. (1989). Discrete Fourier transform based depth-from-focus. Image Understanding
  56. (2002). Dueling detectors. OE Magazine,
  57. (1998). Edge operator error estimation incorporating measurements of CCD TV camera transfer function. IEE Proc.-Vis.
  58. (2001). Enhanced depth from defocus estimation: tolerance to spatial displacements.
  59. (1993). Entropy-based depth from focus.
  60. (1997). Estimating fractal dimension with fractal interpolation function models.
  61. (2000). Estimation of 2-D noisy fractional Brownian motion and its applications using wavelets.
  62. (2001). Estimation of depth from defocus as polynomial system identification.
  63. (2002). Estimation of depth on thick edges from sharp and blurred images.
  64. (2001). et al., Transatlantic robot-assisted telesurgery.
  65. (1991). Euclid - The Creation of Mathematics.
  66. (1991). Euclidean and fractal models for the description of rock surface roughness.
  67. (1989). Evaluating the fractal dimension of profiles.
  68. (1970). Experimental verification of the charge coupled device concept.
  69. (2002). Experiments with Mixtures.
  70. (2002). Focus and context taken literally.
  71. (1982). Focus on Vision. Cambridge:
  72. (1995). Focused image recovery from two defocused images recorded with different camera settings.
  73. (2003). Fractal 3-D modeling of asteroids using wavelets on arbitrary meshes.
  74. (1984). Fractal-based description of natural scenes.
  75. (1988). Fractals in Nature: from Characterization to Simulation. In
  76. (1990). Fractals in the Physical Sciences. Manchester:
  77. (1990). Generation of laser speckle with an integrating sphere.
  78. (1995). Generation of simplex lattice points.
  79. (2001). Genetic Algorithms: Concepts and Designs. London: Springer-Verlag London Limited,
  80. (1924). Helmholtz's Treatise on Physiological Optics.
  81. (1995). Holographic elements for modulation transfer function testing of detector arrays.
  82. Home based 3D entertainment - an overview.
  83. (1967). How long is the coastline of Britain? Statistical self-similarity and fractional dimension. Science,
  84. (2004). Image denoising using fractal and wavelet-based methods.
  85. (2000). Image Noise Models. In
  86. (1999). Image Processing, Analysis, and Machine Vision. Pacific Grove:
  87. (2005). Image processing: principles and applications. Hoboken:
  88. (1999). Image Processing: The Fundamentals.
  89. (1959). Information Theory and Statistics.
  90. (2001). Influence of nonuniform charge-coupled device pixel response on aperture photometry.
  91. (1998). Integration of multiple-baseline color stereo vision with focus and defocus analysis for 3D shape measurement.
  92. (2005). Integration of multiresolution image segmentation and neural networks for object depth recovery. Pattern Recognition,
  93. (1994). Introductory Semiconductor Device Physics. Hemel Hempstead:
  94. (1987). Inverse problem theory: Methods for data fitting and model parameter estimation.
  95. (2001). Kernel PCA for feature extraction and de-noising in nonlinear regression.
  96. (2002). Learning shape from defocus. In
  97. (1993). Lens practice: Choosing and using Leica lenses. East Sussex: Hove Books,
  98. (2005). Linear demosaicing inspired by the human visual system.
  99. (1989). Linear Integral Equations.
  100. (1988). Machine Vision for Inspection and Measurement.
  101. (1998). Mathematical control theory: deterministic finite dimensional systems.
  102. (1999). Measurement of modulation transfer function of charge-coupled devices using frequency-variable sine grating patterns.
  103. (1995). Measurement of the modulation transfer function of infrared cameras.
  104. (1986). Method for measuring modulation transfer function of charge-coupled device using laser speckle. Optical Engineering,
  105. (2002). Missile Defense Technologies Tools to Counter Terrorism
  106. (1992). Modulation transfer function measurement technique for small-pixel detectors.
  107. (1990). Modulation transfer function of charge-coupled devices.
  108. (1993). Modulation transfer function testing of detector arrays using narrow-band laser speckle.
  109. (2003). Multi-objective super resolution: concepts and examples.
  110. (2002). Multifractal modelling and simulation of rain fields exhibiting spatial heterogeneity.
  111. (2006). Multiframe demosaicing and super-resolution of color images.
  112. (2000). Multiframe image restoration.
  113. (1963). Nanowatt logic using field-effect metal-oxide semiconductor triodes.
  114. (2005). Numerical differentiation of noisy, nonsmooth data. Submitted,
  115. (1998). On approximating rough curves with fractal functions.
  116. (1901). On lines and planes of closest fit to systems of points in space.
  117. (1998). Optimal recovery of depth from defocused images using an MRF model.
  118. (1997). Optimal selection of camera parameters for recovery of depth from defocused images.
  119. (1988). Parallel depth recovery by changing camera parameters.
  120. (1999). Particle depth measurement based on depth-from-defocus.
  121. (1998). Passive depth from defocus using a spatial domain approach.
  122. (2001). Pattern Classification (2nd edition).
  123. (1977). Picture understanding by machine via textural feature extraction.
  124. (1999). Principal component analysis of remote sensing imagery: effects of additive and multiplicative noise.
  125. (2002). Principal Component Analysis.
  126. (1989). Probability and Statistics.
  127. (1988). Pyramid based depth from focus.
  128. (1985). Quantitative methods for analyzing the roughness of the seafloor.
  129. Radiometric CCD camera calibration and noise estimation.
  130. (1978). Random mosaic models for textures.
  131. (1995). Random transparency targets for modulation transfer function measurement in the visible and infrared regions.
  132. (1988). Range estimation by optical differentiation.
  133. (1995). Range measurement from defocus gradient.
  134. (1988). Range from translational motion blurring.
  135. (1998). Rational filters for passive depth from defocus.
  136. (1999). Real time 3-D estimation using depth from defocus.
  137. (1996). Real-time focus range sensor.
  138. Real-time production and delivery of 3D media.
  139. (1994). Real-time focus range sensor.
  140. (1986). Robot Vision.
  141. (1998). Seeing behind the scene: Analysis of photometric properties of occluding edges by the reversed projection blurring model.
  142. Sensor Architectures for Digital Cinematography.
  143. (2000). Shape and radiance estimation from the information-divergence of blurred images. In
  144. (1999). Shape recovery from blurred image using wavelet analysis.
  145. (1992). Shape from focus system for rough surfaces.
  146. (1994). Simple range cameras based on focal error.
  147. (1992). Single lens stereo with a plenoptic camera.
  148. (1997). Space-variant approaches to recovery of depth from defocused images.
  149. (1986). Surface fractal dimension of small metallic particles.
  150. (1988). Surface reconstruction by dynamic integration of focus.
  151. (1996). Telecentric optics for computational vision. In
  152. (1995). Telecentric optics for constant-magnification imaging.
  153. Telecentric optics for focus analysis.
  154. (1999). Texture analysis and segmentation of images using fractals.
  155. Texture analysis of images using principal component analysis.
  156. (1998). Texture Analysis. In
  157. (2003). Texture classification using spectral histograms.
  158. (2005). Texture classification with combined rotation and scale invariant wavelet features. Pattern Recognition,
  159. (1989). Texture description and segmentation through fractal geometry.
  160. (1980). Textured image segmentation,
  161. (1951). The accommodation reflex and its stimulus.
  162. (2006). The Clouds (Translation).
  163. (2002). The Grand Illusion. New Scientist,
  164. (1999). The photometry of under sampled point-spreadfunctions.
  165. The probability that a numerical analysis problem is difficult.
  166. (1941). The Retina. Chicago:
  167. The role of color in high-level vision.
  168. (1971). Time series models for texture synthesis.
  169. (2003). Unsupervised Color Decomposition of Histologically Stained Tissue Samples. In
  170. (2002). Use of image-based 3D modelling techniques in broadcast applications. BBC R&D Department,
  171. (2000). Use of Wiener filtering in the measurement of the two-dimensional modulation transfer function.
  172. (2004). Users' Guide to Mathematics.
  173. (2002). Viewing-angle-enhanced integral 3-D imaging using double display devices with masks.
  174. (1978). Visual discrimination of textures with identical third order statistics.
  175. (1993). Wavelet transform in depth recovery.
  176. (1991). Why least squares and maximum entropy; an axiomatic approach to inverse problems.
