Separating true range measurements from multi-path and scattering interference in commercial range cameras
Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but with limited success because the interference is highly scene dependent. An alternative approach, based on separating the strongest and weaker sources of light returned to each pixel prior to range decoding, is more successful, but has only been demonstrated on custom-built range cameras and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to two unmodified off-the-shelf range cameras, the Mesa Imaging SR-4000 and the Canesta Inc. XZ-422 Demonstrator. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
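To make the idea of per-pixel component separation concrete, here is a hypothetical sketch that fits a two-return model to raw measurements at several modulation frequencies by nonlinear least squares; the measurement model, the frequencies, and every name below are illustrative assumptions, not the algorithm used in the paper:

```python
# Hypothetical two-return separation for one pixel: fit amplitudes and
# distances of two superimposed returns to complex-domain measurements
# taken at several modulation frequencies. Not the paper's algorithm.
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

def two_return_model(params, freqs):
    """Complex phasor of two superimposed returns at each modulation frequency."""
    a1, d1, a2, d2 = params
    k = 4 * np.pi * freqs / C  # phase per metre of range at each frequency
    return a1 * np.exp(1j * k * d1) + a2 * np.exp(1j * k * d2)

def residuals(params, freqs, measured):
    r = two_return_model(params, freqs) - measured
    return np.concatenate([r.real, r.imag])  # least_squares wants real residuals

def separate_returns(measured, freqs):
    """Return (distance, amplitude) of the strongest component in one pixel."""
    a0 = np.abs(measured).mean()
    fit = least_squares(residuals, x0=[a0, 2.0, 0.1 * a0, 5.0],
                        args=(freqs, measured),
                        bounds=([0, 0, 0, 0], [np.inf, 50, np.inf, 50]))
    a1, d1, a2, d2 = fit.x
    return (d1, a1) if a1 >= a2 else (d2, a2)
```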
Resolving depth measurement ambiguity with commercially available range imaging cameras
Time-of-flight range imaging is typically performed with the amplitude-modulated continuous wave (AMCW) method. This involves illuminating a scene with amplitude-modulated light. Reflected light from the scene is received by the sensor with the range to the scene encoded as a phase delay of the modulation envelope. Due to the cyclic nature of phase, an ambiguity in the measured range occurs every half wavelength in distance, thereby limiting the maximum usable range of the camera.
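As a concrete illustration of the relation just described (a standard textbook result, not specific to this paper), the range decoded from a measured phase and the resulting ambiguity interval are:

```python
# Standard AMCW relations: range is proportional to the measured phase of
# the modulation envelope, and wraps every half modulation wavelength.
# The 30 MHz example frequency is illustrative only.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def phase_to_range(phase_rad, f_mod_hz):
    """Decode range (m) from the phase delay (rad) at modulation frequency f_mod."""
    return C * phase_rad / (4 * np.pi * f_mod_hz)

def ambiguity_distance(f_mod_hz):
    """Maximum unambiguous range: half the modulation wavelength."""
    return C / (2 * f_mod_hz)

print(ambiguity_distance(30e6))  # ~5.0 m: ranges beyond this wrap around
```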
This paper proposes a procedure to resolve depth ambiguity using software post-processing. First, the range data is processed to segment the scene into separate objects. The average intensity of each object can then be used to determine which pixels are beyond the non-ambiguous range. The results demonstrate that depth ambiguity can be resolved for various scenes using only the available depth and intensity information. The proposed method reduces sensitivity to objects with very high and very low reflectance, normally a key problem with basic threshold approaches.
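A minimal sketch of this segment-then-check idea, assuming a toy connected-component segmentation and an inverse-square intensity falloff; the thresholds and the segmentation criterion are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch: segment the range image into objects, then add one
# ambiguity interval to any object whose mean intensity is far lower than
# its wrapped range would predict (returned light falls off ~1/d^2).
import numpy as np
from scipy import ndimage

def unwrap_by_intensity(range_img, intensity_img, d_amb, k=1.0, ratio=0.25):
    # Toy segmentation: label smooth regions separated by range discontinuities.
    smooth = np.abs(ndimage.laplace(range_img)) < 0.05
    labels, n_objects = ndimage.label(smooth)
    out = range_img.copy()
    for obj in range(1, n_objects + 1):
        mask = labels == obj
        d = range_img[mask].mean()
        i = intensity_img[mask].mean()
        expected = k / max(d, 1e-6) ** 2   # intensity predicted by the wrapped range
        if i < ratio * expected:           # too dark: object lies one interval further
            out[mask] += d_amb
    return out
```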
This approach is very flexible as it can be used with any range imaging camera. Furthermore, capture time is not extended, keeping artifacts caused by moving objects to a minimum. This makes it suitable for applications such as robot vision where the camera may be moving during captures.
The key limitation of the method is its inability to distinguish between two overlapping objects that are separated by a distance of exactly one non-ambiguous range. Overall the reliability of this method is higher than the basic threshold approach, but not as high as the multiple-frequency method of resolving ambiguity.
Analysis of ICP variants for the registration of partially overlapping time-of-flight range images
The iterative closest point (ICP) algorithm is one of the most commonly used methods for registering partially overlapping range images. Nevertheless, this algorithm was not originally designed for this task, and many variants have been proposed in an effort to improve its proficiency. The relatively new full-field amplitude-modulated time-of-flight range imaging cameras present further complications to registration in the form of measurement errors due to mixed and scattered light. This paper investigates the effectiveness of the most common ICP variants applied to range image data acquired from full-field range imaging cameras. The original ICP algorithm combined with boundary rejection performed the same as or better than the majority of variants tested; in fact, many of the variants actually degraded the registration alignment.
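For context, a minimal point-to-point ICP with the boundary-rejection step mentioned above might look like the sketch below; the rejection mask, fixed iteration count, and lack of a convergence test are simplifying assumptions, not a re-implementation of any variant from the paper:

```python
# Minimal point-to-point ICP with boundary rejection: correspondences that
# land on boundary points of the target cloud are discarded before each
# rigid-alignment step (Kabsch/SVD).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with R @ src + t ≈ dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # fix an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, dst_boundary_mask, iters=30):
    tree = cKDTree(dst)
    aligned = src.copy()
    for _ in range(iters):
        _, idx = tree.query(aligned)
        keep = ~dst_boundary_mask[idx]          # boundary rejection
        R, t = best_rigid_transform(aligned[keep], dst[idx[keep]])
        aligned = aligned @ R.T + t
    return aligned
```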
Performance of ePix10K, a high dynamic range, gain auto-ranging pixel detector for FELs
ePix10K is a hybrid pixel detector developed at SLAC for demanding free-electron laser (FEL) applications, providing an ultrahigh dynamic range (245 eV to 88 MeV) through gain auto-ranging. It has three gain modes (high, medium and low) and two auto-ranging modes (high-to-low and medium-to-low). The first ePix10K cameras are built around modules consisting of a sensor flip-chip bonded to four ASICs, resulting in 352 × 384 pixels of 100 μm × 100 μm each. We present results from extensive testing of three ePix10K cameras with FEL beams at LCLS, resulting in a measured noise floor of 245 eV rms, or 67 e⁻ equivalent noise charge (ENC), and a range of 11000 photons at 8 keV. We demonstrate the linearity of the response in various gain combinations: fixed high, fixed medium, fixed low, auto-ranging high-to-low, and auto-ranging medium-to-low, while maintaining low noise (well within the counting statistics), very low cross-talk, perfect saturation response at fluxes up to 900 times the maximum range, and acquisition rates of up to 480 Hz. Finally, we present examples of high dynamic range x-ray imaging spanning more than 4 orders of magnitude (from a single photon to 11000 photons/pixel/pulse at 8 keV). Achieving this high performance with only one auto-ranging switch leads to relatively simple calibration and reconstruction procedures. The low noise levels allow usage with long integration times at non-FEL sources. ePix10K cameras leverage the advantages of hybrid pixel detectors with high production yield and good availability, minimize development complexity by sharing hardware, software and DAQ development with all other versions of ePix cameras, and provide an upgrade path to 5 kHz, 25 kHz and 100 kHz in three steps over the next few years, matching the LCLS-II requirements.
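To make the auto-ranging reconstruction concrete, here is a sketch under assumed conventions: suppose each raw word carries a 14-bit ADC value plus a flag bit marking that the pixel switched to low gain. The bit layout and calibration names are assumptions for illustration, not the actual ePix10K data format:

```python
# Illustrative gain auto-ranging reconstruction. GAIN_BIT, the 14-bit ADC
# width, and the calibration inputs are assumed conventions, not the real
# ePix10K format; per-gain pedestals and gains come from a calibration pass.
import numpy as np

GAIN_BIT = 1 << 14          # assumed flag: set when the pixel switched to low gain
ADC_MASK = GAIN_BIT - 1     # low 14 bits hold the digitized value

def reconstruct_kev(raw, ped_hi, ped_lo, g_hi, g_lo):
    """Convert raw words to energy (keV) with per-pixel, per-gain calibration."""
    low = (raw & GAIN_BIT) != 0
    adc = (raw & ADC_MASK).astype(np.float64)
    return np.where(low, (adc - ped_lo) * g_lo, (adc - ped_hi) * g_hi)
```

A single gain switch means only two calibration branches per pixel, which is why the paper notes that one auto-ranging switch keeps calibration and reconstruction relatively simple.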
Resolving Scale Ambiguity Via XSlit Aspect Ratio Analysis
In perspective cameras, images of a frontal-parallel 3D object preserve its aspect ratio invariant to its depth. Such an invariance is useful in photography but is unique to perspective projection. In this paper, we show that alternative non-perspective cameras such as the crossed-slit or XSlit cameras exhibit a different depth-dependent aspect ratio (DDAR) property that can be used for 3D recovery. We first conduct a comprehensive analysis to characterize DDAR, infer object depth from its AR, and model recoverable depth range, sensitivity, and error. We show that repeated shape patterns in real Manhattan World scenes can be used for 3D reconstruction using a single XSlit image. We also extend our analysis to model slopes of lines. Specifically, parallel 3D lines exhibit depth-dependent slopes (DDS) on their images, which can also be used to infer their depths. We validate our analyses using real XSlit cameras, XSlit panoramas, and catadioptric mirrors. Experiments show that DDAR and DDS provide important depth cues and enable effective single-image scene reconstruction.
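As a rough numerical illustration of the DDAR cue, under a deliberately simplified model in which the two slits act as separate centres of projection at different depths (this is not the paper's derivation), depth follows from the ratio of observed to true aspect ratio:

```python
# Simplified DDAR illustration: horizontal and vertical magnifications are
# assumed to fall off from two slits at different depths, so the observed
# aspect ratio depends on object depth and can be inverted to recover it.
def observed_ar(true_ar, depth, s1=0.0, s2=0.3):
    m_h = 1.0 / (depth - s1)   # magnification governed by the first slit
    m_v = 1.0 / (depth - s2)   # magnification governed by the second slit
    return true_ar * m_h / m_v

def depth_from_ar(true_ar, obs_ar, s1=0.0, s2=0.3):
    r = obs_ar / true_ar       # r = (depth - s2) / (depth - s1)
    return (s2 - r * s1) / (1.0 - r)

print(depth_from_ar(1.0, observed_ar(1.0, 2.0)))  # recovers 2.0
```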
Advantages of 3D time-of-flight range imaging cameras in machine vision applications
Machine vision using image processing of traditional intensity images is in widespread use. In many situations environmental conditions or object colours or shades cannot be controlled, leading to difficulties in correctly processing the images and requiring complicated processing algorithms. Many of these complications can be avoided by using range image data instead of intensity data, because range image data represent the physical properties of object location and shape, practically independently of object colour or shading. The advantages of range image processing are presented, along with three example applications that show how robust machine vision results can be obtained with relatively simple range image processing in real-time applications.
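As a toy illustration of this point, isolating an object in a range image can reduce to a simple depth threshold, independent of colour or shading; the limits below are illustrative:

```python
# Toy range-image segmentation: keep pixels within a depth band. Unlike an
# intensity threshold, this is unaffected by object colour or lighting.
import numpy as np

def segment_by_depth(range_img, near=0.5, far=1.5):
    """Boolean mask of pixels whose measured range lies in [near, far] metres."""
    return (range_img >= near) & (range_img <= far)
```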
