Understanding and ameliorating non-linear phase and amplitude responses in AMCW Lidar
Amplitude modulated continuous wave (AMCW) lidar systems commonly suffer from non-linear phase and amplitude responses due to a number of known factors such as aliasing and multipath interference. In order to produce useful range and intensity information it is necessary to remove these perturbations from the measurements. We review the known causes of non-linearity, namely aliasing, temporal variation in correlation waveform shape and mixed pixels/multipath interference. We also introduce other sources of non-linearity, including crosstalk, modulation waveform envelope decay and non-circularly symmetric noise statistics, that have been ignored in the literature. An experimental study is conducted to evaluate techniques for mitigation of non-linearity, and it is found that harmonic cancellation provides a significant improvement in phase and amplitude linearity.
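As a minimal numerical sketch of the aliasing and harmonic-cancellation point above (not the paper's experimental method; all waveform values are invented): with four equally spaced samples of a correlation waveform containing a third harmonic, the harmonic aliases onto the fundamental DFT bin and biases the phase estimate, whereas eight samples cancel it.

```python
import numpy as np

def phase_amplitude(samples):
    """Phase and amplitude from N equally spaced correlation samples,
    taken from the fundamental DFT bin."""
    n = len(samples)
    theta = 2 * np.pi * np.arange(n) / n
    bin1 = np.sum(samples * np.exp(1j * theta)) / n
    return np.angle(bin1), 2 * np.abs(bin1)

def correlation(theta, phi, a3=0.2):
    # Fundamental plus a third harmonic (non-sinusoidal modulation)
    return np.cos(theta - phi) + a3 * np.cos(3 * (theta - phi))

phi_true = 0.7  # hypothetical true phase [rad]
for n in (4, 8):
    theta = 2 * np.pi * np.arange(n) / n
    phase, _ = phase_amplitude(correlation(theta, phi_true))
    print(n, abs(phase - phi_true))  # N=4: biased; N=8: ~0
```

With N = 4 the third harmonic folds onto the fundamental (3 ≡ −1 mod 4), producing a phase-dependent bias; with N = 8 the folded frequency lands in an empty bin, which is the essence of harmonic cancellation.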
Resolving Multi-path Interference in Time-of-Flight Imaging via Modulation Frequency Diversity and Sparse Regularization
Time-of-flight (ToF) cameras calculate depth maps by reconstructing phase shifts of amplitude-modulated signals. For broad illumination or transparent objects, reflections from multiple scene points can illuminate a given pixel, giving rise to an erroneous depth map. We report here a sparsity-regularized solution that separates K interfering components using multiple modulation frequency measurements. The method maps ToF imaging to the general framework of spectral estimation theory and has applications in improving depth profiles and exploiting multiple scattering. (11 pages, 4 figures; appeared with minor changes in Optics Letters.)
A New Vehicle Localization Scheme Based on Combined Optical Camera Communication and Photogrammetry
The demand for autonomous vehicles is increasing gradually owing to their enormous potential benefits. However, several challenges, such as vehicle localization, are involved in the development of autonomous vehicles. A simple and secure algorithm for vehicle positioning is proposed herein without massively modifying the existing transportation infrastructure. For vehicle localization, vehicles on the road are classified into two categories: host vehicles (HVs), which estimate other vehicles' positions, and forwarding vehicles (FVs), which move in front of the HVs. An FV transmits modulated data from its tail (or back) light, and the camera of the HV receives that signal using optical camera communication (OCC). In addition, streetlight (SL) data are considered to ensure the position accuracy of the HV. Determining the HV position minimizes the relative position variation between the HV and FV. Using photogrammetry, the distance between the FV or SL and the camera of the HV is calculated by measuring the occupied image area on the image sensor. By comparing the change in distance between the HV and SLs with the change in distance between the HV and FV, the positions of FVs are determined. The performance of the proposed technique is analyzed, and the results indicate a significant improvement in positioning performance. Experimental distance measurements validated the feasibility of the proposed scheme.
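The distance-from-image-area relation that the photogrammetry step relies on can be sketched under a simple pinhole model (all numbers are hypothetical, not from the paper): a planar target of known area projects to an image area scaling with (f/Z)², so Z follows from the square root of the area ratio.

```python
def distance_from_area(focal_px, real_area_m2, image_area_px2):
    """Pinhole model: projected area scales as (f/Z)^2, so
    Z = f * sqrt(A_real / A_image). Illustrative only; ignores
    tilt of the target relative to the image plane."""
    return focal_px * (real_area_m2 / image_area_px2) ** 0.5

# Hypothetical tail light: 0.02 m^2 occupying 500 px^2 with f = 1000 px
print(distance_from_area(1000.0, 0.02, 500.0))  # → ~6.32 m
```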
Calibration Method for Texel Images Created from Fused Lidar and Digital Camera Images
The fusion of imaging lidar information and digital imagery results in 2.5-dimensional surfaces covered with texture information, called texel images. These data sets, when taken from different viewpoints, can be combined to create three-dimensional (3-D) images of buildings, vehicles, or other objects. This paper presents a procedure for calibration, error correction, and fusing of flash lidar and digital camera information from a single sensor configuration to create accurate texel images. A brief description of a prototype sensor is given, along with a calibration technique used with the sensor, which is applicable to other flash lidar/digital image sensor systems. The method combines systematic error correction of the flash lidar data, correction for lens distortion of the digital camera and flash lidar images, and fusion of the lidar to the camera data in a single process. The result is a texel image acquired directly from the sensor. Examples of the resulting images, with improvements from the proposed algorithm, are presented. Results with the prototype sensor show very good match between 3-D points and the digital image (< 2.8 image pixels), with a 3-D object measurement error of < 0.5%, compared to a noncalibrated error of ∼3%.
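The core fusion step can be sketched as projecting a lidar point into the camera image with a pinhole model and one radial-distortion term (a generic sketch, not the paper's calibration procedure; the extrinsics and intrinsics below are made-up values):

```python
import numpy as np

def project_point(p_lidar, R, t, fx, fy, cx, cy, k1=0.0):
    """Project a 3-D lidar point into camera pixel coordinates:
    rigid transform, perspective divide, one radial-distortion
    term, then the intrinsic scaling. All parameters illustrative."""
    p_cam = R @ p_lidar + t                       # lidar -> camera frame
    x, y = p_cam[0] / p_cam[2], p_cam[1] / p_cam[2]
    r2 = x * x + y * y
    x, y = x * (1 + k1 * r2), y * (1 + k1 * r2)   # radial distortion
    return fx * x + cx, fy * y + cy

u, v = project_point(np.array([0.1, -0.2, 5.0]), np.eye(3),
                     np.zeros(3), 800.0, 800.0, 320.0, 240.0)
print(u, v)  # → approximately (336.0, 208.0)
```

Assigning the lidar range at (u, v) to the corresponding camera texture sample is what produces the texel; the paper's contribution is calibrating R, t, the intrinsics, and the lidar's systematic errors jointly.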
Extending AMCW lidar depth-of-field using a coded aperture
By augmenting a high resolution full-field Amplitude Modulated Continuous Wave lidar system with a coded aperture, we show that depth-of-field can be extended using explicit, albeit blurred, range data to determine PSF scale. Because complex domain range-images contain explicit range information, the aperture design is unconstrained by the necessity for range determination by depth-from-defocus. The coded aperture design is shown to improve restoration quality over a circular aperture. A proof-of-concept algorithm using dynamic PSF determination and spatially variant Landweber iterations is developed and, using an empirically sampled point spread function, is shown to work in cases without serious multipath interference or high phase complexity.
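The Landweber iteration named above can be sketched in its basic (spatially invariant) 1-D form, which is not the paper's spatially variant version; the signal and PSF below are invented:

```python
import numpy as np

def landweber(y, psf, iters=200, tau=0.5):
    """Landweber deconvolution: x <- x + tau * A^T (y - A x),
    where A is convolution by the PSF. For a symmetric PSF,
    A^T is the same convolution. tau must satisfy tau < 2/||A||^2."""
    blur = lambda v: np.convolve(v, psf, mode="same")
    x = np.zeros_like(y)
    for _ in range(iters):
        x = x + tau * blur(y - blur(x))
    return x

# Two point returns blurred by a small symmetric PSF
truth = np.zeros(64); truth[20] = 1.0; truth[40] = 0.5
psf = np.array([0.25, 0.5, 0.25])
y = np.convolve(truth, psf, mode="same")
x = landweber(y, psf)
print(np.argmax(x))  # position of the strongest restored return
```

The iteration converges to the least-squares solution; early stopping acts as regularization, which is why it tolerates the blurred range data used to select the PSF scale.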
Methods for linear radial motion estimation in time-of-flight range imaging
Motion artefacts in time-of-flight range imaging are treated as a feature to measure. Methods for measuring linear radial velocity from range imaging cameras are developed and tested. With the measurement of velocity, the range to the position of the target object at the start of the data acquisition period is computed, effectively correcting the motion error. A new phase based pseudo-quadrature method designed for low speed measurement measures radial velocity up to ±1.8 m/s with RMSE 0.045 m/s and standard deviation of 0.09-0.33 m/s, and a new high-speed Doppler extraction method measures radial velocity up to ±40 m/s with standard deviation better than 1 m/s and RMSE of 3.5 m/s.
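The basic phase-to-velocity relation underlying such methods (a generic sketch, not the pseudo-quadrature or Doppler method itself; the target speed and timings are invented) converts the phase change between two measurements into a radial velocity:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def radial_velocity(phi1, phi2, f_mod, dt):
    """Radial velocity from the phase change between two AMCW range
    measurements taken dt seconds apart: v = c*dphi/(4*pi*f_mod*dt).
    Assumes the phase change stays within one wrap."""
    dphi = (phi2 - phi1 + math.pi) % (2 * math.pi) - math.pi
    return C * dphi / (4 * math.pi * f_mod * dt)

# Hypothetical target receding at 1.5 m/s, 30 MHz modulation, 10 ms apart
f_mod, dt, v_true = 30e6, 0.01, 1.5
dphi = 4 * math.pi * f_mod * v_true * dt / C
print(radial_velocity(0.2, 0.2 + dphi, f_mod, dt))  # → ~1.5
```

The wrap at ±π is what caps the unambiguous speed range, motivating the separate low-speed and high-speed methods in the abstract.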
Time-to-digital converters and histogram builders in SPAD arrays for pulsed-LiDAR
Light Detection and Ranging (LiDAR) is a 3D imaging technique widely used in many applications such as augmented reality, automotive, machine vision, and spacecraft navigation and landing. Pulsed LiDAR is one of the most widespread LiDAR techniques and relies on the measurement of the round-trip travel time of an optical pulse back-scattered from a distant target. Besides the light source and the detector, Time-to-Digital Converters (TDCs) are fundamental components in pulsed-LiDAR systems, since they measure the back-scattered photon arrival times, and their performance directly impacts LiDAR system requirements (i.e., range, precision, and measurement rate). In this work, we present a review of recent TDC architectures suitable for integration in SPAD-based CMOS arrays, along with a review of data-processing solutions for deriving the TOF information. Furthermore, the main TDC parameters and processing techniques are described and analyzed in light of pulsed-LiDAR requirements.
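The histogram-building and TOF-extraction step that such processing pipelines perform can be sketched as follows (a toy model with invented timestamps and bin widths, not a circuit-level description of any reviewed architecture):

```python
import random

def tof_histogram(timestamps_ps, bin_ps, n_bins):
    """Accumulate TDC timestamps (picoseconds) into a TOF histogram."""
    hist = [0] * n_bins
    for t in timestamps_ps:
        b = int(t // bin_ps)
        if 0 <= b < n_bins:
            hist[b] += 1
    return hist

def range_from_histogram(hist, bin_ps):
    """Range from the histogram peak bin: d = c * t_oneway, t = t_rt/2."""
    c_m_per_ps = 299_792_458.0 * 1e-12
    peak = max(range(len(hist)), key=hist.__getitem__)
    return c_m_per_ps * (peak + 0.5) * bin_ps / 2

# Hypothetical: 300 signal photons near 33.36 ns (target at ~5 m)
# on top of 200 uniformly distributed background photons
random.seed(0)
stamps = [random.uniform(0, 100_000) for _ in range(200)]
stamps += [random.gauss(33_356, 100) for _ in range(300)]
hist = tof_histogram(stamps, bin_ps=500, n_bins=200)
print(round(range_from_histogram(hist, 500), 2))  # ≈ 5 m
```

Bin width here plays the role of TDC resolution: halving it sharpens the peak (better precision) but doubles the memory and readout load, which is exactly the trade-off the reviewed architectures address.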