Reflectance Intensity Assisted Automatic and Accurate Extrinsic Calibration of 3D LiDAR and Panoramic Camera Using a Printed Chessboard
This paper presents a novel method for fully automatic and convenient
extrinsic calibration of a 3D LiDAR and a panoramic camera using an ordinary
printed chessboard. The proposed method is based on estimating the 3D corners of
the chessboard from the sparse point cloud generated by one frame scan of the
LiDAR. To estimate the corners, we formulate a full-scale model of the
chessboard and fit it to the segmented 3D points of the chessboard. The model
is fitted by optimizing a cost function constrained by the correlation
between the laser reflectance intensity and the colors of the chessboard's
patterns. Powell's method is introduced to resolve the discontinuity problem
in the optimization. The corners of the fitted model are taken as the 3D
corners of the chessboard. Once the corners of the chessboard in the 3D point
cloud are estimated, the extrinsic calibration of the two sensors is converted
to a 3D-2D matching problem. The corresponding 3D-2D points are used to
calculate the absolute pose of the two sensors with Unified Perspective-n-Point
(UPnP). Further, the calculated parameters are regarded as initial values and
are refined using the Levenberg-Marquardt method. The performance of the
proposed corner detection method from the 3D point cloud is evaluated using
simulations. The results of experiments, conducted on a Velodyne HDL-32e LiDAR
and a Ladybug3 camera under the proposed re-projection error metric,
qualitatively and quantitatively demonstrate the accuracy and stability of the
final extrinsic calibration parameters.
Comment: 20 pages, submitted to the journal Remote Sensing.
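As a rough illustration of the final 3D-2D refinement step described above, the sketch below refines an initial LiDAR-to-camera pose by Levenberg-Marquardt minimization of an angular reprojection error against the panoramic camera's bearing vectors. The function names, the SciPy-based setup, and the angular error model are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def residuals(params, pts_lidar, bearings):
    """Angular error between LiDAR corners mapped into the camera frame
    (rotation vector + translation) and the observed bearing vectors."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    cam_pts = pts_lidar @ R.T + t  # LiDAR frame -> camera frame
    cam_dirs = cam_pts / np.linalg.norm(cam_pts, axis=1, keepdims=True)
    return np.arccos(np.clip(np.sum(cam_dirs * bearings, axis=1), -1.0, 1.0))


def refine_extrinsics(init_rvec, init_t, pts_lidar, bearings):
    """LM refinement starting from the UPnP initial estimate."""
    x0 = np.hstack([init_rvec, init_t])
    sol = least_squares(residuals, x0, args=(pts_lidar, bearings), method="lm")
    return sol.x[:3], sol.x[3:]
```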
Calibration and Sensitivity Analysis of a Stereo Vision-Based Driver Assistance System
At http://intechweb.org/, under the "Books" tab, search for the title "Stereo Vision" and see Chapter 1.
Ca2Lib: Simple and Accurate LiDAR-RGB Calibration using Small Common Markers
In many fields of robotics, knowing the relative position and orientation
between two sensors is a mandatory precondition to operate with multiple
sensing modalities. In this context, the pair LiDAR-RGB cameras offer
complementary features: LiDARs yield sparse high quality range measurements,
while RGB cameras provide a dense color measurement of the environment.
Existing techniques often rely either on complex calibration targets that are
expensive to obtain, or extracted virtual correspondences that can hinder the
estimate's accuracy. In this paper we address the problem of LiDAR-RGB
calibration using typical calibration patterns (i.e., an A3 chessboard) with
minimal human intervention. Our approach exploits the planarity of the target
to find correspondences between the sensors' measurements, leading to features
that are robust to LiDAR noise.
Moreover, we estimate a solution by solving a joint non-linear optimization
problem. We validated our approach through quantitative and comparative
experiments with other state-of-the-art approaches. Our results show that our
simple scheme performs on par with or better than other approaches using
complex calibration targets. Finally, we release an open-source C++
implementation at \url{https://github.com/srrg-sapienza/ca2lib}.
Comment: 7 pages, 10 figures.
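The plane-based correspondence search in this abstract rests on extracting the target plane from the segmented LiDAR points. Below is a minimal least-squares plane fit via SVD; the function name is illustrative and is not taken from the ca2lib code base.

```python
import numpy as np


def fit_plane(points):
    """Least-squares plane fit: returns (unit normal n, offset d)
    such that n . p = d for points p on the plane."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value spans the
    # direction of least variance, i.e. the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, float(normal @ centroid)
```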
Assessment of registration methods for thermal infrared and visible images for diabetic foot monitoring
This work presents a review of four registration methods for thermal infrared and visible images captured by a camera-based prototype for the remote monitoring of the diabetic foot. The prototype uses low-cost, off-the-shelf sensors in the thermal infrared and visible spectra. Four methods (Geometric Optical Translation, Homography, Iterative Closest Point, and Affine transform with Gradient Descent) have been implemented and analyzed for the registration of images obtained from both sensors. The performance of all four algorithms was evaluated using Simultaneous Truth and Performance Level Estimation (STAPLE) together with several overlap benchmarks such as the Dice coefficient and the Jaccard index. The performance of the four methods was analyzed with the subject at a fixed focal plane and also in the vicinity of this plane. All four registration algorithms provide suitable results both at the focal plane and within a 50 mm margin around it. The obtained Dice coefficients are greater than 0.950 in all scenarios, well within the margins required for the application at hand. A discussion of the results at different distances is presented, along with an evaluation of the methods' robustness under changing conditions. This research was funded by the IACTEC Technological Training program, grant number TF INNOVA 2016–2021.
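For reference, the two overlap metrics named in the abstract are straightforward to compute from binary segmentation masks; the sketch below, with illustrative names, shows both.

```python
import numpy as np


def dice(a, b):
    """Dice coefficient: 2 * |intersection| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def jaccard(a, b):
    """Jaccard index: intersection over union of the two masks."""
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()
```

Under this definition, a Dice value of 0.950 means the intersection of the registered and reference masks covers 95% of their mean area.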
Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera
Time-of-flight cameras based on Photonic Mixer Device (PMD) technology can measure distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via a joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on a photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. Adding a digital camera to the calibration process overcomes the range camera's limitations of small field of view and low pixel resolution. The tests are performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of range error and 72% in the RMS of coordinate residual, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvements in the RMS of range error and coordinate residual, respectively, over those obtained by integrated calibration of the single PMD camera.
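The abstract does not spell out the range error model, but a joint bundle adjustment of this kind stacks collinearity (reprojection) residuals with range residuals. The sketch below assumes, purely for illustration, a low-order polynomial correction in the measured range.

```python
import numpy as np


def joint_residuals(proj_pred, proj_obs, range_pred, range_obs, coeffs):
    """Stack reprojection residuals with range residuals after applying
    a polynomial systematic-error correction e(r) = c0 + c1*r + c2*r**2."""
    reproj = (proj_pred - proj_obs).ravel()  # collinearity term
    corrected = range_obs - np.polyval(coeffs[::-1], range_obs)
    return np.concatenate([reproj, corrected - range_pred])
```

Feeding such a residual to a non-linear least-squares solver (e.g. scipy.optimize.least_squares) estimates the orientation and range error parameters simultaneously.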
Individual differences in face-looking behavior generalize from the lab to the world
Recent laboratory studies have found large, stable individual differences in the location people first fixate when identifying faces, ranging from the brows to the mouth. Importantly, this variation is strongly associated with differences in fixation-specific identification performance, such that individuals' recognition ability is maximized when looking at their preferred location (Mehoudar, Arizpe, Baker, & Yovel, 2014; Peterson & Eckstein, 2013). This finding suggests that face representations are retinotopic and that individuals enact gaze strategies that optimize identification, yet the extent to which this behavior reflects real-world gaze behavior is unknown. Here, we used mobile eye trackers to test whether individual differences in face gaze generalize from the lab to real-world vision. In-lab fixations were measured with a speeded face identification task, while real-world behavior was measured as subjects freely walked around the Massachusetts Institute of Technology campus. We found a strong correlation between the patterns of individual differences in face gaze in the lab and real-world settings. Our findings support the hypothesis that individuals optimize real-world face identification by consistently fixating the same location and thus strongly constraining the space of retinotopic input. The methods developed for this study entailed collecting a large set of high-definition, wide field-of-view natural videos from head-mounted cameras together with the viewer's fixation position, allowing us to characterize the real-world retinotopic images subjects actually experienced. These images enable us to ask how vision is optimized not just for the statistics of the "natural images" found in web databases, but for the truly natural, retinotopic images that have landed on actual human retinae during real-world experience.