Real-Time fusion of visual images and laser data images for safe navigation in outdoor environments
In recent years, two-dimensional laser range finders mounted on vehicles have become a
fruitful solution for meeting safety and environment-recognition requirements (Keicher &
Seufert, 2000), (Stentz et al., 2002), (DARPA, 2007). They provide real-time, accurate range
measurements over large angular fields at a fixed height above the ground plane, and enable
robots and vehicles to perform a variety of tasks more confidently by fusing images from
visual cameras with range data (Baltzakis et al., 2003). Lasers have traditionally been used in
industrial surveillance applications to detect unexpected objects and persons in indoor
environments. In the last decade, laser range finders have moved from indoor to outdoor rural
and urban applications for 3D imaging (Yokota et al., 2004), vehicle guidance (Barawid et
al., 2007), autonomous navigation (García-Pérez et al., 2008), and object recognition and
classification (Lee & Ehsani, 2008), (Edan & Kondo, 2009), (Katz et al., 2010). Unlike
industrial applications, which deal with simple, repetitive and well-defined objects, camera-laser
systems on board off-road vehicles require advanced real-time techniques and
algorithms to deal with dynamic, unexpected objects. Natural environments are complex
and loosely structured, with great differences among consecutive scenes and scenarios.
Vision systems still present severe drawbacks caused by lighting variability, which depends
on unpredictable weather conditions. Camera-laser object feature fusion and classification
remains a challenge within the paradigm of artificial perception and mobile robotics in
outdoor environments, in the presence of dust, dirt, rain, and extreme temperature and
humidity. Real-time, task-driven perception of relevant objects is a main issue for deciding
subsequent actions in safe unmanned navigation. Compared with industrial automation
systems, the precision required in object location is usually low, as is the speed of most
rural vehicles, which operate in bounded, loosely structured outdoor environments.
To this aim, the current work focuses on the development of algorithms and strategies for
fusing 2D laser data and visual images, to accomplish real-time detection and classification
of unexpected objects close to the vehicle and thus guarantee safe navigation. The class
information can then be integrated into the global navigation architecture, in control modules
such as stop, obstacle avoidance, tracking or mapping.

Section 2 includes a description of the commercial vehicle, the robot-tractor DEDALO, and the
vision systems on board. Section 3 addresses some drawbacks in outdoor perception.
Section 4 analyses the proposed method for fusing laser data and visual images, which
focuses on reducing the visual image area to the region of interest wherein objects are
detected by the laser. Two segmentation methods are described in Section 5 to extract the
reduced area of the visual image (the ROI) resulting from the fusion process. Section 6
presents the colour-based classification results for the largest segmented object in the
region of interest. Some conclusions are outlined in Section 7, and acknowledgements and
references are given in Section 8 and Section 9.

Projects: CICYT-DPI-2006-14497 by the Science and Innovation Ministry; ROBOCITY2030 I
and II: Service Robots-PRICIT-CAM-P-DPI-000176-0505; and SEGVAUTO: Vehicle
Safety-PRICIT-CAM-S2009-DPI-1509 by the Madrid State Government.

Peer reviewed
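The core fusion idea described in this abstract, projecting the 2D laser returns into the camera image so that only a region of interest needs further processing, can be sketched as follows. This is a minimal illustration under an assumed pinhole camera model and a known laser-to-camera extrinsic transform; the function name, the margin parameter and the frame conventions are illustrative assumptions, not the chapter's actual implementation.

```python
import numpy as np

def laser_to_image_roi(ranges, angles, R, t, K, margin=20):
    """Project 2D laser hits into the image and return a padded
    bounding box (u_min, v_min, u_max, v_max) as the region of interest.

    ranges, angles : polar laser measurements (metres, radians)
    R, t           : laser-to-camera rotation (3x3) and translation (3,)
    K              : 3x3 camera intrinsic matrix (pinhole model)
    """
    # Laser points in the scanner frame; the scan plane is z = 0
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    # Transform into the camera frame and keep points in front of it
    cam = pts @ R.T + t
    cam = cam[cam[:, 2] > 0]
    # Pinhole projection to pixel coordinates
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    # ROI = bounding box of the projections, padded by a safety margin
    u_min, v_min = uv.min(axis=0) - margin
    u_max, v_max = uv.max(axis=0) + margin
    return int(u_min), int(v_min), int(u_max), int(v_max)
```

Segmentation and classification then run only inside the cropped region, which is what makes such a pipeline amenable to real-time operation.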
Registration and Recognition in 3D
The simplest computer vision algorithm can tell you what color it sees when you point it at an object, but asking that computer what it is looking at is a much harder problem. Camera and LiDAR (Light Detection And Ranging) sensors generally provide streams of pixel values, and sophisticated algorithms must be engineered to recognize objects or the environment. The computer vision community has expended significant effort on recognizing objects in color images; however, LiDAR sensors, which sense depth values for pixels instead of color, have been studied less. Recently we have seen renewed interest in depth data with the democratization brought by consumer depth cameras. Detecting objects in depth data is more challenging in some ways because of the lack of texture and the increased complexity of processing unordered point sets.

We present three systems that contribute to solving the object recognition problem from the LiDAR perspective: a calibration system, a registration system, and an object recognition system. We propose a novel calibration system that works with both line- and raster-based LiDAR sensors and calibrates them with respect to image cameras; it can be extended to calibrate LiDAR sensors that do not provide intensity information. We demonstrate a novel system that produces registrations between different LiDAR scans by transforming the input point cloud into a Constellation Extended Gaussian Image (CEGI) and then using this CEGI to estimate the rotational alignment of the scans independently. Finally, we present an object recognition method that uses local (Spin Images) and global (CEGI) information to recognize cars in a large urban dataset. We present real-world results from these three systems; compelling experiments show that object recognition systems can gain much information using only 3D geometry.
There are many object recognition and navigation algorithms that work on images; the work we propose in this thesis is complementary to those image-based methods rather than competitive with them. This is an important step along the way to more intelligent robots.
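The Extended Gaussian Image underlying the CEGI representation is, at its core, a histogram of surface-normal directions over a discretized sphere; because such a histogram is invariant to translation, it isolates the rotational part of the alignment problem described above. A minimal sketch of the basic EGI binning step (the function name and the azimuth/elevation binning scheme are illustrative assumptions; the thesis's constellation variant adds structure beyond this):

```python
import numpy as np

def extended_gaussian_image(normals, n_bins=16):
    """Bin unit surface normals into an azimuth/elevation histogram.

    Each normal votes for one cell on a discretized sphere; the
    resulting histogram depends on the scan's orientation but not its
    position, which is what makes it a cue for rotational alignment.
    """
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    az = np.arctan2(normals[:, 1], normals[:, 0])        # [-pi, pi]
    el = np.arcsin(np.clip(normals[:, 2], -1.0, 1.0))    # [-pi/2, pi/2]
    i = np.clip(((az + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)
    j = np.clip(((el + np.pi / 2) / np.pi * n_bins).astype(int), 0, n_bins - 1)
    hist = np.zeros((n_bins, n_bins))
    np.add.at(hist, (i, j), 1.0)                         # accumulate votes
    return hist / len(normals)
```

Comparing the histograms of two scans under candidate rotations of the binning then scores rotational hypotheses without any correspondence search.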
Automatic alignment of a camera with a line scan lidar system
Abstract — We propose a new method for extrinsic calibration of a line-scan LIDAR with a perspective projection camera. Our method is a closed-form, minimal solution to the problem. The solution is a symbolic template found via variable elimination and the multi-polynomial Macaulay resultant. It does not require initialization, and can be used in an automatic calibration setting when paired with RANSAC and least-squares refinement. We show the efficacy of our approach through a set of simulations and a real calibration.
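The RANSAC-plus-least-squares wrapper mentioned in the abstract follows the standard pattern: fit the minimal solver to random minimal samples, score each hypothesis by its inlier count, then refine on the best consensus set. A generic sketch, with a 2D line model standing in for the paper's LIDAR-camera minimal solver (all names and thresholds are illustrative):

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.05, seed=0):
    """Robustly fit y = a*x + b: minimal samples plus inlier scoring,
    followed by least-squares refinement on the consensus set."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iters):
        # Minimal sample: two points define a candidate line
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Score the hypothesis by counting inliers
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < threshold
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refinement on the best consensus set
    x, y = points[best_inliers, 0], points[best_inliers, 1]
    A = np.stack([x, np.ones_like(x)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b, best_inliers
```

In the calibration setting, the two-point line fit is replaced by the closed-form minimal solution, the residual by a reprojection or plane-distance error, and the final least squares by nonlinear refinement over the extrinsic parameters.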