LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off limits to an
automated vehicle for protection of people and high-valued assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in retaining
true detections while mitigating false positives.
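As an illustration of the validation idea described above (not the authors' implementation), the following sketch gates LiDAR beacon candidates by their distance to camera detections mapped into the LiDAR frame; the linear `project_to_lidar` stands in for the paper's neural network-learned projection, and all weights and thresholds are invented.

```python
# Hypothetical sketch of the camera-validation gate; names, weights and
# thresholds are invented, and the learned projection is reduced to a
# linear map purely for illustration.
import numpy as np

def project_to_lidar(bbox_centers_px, W, b):
    """Map camera bounding-box centers (u, v) into LiDAR ground-plane
    (x, y) coordinates; stand-in for the learned camera-to-LiDAR
    projection network."""
    return bbox_centers_px @ W + b

def validate_detections(lidar_hits, camera_boxes_px, W, b, gate_m=0.5):
    """Keep only LiDAR beacon candidates that land within `gate_m`
    meters of a projected camera detection; unmatched hits are treated
    as false positives (e.g., reflective safety vests)."""
    projected = project_to_lidar(camera_boxes_px, W, b)
    kept = [hit for hit in lidar_hits
            if np.linalg.norm(projected - hit, axis=1).min() < gate_m]
    return np.array(kept)

# Toy example with made-up calibration.
W = np.array([[0.01, 0.0], [0.0, 0.01]])
b = np.array([0.0, 2.0])
lidar_hits = np.array([[1.0, 3.0], [4.0, 7.0]])    # candidate beacons (m)
camera_boxes = np.array([[100.0, 100.0]])          # detected cones (px)
print(validate_detections(lidar_hits, camera_boxes, W, b))  # keeps [1, 3]
```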
A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles
This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driving force behind future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review looks at technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and infrared (IR) sensors, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), current solutions, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy, hence further substantial effort is needed.
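The review's suggestion to combine technologies maps naturally onto a late fusion of per-sensor detections. A minimal, hypothetical sketch (sensor names and reliability weights are invented for illustration, not taken from the review) weights each sensor's confidence by how well that modality is known to hold up under the current conditions:

```python
# Hypothetical late-fusion rule: weight each sensor's detection
# confidence by an assumed per-condition reliability. All numbers are
# invented for illustration.
from typing import Dict

RELIABILITY: Dict[str, Dict[str, float]] = {
    "clear": {"lidar": 0.9, "radar": 0.8, "camera": 0.9, "ultrasonic": 0.6},
    "fog":   {"lidar": 0.5, "radar": 0.9, "camera": 0.3, "ultrasonic": 0.6},
    "night": {"lidar": 0.9, "radar": 0.8, "camera": 0.4, "ultrasonic": 0.6},
}

def fused_confidence(scores: Dict[str, float], condition: str) -> float:
    """Reliability-weighted average of per-sensor obstacle confidences."""
    weights = RELIABILITY[condition]
    total = sum(weights[s] * c for s, c in scores.items())
    norm = sum(weights[s] for s in scores)
    return total / norm

# In fog the radar's vote dominates the camera's weak detection.
print(fused_confidence({"lidar": 0.4, "radar": 0.9, "camera": 0.2}, "fog"))
```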
Survey on LiDAR Perception in Adverse Weather Conditions
Autonomous vehicles rely on a variety of sensors to gather information about
their surroundings. The vehicle's behavior is planned based on the environment
perception, making its reliability crucial for safety reasons. The active LiDAR
sensor is able to create an accurate 3D representation of a scene, making it a
valuable addition for environment perception for autonomous vehicles. Due to
light scattering and occlusion, the LiDAR's performance changes under adverse
weather conditions like fog, snow or rain. This limitation recently fostered a
large body of research on approaches to alleviate the decrease in perception
performance. In this survey, we gathered, analyzed, and discussed different
aspects of dealing with adverse weather conditions in LiDAR-based environment
perception. We address topics such as the availability of appropriate data, raw
point cloud processing and denoising, robust perception algorithms and sensor
fusion to mitigate adverse weather induced shortcomings. We furthermore
identify the most pressing gaps in the current literature and pinpoint
promising research directions.
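One of the raw point cloud denoising families such surveys cover is outlier removal, which exploits the fact that snow and rain returns tend to be spatially isolated. A minimal sketch (brute force for clarity, with illustrative parameters; a KD-tree would be used in practice) follows:

```python
# Minimal radius outlier removal: drop returns with too few neighbors,
# the spatially isolated pattern typical of snow and rain clutter.
import numpy as np

def radius_outlier_removal(points, radius=0.5, min_neighbors=2):
    """Keep points that have at least `min_neighbors` other points
    within `radius` meters."""
    kept = []
    for p in points:
        neighbors = (np.linalg.norm(points - p, axis=1) < radius).sum() - 1
        if neighbors >= min_neighbors:
            kept.append(p)
    return np.array(kept)

pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.1, 0.0],
                [0.1, 0.1, 0.0], [5.0, 5.0, 2.0]])  # last point is isolated
print(radius_outlier_removal(pts))                  # the "snowflake" is gone
```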
Towards 4D Virtual City Reconstruction From Lidar Point Cloud Sequences
In this paper we propose a joint approach on virtual city reconstruction and dynamic scene analysis based on point cloud sequences of
a single car-mounted Rotating Multi-Beam (RMB) Lidar sensor. The aim of the addressed work is to create 4D spatio-temporal models
of large dynamic urban scenes containing various moving and static objects. Standalone RMB Lidar devices have been frequently
applied in robot navigation tasks and proved to be efficient in moving object detection and recognition. However, they have not been
widely exploited yet for geometric approximation of ground surfaces and building facades due to the sparseness and inhomogeneous
density of the individual point cloud scans. In our approach, we propose an
automatic method for registering the consecutive scans without any additional
sensor information such as an IMU, and introduce a process for simultaneously
extracting reconstructed surfaces, motion information and objects from the
registered dense point cloud, augmented with per-point time stamp information.
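The paper's registration runs without IMU support. As a generic stand-in for scan-to-scan alignment (not the authors' actual method), a minimal point-to-point ICP using the closed-form Kabsch/SVD rigid fit looks like this:

```python
# Generic point-to-point ICP sketch: alternate nearest-neighbor
# matching with the closed-form Kabsch/SVD rigid fit. Brute-force
# matching for clarity.
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Align scan `src` onto scan `dst`; returns the moved points."""
    cur = src.copy()
    for _ in range(iters):
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[dists.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```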
CARLA-Loc: Synthetic SLAM Dataset with Full-stack Sensor Setup in Challenging Weather and Dynamic Environments
The robustness of SLAM algorithms in challenging environmental conditions is
crucial for autonomous driving, but the impact of these conditions is hard to
assess, given the difficulty of arbitrarily varying the relevant environmental
parameters of the same environment in the real world. Therefore, we propose
CARLA-Loc, a synthetic dataset of challenging and dynamic environments built
on the CARLA simulator. We integrate multiple sensors into the dataset with
strict calibration, synchronization and precise timestamping. The dataset
comprises 7 maps and 42 sequences with different dynamic levels and weather
conditions. Objects in both stereo images and point clouds are well-segmented
with their class labels. We evaluate 5 visual-based and 4 LiDAR-based
approaches on various sequences and analyze the effect of challenging
environmental factors on localization accuracy, showing the applicability of
the proposed dataset for validating SLAM algorithms.
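Localization accuracy in evaluations like this is commonly summarized by the absolute trajectory error (ATE). A minimal sketch, assuming the estimated and ground-truth poses are already time-associated and expressed in a common frame:

```python
# Minimal absolute-trajectory-error sketch; real pipelines usually
# align the trajectories first (e.g., with the Umeyama method).
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Root-mean-square translational error between an estimated
    trajectory and ground truth, both of shape (N, 3)."""
    err = np.linalg.norm(est_xyz - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

gt = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0]])
est = gt + np.array([[0.0, 0.1, 0], [0, -0.1, 0], [0, 0.2, 0]])
print(ate_rmse(est, gt))   # ~0.141 m
```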
Acoustic Simultaneous Localization And Mapping (SLAM)
The current technologies employed for autonomous driving provide tremendous performance and results, but the technology itself is far from mature and remains relatively expensive. Some of the most commonly used components for autonomous driving include LiDAR, cameras, radar, and ultrasonic sensors. Such sensors are usually high-priced and often require a tremendous amount of computational power to process the gathered data. Many car manufacturers consider cameras to be a low-cost alternative to some other costly sensors, but camera-based sensors alone are prone to fatal perception errors. In many cases, adverse weather and night-time conditions hinder the performance of some vision-based sensors. For a sensor to be a reliable source of data, the difference between actual data values and measured or perceived values should be as low as possible. Lowering the number of sensors used provides more economic freedom to invest in the reliability of the components used. This thesis provides an alternative approach to current autonomous driving methodologies by utilizing the acoustic signatures of moving objects. This approach makes use of a microphone array to collect and process captured acoustic signatures for simultaneous localization and mapping (SLAM). Rather than using numerous sensors to gather information about surroundings beyond the reach of the user, this method investigates the benefits of considering the sound waves of different objects around the host vehicle for SLAM. The components used in this model are cost-efficient and generate data that is easy to process without requiring high processing power. The results show that there are benefits in pursuing this approach in terms of cost efficiency and low computational power. The functionality of the model is demonstrated using MATLAB for data collection and testing.
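A core building block of acoustic localization with a microphone array is estimating the direction a sound arrives from. The following sketch (illustrative Python, not the thesis's MATLAB implementation) recovers the time difference of arrival between two microphones via cross-correlation and converts it to a bearing under a far-field assumption:

```python
# Two-microphone bearing sketch. Cross-correlation recovers the time
# difference of arrival, which maps to a bearing in the far field.
import numpy as np

def tdoa_bearing(sig_left, sig_right, fs, mic_spacing_m, c=343.0):
    """Bearing in degrees (0 = broadside) from the correlation peak."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_right) - 1)   # samples
    tau = lag / fs                                      # seconds
    s = np.clip(tau * c / mic_spacing_m, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Synthetic check: delay the left channel by 5 samples.
fs = 48_000
t = np.arange(2048) / fs
burst = np.sin(2 * np.pi * 800 * t) * np.hanning(t.size)
left = np.pad(burst, (5, 0))[: t.size]
print(tdoa_bearing(left, burst, fs, mic_spacing_m=0.2))  # ~10 degrees
```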