Methodologies and Applications of Data Coordinate Conversion for Roadside LiDAR
Light Detection and Ranging (LiDAR) is becoming increasingly popular in transportation applications, including traffic data collection, autonomous vehicles, and connected vehicles. Compared with traditional methods, LiDAR can provide high-resolution micro-traffic data (HRMTD) for all road users without being affected by lighting conditions. Unlike the macroscopic data collected by traditional sensors, which consist of traffic flow rates, average speeds, and occupancy information, HRMTD provides more accurate and more detailed multimodal trajectory data for all traffic. However, some limitations remain. First, the raw data are in the LiDAR's own coordinate system, which greatly limits their interpretability. Second, the detection range restricts further development: although a LiDAR can detect objects within 200 m, the effective detection range is only 50-60 m. Moreover, occlusion occurs from time to time. To overcome these limitations, data mapping and integration methods are needed. This research proposes a data integration and mapping method for roadside LiDAR sensors. The method consists of six main steps: reference point collection, reference point matching, transformation matrix calculation, time synchronization, data integration, and data mapping. The raw LiDAR data are in a Cartesian coordinate system, in which the position of each LiDAR point is represented by (x, y, z). To map these points in GIS-based software built on the WGS 1984 coordinate system, the LiDAR data must be transformed into a geographic coordinate system. After this conversion, the Iterative Closest Point (ICP) method is applied to integrate the data collected by multiple LiDAR sensors. Compared with the original LiDAR data, the processed dataset adds longitude, latitude, and elevation information.
The new dataset can be used as the input for the HRMTD processing procedures for roadside LiDAR.
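The transformation-matrix and mapping steps described above can be sketched in a minimal form. This is an illustration under simplifying assumptions, not the paper's implementation: the rigid transform is estimated from matched reference points with the standard Kabsch/SVD solution, and the local east-north-up offsets are mapped to WGS 1984 latitude/longitude with a flat-earth approximation (the paper's GIS pipeline would use a proper geodetic conversion). All function names here are illustrative.

```python
import numpy as np

EARTH_RADIUS_M = 6378137.0  # WGS 84 equatorial radius

def fit_rigid_transform(src, dst):
    """Estimate rotation R and translation t with R @ p + t mapping src
    onto dst (Kabsch/SVD solution), from matched reference points."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

def enu_to_wgs84(east, north, up, lat0_deg, lon0_deg, h0):
    """Flat-earth approximation: local ENU offsets (m) around a known
    origin -> latitude, longitude (deg) and elevation (m)."""
    lat0 = np.radians(lat0_deg)
    lat = lat0_deg + np.degrees(north / EARTH_RADIUS_M)
    lon = lon0_deg + np.degrees(east / (EARTH_RADIUS_M * np.cos(lat0)))
    return lat, lon, h0 + up
```

Once `R` and `t` are fitted from surveyed reference points, every raw LiDAR point `p` can be georeferenced as `enu_to_wgs84(*(R @ p + t), lat0, lon0, h0)`, which is what adds the longitude, latitude, and elevation columns to the processed dataset.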
Beyond benefiting autonomous vehicle (AV) and connected vehicle (CV) systems, HRMTD can also serve other transportation applications. This research provides an application that uses HRMTD obtained from roadside LiDAR to extract lane- and crosswalk-based multimodal traffic volumes. The method has three main steps: start and end point selection, detection zone selection, and threshold learning. The second step is the core of the method and can be divided into four sub-steps: location searching, data comparison, size searching, and best zone selection. A full day of real-world data is used to verify the method against manually counted traffic volumes, and the results show that the accuracy of this traffic volume extraction method reaches 95% or higher. This research will significantly change how traffic agencies assess road network performance and adds substantial value to existing probe-vehicle and crowd-sourced data.
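The volume-extraction idea of counting trajectories that pass through a selected detection zone can be sketched as follows. This is a simplified illustration, not the paper's method: the zone is an axis-aligned rectangle chosen by hand, whereas the paper searches for the best zone location and size automatically.

```python
def count_zone_crossings(trajectories, zone):
    """Count trajectories that enter a rectangular detection zone.

    trajectories: list of trajectories, each a list of (x, y) positions
    zone: (xmin, ymin, xmax, ymax) in the same coordinate frame
    """
    xmin, ymin, xmax, ymax = zone
    count = 0
    for traj in trajectories:
        # A trajectory is counted once if any of its points falls in the zone.
        if any(xmin <= x <= xmax and ymin <= y <= ymax for x, y in traj):
            count += 1
    return count
```

Applied per lane or per crosswalk zone, counts like this can be compared against manual ground-truth counts to evaluate accuracy, as the abstract describes.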
LiDAR and Camera Detection Fusion in a Real Time Industrial Multi-Sensor Collision Avoidance System
Collision avoidance is a critical task in many applications, such as ADAS
(advanced driver-assistance systems), industrial automation and robotics. In an
industrial automation setting, certain areas should be off limits to an
automated vehicle for protection of people and high-valued assets. These areas
can be quarantined by mapping (e.g., GPS) or via beacons that delineate a
no-entry area. We propose a delineation method where the industrial vehicle
utilizes a LiDAR (Light Detection and Ranging) and a single color camera to
detect passive beacons and model-predictive control to stop the vehicle from
entering a restricted space. The beacons are standard orange traffic cones with
a highly reflective vertical pole attached. The LiDAR can readily detect these
beacons, but suffers from false positives due to other reflective surfaces such
as worker safety vests. Herein, we put forth a method for reducing false
positive detection from the LiDAR by projecting the beacons in the camera
imagery via a deep learning method and validating the detection using a neural
network-learned projection from the camera to the LiDAR space. Experimental
data collected at Mississippi State University's Center for Advanced Vehicular
Systems (CAVS) shows the effectiveness of the proposed system in retaining
true detections while mitigating false positives.
Comment: 34 pages
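The validation step described above can be sketched in a minimal form. This is an illustration only: the paper learns the camera-to-LiDAR projection with a neural network, while here `project` is an arbitrary placeholder function, and the beacon bounding boxes are assumed given by the camera's deep-learning detector.

```python
def validate_beacons(candidates, camera_boxes, project):
    """Keep only LiDAR beacon candidates whose projection into the image
    falls inside some camera-detected beacon bounding box.

    candidates: list of LiDAR points, e.g. (x, y, z)
    camera_boxes: list of (x1, y1, x2, y2) image-space boxes
    project: callable mapping a LiDAR point to image coords (u, v)
    """
    kept = []
    for p in candidates:
        u, v = project(p)
        # Reflective clutter (e.g. safety vests) projects outside every
        # camera beacon box and is rejected here.
        if any(x1 <= u <= x2 and y1 <= v <= y2
               for x1, y1, x2, y2 in camera_boxes):
            kept.append(p)
    return kept
```

Cross-checking each LiDAR candidate against the independent camera detections is what suppresses false positives from other reflective surfaces while keeping the true beacons.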
Map++: A Crowd-sensing System for Automatic Map Semantics Identification
Digital maps have become a part of our daily life with a number of commercial
and free map services. These services still have huge potential for
enhancement with rich semantic information to support a large class of mapping
applications. In this paper, we present Map++, a system that leverages standard
cell-phone sensors in a crowdsensing approach to automatically enrich digital
maps with different road semantics like tunnels, bumps, bridges, footbridges,
crosswalks, road capacity, among others. Our analysis shows that cell-phone
sensors carried by people driving or walking are affected by different road
features, which can be mined to extend the features of both free and commercial
mapping services. We present the design and implementation of Map++ and
evaluate it in a large city. Our evaluation shows that we can detect the
different semantics accurately with at most 3% false positive rate and 6% false
negative rate for both vehicle and pedestrian-based features. Moreover, we show
that Map++ has a small energy footprint on the cell-phones, highlighting its
promise as a ubiquitous digital-map-enriching service.
Comment: Published in the Eleventh Annual IEEE International Conference on
Sensing, Communication, and Networking (IEEE SECON 2014).
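Mining road features from phone sensors, as described above, can be illustrated with one simple case. This is a toy sketch, not Map++'s detector: it flags bumps by thresholding the variance of vertical acceleration over fixed windows, whereas the real system combines multiple sensors and learned parameters; the window size and threshold here are arbitrary.

```python
import statistics

def detect_bump_windows(accel_z, window=50, threshold=0.5):
    """Return start indices of fixed-size windows whose vertical-acceleration
    variance exceeds a threshold -- a crude bump/speed-hump proxy."""
    hits = []
    for i in range(0, len(accel_z) - window + 1, window):
        seg = accel_z[i:i + window]
        # Smooth road -> near-constant gravity reading, low variance;
        # a bump produces a burst of high-variance samples.
        if statistics.pvariance(seg) > threshold:
            hits.append(i)
    return hits
```

Aggregating such detections from many phones at the same road location is what lets a crowd-sensing system attach the "bump" semantic to the map with low false-positive rates.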