Visual and light detection and ranging-based simultaneous localization and mapping for self-driving cars
In recent years, there has been strong demand for self-driving cars. For safe navigation, self-driving cars need both precise localization and robust mapping. While the global navigation satellite system (GNSS) can be used to locate vehicles, it has limitations, such as the absence of satellite signal in tunnels and caves, which restrict its use in urban scenarios. Simultaneous localization and mapping (SLAM) is an excellent solution for identifying a vehicle's position while at the same time constructing a representation of the environment. Visual and light detection and ranging (LIDAR)-based SLAM refers to using cameras and LIDAR as sources of external information. This paper presents an implementation of a SLAM algorithm for building a map of the environment and obtaining the car's trajectory from LIDAR scans. A detailed overview of current visual and LIDAR SLAM approaches is also provided and discussed. Simulation results based on LIDAR scans indicate that SLAM is convenient and helpful for localization and mapping.
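As a minimal illustration of the mapping half of such a pipeline, the sketch below converts a 2-D LIDAR scan into world-frame points at a known vehicle pose and marks the corresponding occupancy-grid cells. All function names and parameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def scan_to_points(ranges, angles, pose):
    """Transform LIDAR range/bearing readings into world-frame 2-D points,
    given the vehicle pose (x, y, heading). Toy sketch, not the paper's code."""
    x, y, theta = pose
    r = np.asarray(ranges)
    a = np.asarray(angles) + theta
    return np.stack([x + r * np.cos(a), y + r * np.sin(a)], axis=1)

def mark_occupied(grid, points, resolution=1.0, origin=(0.0, 0.0)):
    """Mark the grid cells containing scan endpoints as occupied (1)."""
    idx = np.floor((points - np.asarray(origin)) / resolution).astype(int)
    ok = (idx[:, 0] >= 0) & (idx[:, 0] < grid.shape[1]) & \
         (idx[:, 1] >= 0) & (idx[:, 1] < grid.shape[0])
    grid[idx[ok, 1], idx[ok, 0]] = 1
    return grid
```

A full SLAM system would additionally estimate the pose itself (e.g. by scan matching); this fragment only shows the map-update step once a pose estimate is available.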
BEV-Locator: An End-to-end Visual Semantic Localization Network Using Multi-View Images
Accurate localization ability is fundamental in autonomous driving.
Traditional visual localization frameworks approach the semantic map-matching
problem with geometric models, which rely on complex parameter tuning and thus
hinder large-scale deployment. In this paper, we propose BEV-Locator: an
end-to-end visual semantic localization neural network using multi-view camera
images. Specifically, a visual BEV (Bird's-Eye-View) encoder extracts and
flattens the multi-view images into BEV space, while the semantic map
features are structurally embedded as a sequence of map queries. A
cross-modal transformer then associates the BEV features with the semantic
map queries, and the localization information of the ego-car is recursively
queried out by cross-attention modules. Finally, the ego pose is inferred by
decoding the transformer outputs. We evaluate the proposed method on the
large-scale nuScenes and Qcraft datasets. The experimental results show that
BEV-Locator is capable of estimating the vehicle pose under versatile
scenarios, effectively associating the cross-modal information from
multi-view images and global semantic maps. The experiments report
satisfactory accuracy, with mean absolute errors of 0.052 m, 0.135 m, and
0.251° in lateral translation, longitudinal translation, and heading angle,
respectively.
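The cross-attention step at the heart of such a transformer decoder can be sketched as plain scaled dot-product attention, with semantic-map queries attending over flattened BEV features. This is a generic single-head sketch under that assumption, not BEV-Locator's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention: map queries of
    shape (M, d) attend over flattened BEV features of shape (N, d)."""
    d = queries.shape[-1]
    weights = softmax(queries @ keys.T / np.sqrt(d), axis=-1)  # (M, N)
    return weights @ values  # (M, d)
```

In the paper's setting, the attended outputs would then be decoded into the ego pose (lateral, longitudinal, heading); here only the attention mechanics are shown.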
Design, Field Evaluation, and Traffic Analysis of a Competitive Autonomous Driving Model in a Congested Environment
Recently, numerous studies have investigated cooperative traffic systems
using vehicle-to-everything (V2X) communication. Unfortunately, when
multiple autonomous vehicles are deployed while exposed to communication
failure, conflicts between the assumptions of different autonomous vehicles
can lead to adversarial situations on the road. In South
Korea, virtual and real-world urban autonomous multi-vehicle races were held in
March and November of 2021, respectively. During the competition, multiple
vehicles were involved simultaneously, which required maneuvers such as
overtaking low-speed vehicles, negotiating intersections, and obeying traffic
laws. In this study, we introduce a fully autonomous driving software stack to
deploy a competitive driving model, which enabled us to win the urban
autonomous multi-vehicle races. We evaluate module-based systems such as
navigation, perception, and planning in real and virtual environments.
Additionally, an analysis of traffic is performed after collecting multiple
vehicle position data over communication to gain additional insight into a
multi-agent autonomous driving scenario. Finally, we propose a method for
analyzing traffic in order to compare the spatial distribution of multiple
autonomous vehicles. We study the similarity distribution between each team's
driving log data to determine the impact of competitive autonomous driving on
the traffic environment
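One simple way to compare the spatial distributions of different teams' driving logs, in the spirit of the analysis described above, is to histogram logged positions and measure the Jensen-Shannon divergence between the histograms. The binning, extent, and function names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def position_histogram(xy, bins=20, extent=((0.0, 100.0), (0.0, 100.0))):
    """Normalised 2-D histogram of logged (x, y) vehicle positions."""
    h, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins, range=extent)
    return h / h.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two discrete
    distributions; 0 for identical inputs, log(2) for disjoint ones."""
    p, q = p.ravel(), q.ravel()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Lower divergence between two teams' position histograms would indicate more similar use of the road space.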
Long-Term Urban Vehicle Localization Using Pole Landmarks Extracted from 3-D Lidar Scans
Due to their ubiquity and long-term stability, pole-like objects are well
suited to serve as landmarks for vehicle localization in urban environments. In
this work, we present a complete mapping and long-term localization system
based on pole landmarks extracted from 3-D lidar data. Our approach features a
novel pole detector, a mapping module, and an online localization module, each
of which are described in detail, and for which we provide an open-source
implementation at www.github.com/acschaefer/polex. In extensive experiments, we
demonstrate that our method improves on the state of the art with respect to
long-term reliability and accuracy: First, we prove reliability by tasking the
system with localizing a mobile robot over the course of 15 months in an urban
area based on an initial map, confronting it with constantly varying routes,
differing weather conditions, seasonal changes, and construction sites. Second,
we show that the proposed approach clearly outperforms a recently published
method in terms of accuracy.
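A pole detector of the kind described can be sketched, in heavily simplified form, as density-based binning of ground-projected lidar points: dense, compact cells become pole candidates. The cell size and hit threshold below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from collections import defaultdict

def detect_poles(points, cell=0.2, min_hits=5):
    """Toy pole detector: bin 2-D ground-projected lidar points into a
    coarse grid and return the centroid of every cell with at least
    `min_hits` points. Thresholds are illustrative, not the paper's."""
    bins = defaultdict(list)
    for p, k in zip(points, np.floor(points / cell).astype(int)):
        bins[tuple(k)].append(p)
    poles = [np.mean(ps, axis=0) for ps in bins.values() if len(ps) >= min_hits]
    return np.array(poles)
```

In a full system such as the one described, the detected pole centroids would then be matched against a prior pole map to estimate the vehicle pose.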
Pedestrian Detection using Triple Laser Range Finders
Pedestrian detection is one of the important features of an autonomous ground vehicle (AGV), as it ensures the capability for safe navigation in urban environments. Detection accuracy is therefore crucial, which motivates implementations using Laser Range Finders (LRFs) for better data representation. In this study, an improved laser configuration and fusion technique is introduced, implementing triple LRFs in two layers with Pedestrian Data Analysis (PDA) to recognize multiple pedestrians. The PDA integrates various features from the feature extraction process for all clusters and fuses multiple layers for better recognition. Experiments were conducted in various occlusion scenarios, such as intersection, closed-pedestrian, and combined scenarios. The analysis of the laser fusion and PDA for all scenarios showed improved detection: pedestrians were represented by various detection categories, which resolves occlusion issues when a low number of laser data points is obtained.
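A common first step in LRF-based pedestrian detection is clustering the scan at range discontinuities, so each contiguous segment becomes a candidate object for feature extraction. The sketch below shows that segmentation step; the jump threshold is an illustrative assumption, not the paper's value.

```python
def segment_scan(ranges, jump=0.3):
    """Split consecutive laser readings (metres) into clusters wherever the
    range jumps by more than `jump`; returns (start, end) index pairs,
    end-exclusive. Threshold is illustrative, not from the paper."""
    clusters, start = [], 0
    for i in range(1, len(ranges)):
        if abs(ranges[i] - ranges[i - 1]) > jump:
            clusters.append((start, i))
            start = i
    clusters.append((start, len(ranges)))
    return clusters
```

Each resulting cluster could then be classified (e.g. by width and shape features) as pedestrian or non-pedestrian, and clusters from multiple LRF layers fused.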
Autonomous personal vehicle for the first- and last-mile transportation services
This paper describes an autonomous vehicle testbed that aims at providing first- and last-mile transportation services. The vehicle mainly operates in a crowded urban environment whose features can be extracted a priori. To ensure that the system is economically feasible, we take a minimalistic approach and exploit prior knowledge of the environment and the availability of existing infrastructure such as cellular networks and traffic cameras. We present the three main components of the system: pedestrian detection, localization (even in the presence of tall buildings), and navigation. The performance of each component is evaluated. Finally, we describe the role of the existing infrastructural sensors and show the improved performance of the system when they are utilized.
Environment perception based on LIDAR sensors for real road applications
The recent developments in applications that have been designed to increase road safety require reliable and trustworthy sensors. Keeping this in mind, the most up-to-date research in the field of automotive technologies has shown that LIDARs are a very reliable sensor family. In this paper, a new approach to road obstacle classification is proposed and tested. Two different LIDAR sensors are compared by focusing on their main characteristics with respect to road applications. The viability of these sensors in real applications has been tested, and the results of this analysis are presented. The work reported in this paper has been partly funded by the Spanish Ministry of Science and Innovation (TRA2007-67786-C02-01, TRA2007-67786-C02-02, and TRA2009-07505) and the CAM project SEGVAUTO-II.