Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis
The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control to meet key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM for autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are also reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of Lidar and GNSS/INS. An online localization solution with 4–5 cm accuracy can be achieved based on this pre-generated map and online Lidar scan matching tightly fused with an inertial system.
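The map-based scan-matching step mentioned in the abstract above can be illustrated with a toy point-to-point ICP sketch. This is a minimal, hypothetical 2D illustration only: the function name, point sets, and iteration count are assumptions, and it omits the k-d-tree correspondence search, outlier rejection, and Lidar/GNSS/INS fusion a real pipeline would use.

```python
import math

def icp_2d(source, target, iterations=20):
    """Toy point-to-point ICP: align a 2D scan onto a 2D map.

    source, target: lists of (x, y) points. Returns the source
    points after alignment. Purely illustrative sketch.
    """
    pts = list(source)
    for _ in range(iterations):
        # Nearest-neighbour correspondences (brute force).
        pairs = [(p, min(target,
                         key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2))
                 for p in pts]
        n = len(pairs)
        # Centroids of the matched point sets.
        mx = sum(p[0] for p, _ in pairs) / n
        my = sum(p[1] for p, _ in pairs) / n
        qx = sum(q[0] for _, q in pairs) / n
        qy = sum(q[1] for _, q in pairs) / n
        # Closed-form optimal 2D rotation from the cross-covariance.
        sxx = sum((p[0] - mx) * (q[0] - qx) + (p[1] - my) * (q[1] - qy)
                  for p, q in pairs)
        sxy = sum((p[0] - mx) * (q[1] - qy) - (p[1] - my) * (q[0] - qx)
                  for p, q in pairs)
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        # Rotate about the source centroid, translate onto the target centroid.
        pts = [(c * (x - mx) - s * (y - my) + qx,
                s * (x - mx) + c * (y - my) + qy) for x, y in pts]
    return pts
```

With correct correspondences this closed-form step recovers a rigid transform exactly, which is why ICP converges quickly once the initial pose (here supplied by GNSS/INS in the paper's setup) is close enough.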
Episodic Non-Markov Localization: Reasoning About Short-Term and Long-Term Features
Markov localization and its variants are widely used for localization of mobile robots. These methods assume Markov independence of observations, implying that observations made by a robot correspond to a static map. However, in real human environments, observations include occlusions due to unmapped objects like chairs and tables, and dynamic objects like humans. We introduce an episodic non-Markov localization algorithm that maintains estimates of the belief over the trajectory of the robot while explicitly reasoning about observations and their correlations arising from unmapped static objects, moving objects, and objects from the static map. Observations are classified as arising from long-term features, short-term features, or dynamic features, which correspond to mapped objects, unmapped static objects, and unmapped dynamic objects, respectively. By detecting time steps along the robot's trajectory where unmapped observations prior to such time steps are unrelated to those afterwards, non-Markov localization limits the history of observations and pose estimates to "episodes" over which the belief is computed. We demonstrate non-Markov localization in challenging real-world indoor and outdoor environments over multiple datasets, comparing it with alternative state-of-the-art approaches and showing it to be robust as well as accurate.
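The three-way observation classification described in the abstract above can be sketched in a few lines. This is a hypothetical simplification: the function name, tolerances, and the "previously seen short-term feature" set are assumptions, and the paper's actual classifier reasons over belief distributions rather than fixed distance thresholds.

```python
def classify_observation(obs, map_points, prev_stf, map_tol=0.3, stf_tol=0.3):
    """Classify a 2D observation as 'LTF', 'STF', or 'DF' (toy sketch).

    LTF: matches the static map (long-term feature).
    STF: off-map but seen at the same place before (short-term feature).
    DF:  off-map and not seen before, i.e. likely dynamic.
    """
    def near(p, pts, tol):
        return any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= tol * tol
                   for q in pts)
    if near(obs, map_points, map_tol):
        return "LTF"
    if near(obs, prev_stf, stf_tol):
        return "STF"
    return "DF"
```

Episode boundaries would then fall at time steps where the current short-term features share no support with earlier ones, letting the history of observations be truncated.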
Feasibility of LoRa for Smart Home Indoor Localization
With the advancement of low-power and low-cost wireless technologies in the past few years, the Internet of Things (IoT) has been growing rapidly in numerous areas of Industry 4.0 and smart homes. With the development of many applications for the IoT, indoor localization, i.e., the capability to determine the physical location of people or devices, has become an important component of smart homes. Various wireless technologies have been used for indoor localization, including WiFi, ultra-wideband (UWB), Bluetooth low energy (BLE), radio-frequency identification (RFID), and LoRa. The ability of low-cost long range (LoRa) radios for low-power and long-range communication has made this radio technology a suitable candidate for many indoor and outdoor IoT applications. Additionally, research studies have shown the feasibility of localization with LoRa radios. However, indoor localization with LoRa is not adequately explored at the home level, where the localization area is relatively smaller than in offices and corporate buildings. In this study, we first explore the feasibility of ranging with LoRa. Then, we conduct experiments to demonstrate the capability of LoRa for accurate and precise indoor localization in a typical apartment setting. Our experimental results show that LoRa-based indoor localization has an accuracy better than 1.6 m in line-of-sight scenarios and 3.2 m in extreme non-line-of-sight scenarios, with a precision better than 25 cm in all cases, without using any data filtering on the location estimates.
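Radio ranging of the kind explored in the abstract above is commonly based on the log-distance path-loss model. The sketch below inverts that model to estimate distance from a received signal strength reading; the function name and the reference values (RSSI at 1 m, path-loss exponent) are illustrative assumptions, not parameters from the paper.

```python
def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, n=2.7):
    """Estimate range from RSSI via the log-distance path-loss model:

        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)

    rssi_at_d0: measured RSSI (dBm) at reference distance d0 (m),
    n: path-loss exponent (~2 free space, higher indoors).
    All defaults are hypothetical calibration values.
    """
    return d0 * 10 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * n))
```

In practice the exponent n and the reference RSSI must be calibrated per environment, and multiple such range estimates to known anchors are then combined (e.g. by trilateration) into a position fix.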
WiFi Fingerprinting Localization for Intelligent Vehicles in Car Park
In this paper, a novel method of WiFi fingerprinting for localizing intelligent vehicles in GPS-denied areas, such as car parks, is proposed. Although the method itself is a popular approach for indoor localization, adapting it to the speed of vehicles requires different treatment. By deploying an ensemble neural network for fingerprinting classification, the method shows reasonable localization precision at car-park speeds. Furthermore, a Gaussian Mixture Model (GMM) Particle Filter is applied to increase localization frequency as well as accuracy. Experiments show promising results with an average localization error of 0.6 m.
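To make the fingerprinting idea in the abstract above concrete, the sketch below uses a weighted k-nearest-neighbour lookup over an offline RSSI database — a simpler baseline deliberately swapped in for the paper's ensemble neural network and GMM particle filter. Function name, database layout, and k are hypothetical.

```python
import math

def locate(rssi, fingerprints, k=3):
    """Weighted k-NN position estimate from a WiFi RSSI vector.

    rssi: list of RSSI readings (dBm), one per access point.
    fingerprints: list of ((x, y), rssi_vector) pairs collected
    offline at known positions. Toy sketch, not the paper's method.
    """
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    # k closest fingerprints in signal space.
    nearest = sorted(fingerprints, key=lambda f: dist(rssi, f[1]))[:k]
    # Inverse-distance weights (epsilon guards against division by zero).
    w = [1.0 / (dist(rssi, f[1]) + 1e-9) for f in nearest]
    total = sum(w)
    x = sum(wi * f[0][0] for wi, f in zip(w, nearest)) / total
    y = sum(wi * f[0][1] for wi, f in zip(w, nearest)) / total
    return x, y
```

A filter over successive estimates (such as the GMM particle filter the paper applies) would then smooth this raw output and raise the update rate.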
Cost-effective robot for steep slope crops monitoring
This project aims to develop a low-cost, simple, and robust robot able to autonomously monitor crops using simple sensors. It will be required to develop robotic sub-systems and integrate them with pre-selected mechanical components, electrical interfaces, and robot systems (localization, navigation, and perception) using ROS, for wine-making regions and maize fields.
The Cityscapes Dataset for Semantic Urban Scene Understanding
Visual understanding of complex urban street scenes is an enabling factor for a wide range of applications. Object detection has benefited enormously from large-scale datasets, especially in the context of deep learning. For semantic urban scene understanding, however, no current dataset adequately captures the complexity of real-world urban scenes. To address this, we introduce Cityscapes, a benchmark suite and large-scale dataset to train and test approaches for pixel-level and instance-level semantic labeling. Cityscapes comprises a large, diverse set of stereo video sequences recorded in streets from 50 different cities. 5000 of these images have high-quality pixel-level annotations; 20000 additional images have coarse annotations to enable methods that leverage large volumes of weakly-labeled data. Crucially, our effort exceeds previous attempts in terms of dataset size, annotation richness, scene variability, and complexity. Our accompanying empirical study provides an in-depth analysis of the dataset characteristics, as well as a performance evaluation of several state-of-the-art approaches based on our benchmark.