
    Milli-RIO: Ego-Motion Estimation with Low-Cost Millimetre-Wave Radar

    Robust indoor ego-motion estimation has attracted significant interest in recent decades due to the fast-growing demand for location-based services in indoor environments. Among various solutions, frequency-modulated continuous-wave (FMCW) radar sensors in the millimeter-wave (MMWave) spectrum are gaining prominence due to intrinsic advantages such as penetration capability and high accuracy. Single-chip low-cost MMWave radar is an emerging technology that provides an alternative and complementary solution for robust ego-motion estimation, and its low power consumption and easy system integration make it feasible on resource-constrained platforms. In this paper, we introduce Milli-RIO, an MMWave radar-based solution that uses a single-chip low-cost radar and an inertial measurement unit to estimate the six-degrees-of-freedom ego-motion of a moving radar. Detailed quantitative and qualitative evaluations show that the proposed method achieves precision on the order of a few centimeters for indoor localization tasks. Comment: Submitted to IEEE Sensors, 9 pages.
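
    As an illustration of the radar-inertial fusion architecture described above, the following is a minimal Python sketch, assuming a linear Kalman filter whose prediction step integrates world-frame IMU acceleration and whose update step consumes the radar's ego-velocity estimate. The state layout, noise values, and class name are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of radar-inertial fusion in the spirit of Milli-RIO.
import numpy as np

class RadarInertialKF:
    def __init__(self):
        self.x = np.zeros(6)          # [px, py, pz, vx, vy, vz]
        self.P = np.eye(6) * 0.1      # state covariance (assumed init)
        self.Q = np.eye(6) * 1e-3     # process noise (assumed)
        self.R = np.eye(3) * 1e-2     # radar velocity noise (assumed)

    def predict(self, accel_world, dt):
        """Propagate with world-frame acceleration from the IMU."""
        F = np.eye(6)
        F[0:3, 3:6] = np.eye(3) * dt  # position integrates velocity
        B = np.zeros((6, 3))
        B[0:3, :] = 0.5 * dt**2 * np.eye(3)
        B[3:6, :] = dt * np.eye(3)
        self.x = F @ self.x + B @ accel_world
        self.P = F @ self.P @ F.T + self.Q

    def update_radar_velocity(self, v_radar):
        """Correct with the ego-velocity measured by the MMWave radar."""
        H = np.zeros((3, 6))
        H[:, 3:6] = np.eye(3)         # radar observes velocity directly
        y = v_radar - H @ self.x      # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P

kf = RadarInertialKF()
kf.predict(accel_world=np.array([0.1, 0.0, 0.0]), dt=0.01)
kf.update_radar_velocity(np.array([0.05, 0.0, 0.0]))
print(kf.x[:3])  # estimated position after one predict/update cycle
```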

    Millimeter-wave backscattering measurements with transmitarrays for personal radar applications

    The concept of a personal radar has recently emerged as an interesting solution for next-generation 5G applications. The high portability of massive antenna arrays at millimeter-waves enables the integration of a radar system into pocket-size devices (e.g., tablets or smartphones) and enhances the possibility of mapping the surrounding environment, guaranteeing accurate localization together with high-speed communication capabilities. In this paper, we investigate for the first time the capability of such a personal radar solution, using real measured data collected at millimeter-waves as input for the mapping algorithm.
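
    To make the mapping step concrete, here is a minimal sketch, assuming range/bearing returns extracted from the backscattering measurements feed a standard log-odds occupancy grid; the grid size, resolution, and increment values are illustrative assumptions rather than the paper's actual algorithm.

```python
# Hypothetical occupancy grid update from radar range/bearing returns.
import numpy as np

GRID = 200                 # cells per side (assumed)
RES = 0.05                 # metres per cell (assumed)
L_OCC, L_FREE = 0.85, -0.4 # log-odds increments (assumed)

log_odds = np.zeros((GRID, GRID))

def to_cell(x, y):
    """World metres -> grid indices, with the sensor at the grid centre."""
    return int(x / RES) + GRID // 2, int(y / RES) + GRID // 2

def integrate_return(rng, bearing):
    """Update the grid for one radar return at (range, bearing)."""
    # Sample cells along the ray; all but the last are observed free.
    n = max(int(rng / RES), 1)
    for k in range(n + 1):
        r = rng * k / n
        i, j = to_cell(r * np.cos(bearing), r * np.sin(bearing))
        if 0 <= i < GRID and 0 <= j < GRID:
            log_odds[i, j] += L_OCC if k == n else L_FREE

# One sweep of (range, bearing) returns from the backscattering data.
for rng, bearing in [(2.0, 0.1), (2.1, 0.15), (3.5, -0.3)]:
    integrate_return(rng, bearing)

occupancy = 1.0 - 1.0 / (1.0 + np.exp(log_odds))  # back to probabilities
print(occupancy.max())
```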

    Augmentation of Visual Odometry using Radar

    As UAVs become viable for more applications, pose estimation remains critical: a UAV needs to know where it is at all times in order to avoid disaster. However, when UAVs are deployed in areas with poor visual conditions, such as many disaster scenarios, many localization algorithms struggle. This thesis presents VIL-DSO, a visual odometry method for pose estimation that combines several algorithms to improve pose estimation and provide metric scale. It also presents a method for automatically determining an accurate physical transform between radar and camera data, allowing radar information to be projected into the image plane. Finally, this thesis presents EVIL-DSO, a localization method that fuses visual-inertial odometry with radar information. The proposed EVIL-DSO algorithm uses radar information projected into the image plane to create a depth map, so the odometry can directly observe the depth of features and avoid costly depth estimation. Trajectory analysis of the proposed algorithm on outdoor data, compared against differential GPS, shows that it is more accurate in terms of root-mean-square error and has a lower percentage of scale error. Runtime analysis shows that the proposed algorithm updates more frequently than other, similar algorithms.
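
    The radar-to-image projection step the abstract describes can be sketched in a few lines, assuming a calibrated extrinsic transform and a pinhole camera model; the matrices and image size below are illustrative placeholders, not the thesis's calibration results.

```python
# Hypothetical projection of 3D radar points into a sparse depth map.
import numpy as np

K = np.array([[600.0, 0.0, 320.0],    # assumed pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
R_cr = np.eye(3)                       # radar->camera rotation (assumed)
t_cr = np.array([0.1, 0.0, 0.0])       # radar->camera translation (assumed)
H, W = 480, 640

def radar_to_depth_map(points_radar):
    """Project Nx3 radar points into a sparse per-pixel depth map."""
    depth = np.zeros((H, W))
    pts_cam = points_radar @ R_cr.T + t_cr      # into the camera frame
    for X, Y, Z in pts_cam:
        if Z <= 0:                              # behind the camera
            continue
        u, v, _ = K @ np.array([X, Y, Z]) / Z   # pinhole projection
        u, v = int(round(u)), int(round(v))
        if 0 <= v < H and 0 <= u < W:
            # keep the closest return if two points land on one pixel
            if depth[v, u] == 0 or Z < depth[v, u]:
                depth[v, u] = Z
    return depth

depth = radar_to_depth_map(np.array([[1.0, 0.2, 8.0], [-0.5, 0.0, 12.0]]))
print(np.count_nonzero(depth), "pixels received radar depth")
```

    Such a sparse depth map lets the odometry anchor feature depths directly, which is the mechanism the thesis credits for avoiding costly per-feature depth estimation.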

    Radar-based localization and mapping for large-scale environments and adverse weather conditions

    In mobile robotics, localization and mapping is one of the fundamental capabilities for autonomy. Navigating autonomously in large-scale, unstructured, extreme and dynamic environments is particularly challenging due to the high variation in the scene. To deliver a robotic system that can operate 24/7 in outdoor environments, we need to design a state estimation system that is robust in all weather conditions. In this thesis, we propose, implement and validate three systems to tackle the problem of long-term localization and mapping. We focus on a radar-only platform to realize the SLAM and localization systems in a probabilistic manner. We first introduce a radar-based SLAM system that can operate in city-scale environments. Second, we present an improved version of the radar-based SLAM system with enhanced odometry estimation and extensive experiments in extreme weather, showing that our radar SLAM solution is viable in all weather conditions. We also demonstrate the superiority of the radar-based SLAM system over LiDAR-based and vision-based systems in snowy and low-light conditions, respectively. Finally, we show how to combine online public maps with the radar sensor to achieve accurate localization even without a prior sensor map. We show that our proposed localization system generalizes to different scenarios, and we validate it across three datasets collected on three different continents.
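
    The abstract does not spell out its odometry estimation method, so as a generic illustration of the scan-matching core that radar odometry and SLAM pipelines of this kind commonly build on, here is a minimal point-to-point ICP sketch in 2D; the brute-force matching and iteration count are simplifications for clarity.

```python
# Hypothetical 2D point-to-point ICP between consecutive radar scans.
import numpy as np

def icp_2d(src, dst, iters=20):
    """Align Nx2 source points to Mx2 destination points; return R, t."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force for clarity)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        # closed-form rigid alignment of the matched pairs (Kabsch)
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:          # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step # accumulate the transform
    return R, t

prev_scan = np.random.rand(100, 2) * 10
theta = 0.05                                    # small known rotation
Rt = np.array([[np.cos(theta), -np.sin(theta)],
               [np.sin(theta), np.cos(theta)]])
curr_scan = prev_scan @ Rt.T + np.array([0.2, -0.1])
R_est, t_est = icp_2d(curr_scan, prev_scan)
print(np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])), t_est)
```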

    Driving in the Rain: A Survey toward Visibility Estimation through Windshields

    Rain can significantly impair a driver's sight and affect their performance when driving in wet conditions. Evaluation of driver visibility in harsh weather, such as rain, has garnered considerable research attention since the advent of autonomous vehicles and the emergence of intelligent transportation systems. In recent years, advances in computer vision and machine learning have led to a significant number of new approaches to this challenge. However, the literature is fragmented and should be reorganised and analysed for the field to progress. There is still no comprehensive survey that summarises driver visibility methodologies, including classic and recent data-driven/model-driven approaches to the windshield in rainy conditions, and compares their generalisation performance fairly. Most advanced driver-assistance systems (ADAS) and autonomous driving (AD) systems are based on object detection, so rain visibility plays a key role in the efficiency of the ADAS/AD functions used in semi- or fully autonomous driving. This study fills this gap by reviewing current state-of-the-art solutions in rain visibility estimation used to reconstruct the driver's view for object-detection-based autonomous driving. These solutions are classified as rain visibility estimation systems that work on (1) the perception components of the ADAS/AD function, (2) the control and other hardware components, and (3) the visualisation and other software components. Limitations and unsolved challenges are also highlighted for further research.

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques because it supports more reliable and robust localization, planning and control, meeting key criteria for autonomous driving. In this study, the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM to autonomous driving with respect to different driving scenarios, vehicle system components and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality-analysis methods for evaluating the characteristics and performance of SLAM systems and for monitoring the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test that demonstrates a multi-sensor, modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of LiDAR and GNSS/INS, and that an online localization solution with four to five centimetre accuracy can be achieved based on this pre-generated map and online LiDAR scan matching with a tightly fused inertial system.
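
    As a rough illustration of the map-building step in the road test, the following sketch georeferences each LiDAR scan with its GNSS/INS pose and accumulates a voxel-downsampled point cloud map; the simplified pose format (position plus heading only) and the voxel size are illustrative assumptions, not the study's actual procedure.

```python
# Hypothetical LiDAR + GNSS/INS map accumulation with voxel de-duplication.
import numpy as np

def pose_to_matrix(x, y, z, yaw):
    """GNSS/INS pose (position + heading only, for brevity) -> 4x4 SE(3)."""
    T = np.eye(4)
    c, s = np.cos(yaw), np.sin(yaw)
    T[:3, :3] = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
    T[:3, 3] = [x, y, z]
    return T

def accumulate_map(scans, poses, voxel=0.1):
    """Georeference every scan and keep one point per occupied voxel."""
    world_pts = []
    for pts, pose in zip(scans, poses):
        T = pose_to_matrix(*pose)
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        world_pts.append((homo @ T.T)[:, :3])   # scan into world frame
    cloud = np.vstack(world_pts)
    keys = np.floor(cloud / voxel).astype(int)  # voxel down-sampling
    _, idx = np.unique(keys, axis=0, return_index=True)
    return cloud[idx]

scans = [np.random.rand(500, 3) * 20 for _ in range(3)]
poses = [(0.0, 0.0, 0.0, 0.0), (1.0, 0.1, 0.0, 0.02), (2.0, 0.2, 0.0, 0.04)]
map_cloud = accumulate_map(scans, poses)
print(map_cloud.shape)
```

    Online localization then amounts to matching each live scan against this pre-generated map with an inertial pose prior, which is what the tightly fused system in the road test provides.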