
    Vision-based localization methods under GPS-denied conditions

    This paper reviews vision-based localization methods in GPS-denied environments and classifies the mainstream methods into Relative Vision Localization (RVL) and Absolute Vision Localization (AVL). For RVL, we discuss the broad application of optical flow in feature extraction-based Visual Odometry (VO) solutions and introduce advanced optical flow estimation methods. For AVL, we review recent advances in Visual Simultaneous Localization and Mapping (VSLAM) techniques, from optimization-based methods to Extended Kalman Filter (EKF) based methods. We also introduce the application of offline map registration and lane vision detection schemes to achieve Absolute Vision Localization. This paper compares the performance and applications of mainstream methods for visual localization and provides suggestions for future studies. Comment: 32 pages, 15 figures.
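
    As a concrete illustration of the feature-based visual odometry pipeline discussed above, the sketch below estimates frame-to-frame camera motion from sparse optical flow with OpenCV. It is a minimal example under assumed inputs (grayscale frames and a known intrinsics matrix K), not an implementation from the review, and all parameter values are illustrative.

```python
import cv2
import numpy as np

def relative_pose_from_flow(prev_gray, curr_gray, K):
    """Estimate relative camera motion between two frames via sparse optical flow."""
    # Detect corners in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=8)
    # Track them into the current frame with pyramidal Lucas-Kanade optical flow.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts_prev, None)
    good = status.ravel() == 1
    p0, p1 = pts_prev[good], pts_curr[good]
    # Recover the essential matrix and decompose it into rotation and translation;
    # with a monocular camera the translation is only known up to scale.
    E, inliers = cv2.findEssentialMat(p0, p1, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p0, p1, K, mask=inliers)
    return R, t
```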

    Localization and Mapping for Self-Driving Vehicles: A Survey

    The upsurge of autonomous vehicles in the automobile industry will lead to better driving experiences while also enabling users to solve challenging navigation problems. Reaching such capabilities will require significant technological attention and the flawless execution of various complex tasks, one of which is ensuring robust localization and mapping. Recent surveys have not provided a meaningful and comprehensive description of the current approaches in this field. Accordingly, this review is intended to provide adequate coverage of the problems affecting autonomous vehicles in this area, by examining the most recent methods for mapping and localization as well as related feature extraction and data security problems. First, a discussion of the contemporary methods of extracting relevant features from equipped sensors and their categorization as semantic, non-semantic, and deep learning methods is presented. We conclude that representativeness, low cost, and accessibility are crucial constraints in the choice of the methods to be adopted for localization and mapping tasks. Second, the survey focuses on methods to build a vehicle’s environment map, considering both the commercial and the academic solutions available. The analysis distinguishes between two types of environment, known and unknown, and develops solutions for each case. Third, the survey explores different approaches to vehicle localization and classifies them according to their mathematical characteristics and priorities. Each section concludes by presenting the related challenges and some future directions. The article also highlights the security problems likely to be encountered in self-driving vehicles, with an assessment of possible defense mechanisms that could prevent security attacks in vehicles. Finally, the article ends with a debate on the potential impacts of autonomous driving, spanning energy consumption and emission reduction, sound and light pollution, integration into smart cities, infrastructure optimization, and software refinement. This thorough investigation aims to foster a comprehensive understanding of the diverse implications of autonomous driving across various domains.

    MRS Drone: A Modular Platform for Real-World Deployment of Aerial Multi-Robot Systems

    This paper presents a modular autonomous Unmanned Aerial Vehicle (UAV) platform called the Multi-robot Systems (MRS) Drone that can be used in a large range of indoor and outdoor applications. The MRS Drone features unique modularity with respect to changes in actuators, frames, and sensory configuration. As the name suggests, the platform is specially tailored for deployment within an MRS group. The MRS Drone contributes to the state of the art of UAV platforms by allowing smooth real-world deployment of multiple aerial robots, as well as by outperforming other platforms with its modularity. For real-world multi-robot deployment in various applications, the platform is easy to both assemble and modify. Moreover, it is accompanied by a realistic simulator to enable safe pre-flight testing and a smooth transition to complex real-world experiments. In this manuscript, we present the mechanical and electrical designs, software architecture, and technical specifications to build a fully autonomous multi-UAV system. Finally, we demonstrate the full capabilities and the unique modularity of the MRS Drone in various real-world applications that required a diverse range of platform configurations. Comment: 49 pages, 39 figures, accepted for publication in the Journal of Intelligent & Robotic Systems.

    PNT cyber resilience: a Lab2Live observer-based approach, Report 1: GNSS resilience and identified vulnerabilities. Technical Report 1

    Global navigation satellite systems (GNSS) such as GPS and Galileo are vital sources of positioning, navigation and timing (PNT) information for vehicles. This information is of critical importance for connected autonomous vehicles (CAVs) due to their dependence on it for localisation, route planning and situational awareness. A downside of relying solely on GNSS for PNT is that the signal arriving from navigation satellites in space is weak, and currently there is no authentication included in the civilian GNSS services adopted in the automotive industry. This means that cyber-attacks against the GNSS signal via jamming or spoofing are attractive to adversaries due to the potentially high impact they can achieve. This report reviews the vulnerabilities of GNSS services for CAVs (a summary is shown in Figure 1), as well as detection and mitigation techniques, summarises the opinions on PNT cyber testing sourced from a select group of experts, and finishes with a description of the associated lab-based and real-world feasibility study and proposed research methodology.
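
    One generic idea among the detection techniques such reports survey is to cross-check each GNSS fix against a short-horizon inertial dead-reckoning prediction and flag position jumps that the vehicle dynamics cannot explain. The sketch below is a minimal, hypothetical example of such a consistency gate; the function name, inputs and threshold are illustrative assumptions and are not taken from the report.

```python
import numpy as np

def gnss_consistency_check(predicted_pos, gnss_pos, predicted_cov, gate_chi2=9.21):
    """Flag a GNSS fix as suspect if it disagrees with an inertial prediction.

    predicted_pos : (2,) dead-reckoned east/north position [m]
    gnss_pos      : (2,) position reported by the GNSS receiver [m]
    predicted_cov : (2, 2) covariance of the prediction error [m^2]
    gate_chi2     : chi-square gate, here the 99% value for 2 degrees of freedom
    """
    innovation = gnss_pos - predicted_pos
    # Mahalanobis distance of the innovation; a large value indicates a jump
    # inconsistent with the vehicle motion (possible spoofing or severe multipath).
    d2 = innovation @ np.linalg.solve(predicted_cov, innovation)
    return d2 > gate_chi2
```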

    MEMS Accelerometers

    Micro-electro-mechanical system (MEMS) devices are widely used for inertia, pressure, and ultrasound sensing applications. Research on integrated MEMS technology has undergone extensive development driven by the requirements of a compact footprint, low cost, and increased functionality. Accelerometers are among the most widely used sensors implemented in MEMS technology. MEMS accelerometers are showing a growing presence in almost all industries ranging from automotive to medical. A traditional MEMS accelerometer employs a proof mass suspended by springs, which displaces in response to an external acceleration. A single proof mass can be used for one- or multi-axis sensing. A variety of transduction mechanisms have been used to detect the displacement, including capacitive, piezoelectric, thermal, tunneling, and optical mechanisms. Capacitive accelerometers are widely used due to their DC measurement interface, thermal stability, reliability, and low cost. However, they are sensitive to electromagnetic field interference and perform poorly in high-end applications (e.g., precise attitude control for satellites). Over the past three decades, steady progress has been made in the area of optical accelerometers for high-performance and high-sensitivity applications, but several challenges, such as chip-scale integration, scaling, and low bandwidth, are still to be tackled by researchers and engineers to fully realize opto-mechanical accelerometers.
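
    The transduction chain described above (acceleration to proof-mass displacement to differential capacitance change) can be made concrete with a toy static model. The parameter values below are illustrative round numbers, not those of any particular device.

```python
# Illustrative (not device-specific) parameters for a capacitive MEMS accelerometer.
M = 2e-9           # proof mass [kg]
K_SPRING = 5.0     # suspension spring stiffness [N/m]
AREA = 1e-7        # electrode overlap area [m^2]
GAP = 2e-6         # nominal electrode gap [m]
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def differential_capacitance(accel):
    """Static response: acceleration -> proof-mass displacement -> delta-C."""
    x = M * accel / K_SPRING           # Hooke's law: k * x = m * a
    c_plus = EPS0 * AREA / (GAP - x)   # the gap narrows on one side...
    c_minus = EPS0 * AREA / (GAP + x)  # ...and widens on the other
    return c_plus - c_minus            # differential readout rejects common-mode drift

for a in (0.0, 9.81, 5 * 9.81):
    print(f"a = {a:6.2f} m/s^2 -> delta-C = {differential_capacitance(a):.3e} F")
```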

    Inertial learning and haptics for legged robot state estimation in visually challenging environments

    Legged robots have enormous potential to automate dangerous or dirty jobs because they are capable of traversing a wide range of difficult terrains, such as up stairs or through mud. However, a significant challenge preventing widespread deployment of legged robots is a lack of robust state estimation, particularly in visually challenging conditions such as darkness or smoke. In this thesis, I address these challenges by exploiting proprioceptive sensing from inertial, kinematic and haptic sensors to provide more accurate state estimation when visual sensors fail. Four different methods are presented: haptic localisation, terrain semantic localisation, learned inertial odometry, and deep learning to infer the evolution of IMU biases. The first approach exploits haptics as a source of proprioceptive localisation by comparing geometric information to a prior map. The second method expands on this concept by fusing both semantic and geometric information, allowing for accurate localisation on diverse terrain. Next, I combine new techniques in inertial learning with classical IMU integration and legged robot kinematics to provide more robust state estimation. This is further developed to use only IMU data, for an application entirely different from robotics: 3D reconstruction of bone with a handheld ultrasound scanner. Finally, I present the novel idea of using deep learning to infer the evolution of IMU biases, improving state estimation in exteroceptive systems where vision fails. Legged robots have the potential to benefit society by automating dangerous, dull, or dirty jobs and by assisting first responders in emergency situations. However, there remain many unsolved challenges to the real-world deployment of legged robots, including accurate state estimation in vision-denied environments. The work presented in this thesis takes a step towards solving these challenges and enabling the deployment of legged robots in a variety of applications.
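
    For context, the sketch below shows a single textbook strapdown propagation step with explicit gyroscope and accelerometer bias terms; errors in these biases integrate directly into orientation, velocity and position drift, which is what motivates inferring their evolution with learning. This is a generic illustration under assumed conventions (world-frame gravity, first-order orientation update), not the estimator developed in the thesis.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity [m/s^2]

def skew(w):
    """Skew-symmetric matrix such that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def imu_propagate(R, v, p, gyro, accel, bg, ba, dt):
    """One strapdown integration step with explicit IMU biases.

    R, v, p     : orientation (3x3), velocity (3,), position (3,) in the world frame
    gyro, accel : raw IMU measurements in the body frame
    bg, ba      : current gyroscope / accelerometer bias estimates
    """
    w = gyro - bg                              # bias-corrected angular rate
    a = accel - ba                             # bias-corrected specific force
    R_next = R @ (np.eye(3) + skew(w) * dt)    # first-order orientation update
    v_next = v + (R @ a + GRAVITY) * dt
    p_next = p + v * dt + 0.5 * (R @ a + GRAVITY) * dt ** 2
    return R_next, v_next, p_next
```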

    Optimising mobile laser scanning for underground mines

    Despite several technological advancements, underground mines still largely rely on visual inspections or discretely placed direct-contact measurement sensors for routine monitoring. Such approaches are manual and often yield inconclusive, unreliable and unscalable results, in addition to exposing mine personnel to field hazards. Mobile laser scanning (MLS) promises an automated approach that can generate comprehensive information by accurately capturing large-scale 3D data. Currently, the application of MLS has remained relatively limited in mining due to challenges in the post-registration of scans and the unavailability of suitable processing algorithms to provide a fully automated mapping solution. Additionally, constraints such as the absence of a spatial positioning network and the deficiency of distinguishable features in underground mining spaces pose challenges in mobile mapping. This thesis aims to address these challenges in mine inspections by optimising different aspects of MLS: (1) collection of large-scale registered point cloud scans of underground environments, (2) geological mapping of structural discontinuities, and (3) inspection of structural support features. Firstly, a spatial positioning network was designed using novel three-dimensional unique identifier (3DUID) tags and a 3D registration workflow (3DReG) to accurately obtain georeferenced and co-registered point cloud scans, enabling multi-temporal mapping. Secondly, two fully automated methods were developed for mapping structural discontinuities from point cloud scans: clustering on local point descriptors (CLPD) and amplitude and phase decomposition (APD). These methods were tested on both surface and underground rock masses for discontinuity characterisation and kinematic analysis of the failure types. The developed algorithms significantly outperformed existing approaches, including the conventional method of compass and tape measurements. Finally, different machine learning approaches were used to automate the recognition of structural support features, i.e. roof bolts, from point clouds in a computationally efficient manner. Mapping roof bolts from a scanned point cloud provided insight into their installation pattern, underpinning the applicability of laser scanning for rapid roof support inspection. Overall, the outcomes of this study lead to reduced human involvement in field assessments of underground mines using MLS, demonstrating its potential for routine multi-temporal monitoring.
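
    To illustrate the co-registration step that such a workflow automates, the sketch below refines the alignment of two overlapping scans with generic point-to-point ICP in Open3D. It is not the 3DUID/3DReG workflow from the thesis; the file paths, voxel size and correspondence distance are placeholder assumptions.

```python
import numpy as np
import open3d as o3d

def register_scans(source_path, target_path, voxel=0.05, max_corr_dist=0.2):
    """Downsample two overlapping scans and refine their alignment with ICP."""
    source = o3d.io.read_point_cloud(source_path).voxel_down_sample(voxel)
    target = o3d.io.read_point_cloud(target_path).voxel_down_sample(voxel)
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_corr_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    # 4x4 rigid transform that maps the source scan into the target scan's frame.
    return result.transformation
```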

    Terrain sensing and estimation for dynamic outdoor mobile robots

    Thesis (S.M.) -- Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2007. Includes bibliographical references (p. 120-125). By Christopher Charles Ward.
    In many applications, mobile robots are required to travel on outdoor terrain at high speed. Compared to traditional low-speed, laboratory-based robots, outdoor scenarios pose increased perception and mobility challenges which must be considered to achieve high performance. Additionally, high-speed driving produces dynamic robot-terrain interactions which are normally negligible in low-speed driving. This thesis presents algorithms for estimating wheel slip and detecting robot immobilization on outdoor terrain, and for estimating the traversed terrain profile and classifying terrain type. Both sets of algorithms utilize common onboard sensors. Two methods are presented for robot immobilization detection. The first method utilizes a dynamic vehicle model to estimate robot velocity and explicitly estimate longitudinal wheel slip. The vehicle model utilizes a novel simplified tire traction/braking force model in addition to estimating external resistive disturbance forces acting on the robot. The dynamic model is combined with sensor measurements in an extended Kalman filter framework. A preliminary algorithm for adapting the tire model parameters is presented. The second, model-free method takes a signal recognition-based approach, analyzing inertial measurements to detect robot immobilization. Both approaches are experimentally validated on a robotic platform traveling on a variety of outdoor terrains. Two detector fusion techniques are proposed and experimentally validated which combine multiple detectors to increase detection speed and accuracy. An algorithm is presented to classify outdoor terrain for high-speed mobile robots using a suspension-mounted accelerometer. The algorithm utilizes a dynamic vehicle model to estimate the terrain profile and classifies the terrain based on the spatial frequency components of the estimated profile. The classification algorithm is validated using experimental results collected with a commercial automobile driving in real-world conditions.
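
    The frequency-based classification idea can be sketched as follows: take the spatial spectrum of the estimated terrain profile and compare the energy in low- and high-frequency bands. The band edges, energy ratios and class labels below are illustrative assumptions, not the calibrated values from the thesis.

```python
import numpy as np

def classify_terrain(profile, spacing, smooth_band=0.5, rough_band=3.0):
    """Classify a terrain profile by its spatial-frequency content.

    profile : estimated terrain height samples along the path [m]
    spacing : distance between consecutive samples [m]
    """
    heights = np.asarray(profile, dtype=float)
    spectrum = np.abs(np.fft.rfft(heights - heights.mean())) ** 2
    freqs = np.fft.rfftfreq(len(heights), d=spacing)  # spatial frequency [cycles/m]
    total = spectrum.sum() + 1e-12
    low_ratio = spectrum[freqs < smooth_band].sum() / total
    high_ratio = spectrum[freqs >= rough_band].sum() / total
    if high_ratio > 0.3:
        return "rough (e.g. gravel)"
    if low_ratio > 0.7:
        return "smooth (e.g. pavement)"
    return "intermediate"
```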

    Advances in Automated Driving Systems

    Electrification, automation of vehicle control, digitalization and new mobility are the mega-trends in automotive engineering, and they are strongly connected. While many demonstrations of highly automated vehicles have been made worldwide, many challenges remain in bringing automated vehicles to the market for private and commercial use. The main challenges are as follows: reliable machine perception; accepted standards for vehicle-type approval and homologation; verification and validation of functional safety, especially for SAE Level 3+ systems; legal and ethical implications; acceptance of vehicle automation by occupants and society; interaction between automated and human-controlled vehicles in mixed traffic; human–machine interaction and usability; manipulation, misuse and cyber-security; and the system costs of hardware and software and the associated development effort. This Special Issue was prepared in the years 2021 and 2022 and includes 15 papers with original research related to recent advances in the aforementioned challenges. The topics of this Special Issue cover: machine perception for SAE L3+ driving automation; trajectory planning and decision-making in complex traffic situations; X-by-Wire system components; verification and validation of SAE L3+ systems; misuse, manipulation and cybersecurity; human–machine interactions, driver monitoring and driver-intention recognition; road infrastructure measures for the introduction of SAE L3+ systems; and solutions for interactions between human- and machine-controlled vehicles in mixed traffic.