
    Localization Algorithms for GNSS-denied and Challenging Environments

    Get PDF
    In this dissertation, the problem of localization in GNSS-denied and challenging environments is addressed. Specifically, two types of challenging environments are considered: environments containing only low-resolution features and environments containing moving objects. To achieve accurate pose estimates, estimation errors are typically bounded by matching sensor observations against the surrounding environment. These challenging environments, unfortunately, cause difficulties for matching-based methods such as fingerprint matching and ICP. For instance, in environments with low-resolution features, the on-board sensor measurements may match multiple positions on a map, which creates ambiguity; in environments containing moving objects, the accuracy of the localization estimate is degraded by the moving objects during matching. In this dissertation, two sensor-fusion-based strategies are proposed to solve the localization problem for these two types of challenging environments, respectively. For environments with only low-resolution features, such as flight over sea or desert, a multi-agent localization algorithm using pairwise communication with ranging and magnetic anomaly measurements is proposed. A scalable framework is then presented that extends the multi-agent localization algorithm to large groups of agents (e.g., 128 agents) by applying the CI algorithm. Simulation results show that the proposed algorithm handles large group sizes and achieves 10-meter-level localization performance over a 180 km traveled distance, while operating under restrictive communication constraints. For environments containing moving objects, lidar-inertial solutions are proposed and tested. Inspired by the CI algorithm above, a potential solution based on motion estimation and tracking of multiple features is analyzed. To improve the performance and effectiveness of this solution, a lidar-inertial SLAM algorithm is then proposed. In this method, an efficient tightly coupled iterated Kalman filter with a built-in dynamic-object filter is designed as the front end of the SLAM algorithm, and a factor-graph strategy using a scan-context technique for loop-closure detection is used as the back end. The performance of the proposed lidar-inertial SLAM algorithm is evaluated on several datasets collected in environments containing moving objects and compared with state-of-the-art lidar-inertial SLAM algorithms.
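
    The scalable multi-agent framework above relies on the CI algorithm to fuse estimates exchanged between agents. Assuming CI refers to covariance intersection, as is common in multi-agent localization, the following minimal sketch fuses two Gaussian position estimates with unknown cross-correlation; the variable names and the grid search over the weight omega are illustrative choices, not details taken from the dissertation.

        # Minimal sketch of covariance intersection (CI) fusion of two Gaussian
        # estimates (mean, covariance) with unknown cross-correlation. The weight
        # omega is chosen by minimizing the trace of the fused covariance; the
        # scalar grid search is an illustrative choice, not the dissertation's.
        import numpy as np

        def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=101):
            """Fuse (x_a, P_a) and (x_b, P_b) without knowing their correlation."""
            inv_a, inv_b = np.linalg.inv(P_a), np.linalg.inv(P_b)
            best = None
            for omega in np.linspace(0.0, 1.0, n_grid):
                P = np.linalg.inv(omega * inv_a + (1.0 - omega) * inv_b)
                x = P @ (omega * inv_a @ x_a + (1.0 - omega) * inv_b @ x_b)
                if best is None or np.trace(P) < best[0]:
                    best = (np.trace(P), x, P)
            return best[1], best[2]

        # Example: two 2D position estimates of the same agent.
        x_a, P_a = np.array([10.0, 4.0]), np.diag([4.0, 1.0])
        x_b, P_b = np.array([11.0, 5.0]), np.diag([1.0, 9.0])
        x_ci, P_ci = covariance_intersection(x_a, P_a, x_b, P_b)
        print(x_ci, np.trace(P_ci))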

    Localization from semantic observations via the matrix permanent

    Get PDF
    Most approaches to robot localization rely on low-level geometric features such as points, lines, and planes. In this paper, we use object recognition to obtain semantic information from the robot’s sensors and consider the task of localizing the robot within a prior map of landmarks, which are annotated with semantic labels. As object recognition algorithms miss detections and produce false alarms, correct data association between the detections and the landmarks on the map is central to the semantic localization problem. Instead of the traditional vector-based representation, we propose a sensor model, which encodes the semantic observations via random finite sets and enables a unified treatment of missed detections, false alarms, and data association. Our second contribution is to reduce the problem of computing the likelihood of a set-valued observation to the problem of computing a matrix permanent. It is this crucial transformation that allows us to solve the semantic localization problem with a polynomial-time approximation to the set-based Bayes filter. Finally, we address the active semantic localization problem, in which the observer’s trajectory is planned in order to improve the accuracy and efficiency of the localization process. The performance of our approach is demonstrated in simulation and in real environments using deformable-part-model-based object detectors. Robust global localization from semantic observations is demonstrated for a mobile robot, for the Project Tango phone, and on the KITTI visual odometry dataset. Comparisons are made with the traditional lidar-based geometric Monte Carlo localization
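
    The key step described above is reducing the likelihood of a set-valued semantic observation to a matrix permanent. The sketch below computes an exact permanent with Ryser's formula purely for illustration; it is exponential in the matrix size, whereas the paper relies on a polynomial-time approximation that is not reproduced here.

        # Minimal sketch: exact matrix permanent via Ryser's formula, the quantity
        # the set-valued observation likelihood reduces to. This exact version is
        # O(2^n * n) and only illustrative.
        from itertools import combinations
        import numpy as np

        def permanent_ryser(A):
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            total = 0.0
            for k in range(1, n + 1):
                for cols in combinations(range(n), k):
                    # Product over rows of the sum of the selected columns.
                    row_sums = A[:, list(cols)].sum(axis=1)
                    total += (-1) ** k * np.prod(row_sums)
            return (-1) ** n * total

        # Sanity check: the permanent of the 2x2 all-ones matrix is 2.
        print(permanent_ryser(np.ones((2, 2))))  # 2.0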

    A Comprehensive Review on Autonomous Navigation

    Full text link
    The field of autonomous mobile robots has undergone dramatic advancements over the past decades. Despite important milestones, several challenges remain to be addressed. Aggregating the achievements of the robotics community in survey papers is vital to keep track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation; it is therefore necessary to give the role of deep learning in autonomous navigation an appropriate treatment, which this paper also covers. Future work and research gaps are also discussed.

    Implementation of the autonomous functionalities on an electric vehicle platform for research and education

    Get PDF
    Self-driving cars have recently captured the attention of researchers and of the car manufacturing market. Depending on the level of autonomy, the cars are made capable of traversing from one point to another autonomously. To achieve this, sophisticated sensors need to be utilized, and a complex set of algorithms is required to use the sensor data to navigate the vehicle along the desired trajectory. Polaris is an electric vehicle platform provided for research and education purposes at Aalto University. The primary focus of this thesis is to utilize all the sensors provided on Polaris to their full potential, so that essential data from each sensor is made available for further use either by a specific automation algorithm or by a mapping routine. For any autonomous robotic system, the first step towards automation is localization, that is, determining the current position of the robot in a given environment. Different sensors mounted on the platform provide such measurements in different frames of reference. The thesis utilizes a GPS-based localization solution combined with LiDAR data and wheel odometry to perform autonomous tasks. The Robot Operating System (ROS) is used as the software development tool in this work. The autonomous tasks include the determination of global as well as local trajectories. The endpoints of the global trajectories are dictated by a set of predefined GPS waypoints; this is called target-point navigation. A path then needs to be planned that avoids all obstacles. Based on the planned path, a set of velocity commands is issued by the embedded controller, and the velocity commands are then fed to the actuators to move the vehicle along the planned trajectory.
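
    As a concrete illustration of the last step, the sketch below publishes velocity commands as geometry_msgs/Twist messages from a ROS node. The topic name /cmd_vel, the 10 Hz rate, and the node name are common ROS conventions assumed here, not details taken from the thesis.

        #!/usr/bin/env python
        # Minimal sketch of the "velocity command" step: a ROS node that publishes
        # geometry_msgs/Twist messages for the vehicle's base controller.
        import rospy
        from geometry_msgs.msg import Twist

        def drive_forward(speed=0.5, duration=3.0):
            rospy.init_node('velocity_command_sketch')
            pub = rospy.Publisher('/cmd_vel', Twist, queue_size=10)
            rate = rospy.Rate(10)            # 10 Hz command rate
            cmd = Twist()
            cmd.linear.x = speed             # forward velocity in m/s
            cmd.angular.z = 0.0              # no turning
            end_time = rospy.Time.now() + rospy.Duration(duration)
            while not rospy.is_shutdown() and rospy.Time.now() < end_time:
                pub.publish(cmd)
                rate.sleep()
            pub.publish(Twist())             # stop the vehicle

        if __name__ == '__main__':
            drive_forward()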

    Evaluating indoor positioning systems in a shopping mall : the lessons learned from the IPIN 2018 competition

    Get PDF
    The Indoor Positioning and Indoor Navigation (IPIN) conference holds an annual competition in which indoor localization systems from research groups worldwide are evaluated empirically. The objective of this competition is to establish a systematic evaluation methodology with rigorous metrics for both real-time (on-site) and post-processing (off-site) situations, in a realistic environment unfamiliar to the prototype developers. For the IPIN 2018 conference, this competition was held on September 22nd, 2018, in Atlantis, a large shopping mall in Nantes (France). Four competition tracks (two on-site and two off-site) were designed, consisting of several 1 km routes traversing several floors of the mall. Along these paths, 180 points were topographically surveyed to a 10 cm accuracy to serve as ground-truth landmarks, combining theodolite measurements, differential global navigation satellite system (GNSS) measurements, and 3D scanner systems. A total of 34 teams competed. The accuracy score corresponds to the third quartile (75th percentile) of an error metric that combines the horizontal positioning error and the floor detection error. The best results for the on-site tracks showed an accuracy score of 11.70 m (Track 1) and 5.50 m (Track 2), while the best results for the off-site tracks showed an accuracy score of 0.90 m (Track 3) and 1.30 m (Track 4). These results show that it is possible to obtain highly accurate indoor positioning solutions in large, realistic environments using lightweight wearable sensors without deploying any beacons. This paper describes the organization of the tracks, analyzes the methodology used to quantify the results, reviews the lessons learned from the competition, and discusses its future.
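
    A minimal sketch of the scoring described above: the accuracy score is the 75th percentile of a combined per-point error. The 15 m penalty per floor of mismatch follows the usual EvAAL/IPIN convention and is an assumption here, not a value quoted in the abstract.

        # Minimal sketch: third-quartile accuracy score combining horizontal error
        # and floor detection. The 15 m per-floor penalty is an assumed convention.
        import numpy as np

        def accuracy_score(horizontal_errors_m, estimated_floors, true_floors,
                           floor_penalty_m=15.0):
            """Third quartile of horizontal error plus a per-floor mismatch penalty."""
            h = np.asarray(horizontal_errors_m, dtype=float)
            floor_diff = np.abs(np.asarray(estimated_floors) - np.asarray(true_floors))
            combined = h + floor_penalty_m * floor_diff
            return np.percentile(combined, 75)

        # Example: three surveyed points, the last one reported on the wrong floor.
        print(accuracy_score([1.2, 0.8, 2.5], [0, 1, 2], [0, 1, 1]))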

    Environment Search Planning Subject to High Robot Localization Uncertainty

    Get PDF
    As robots find applications in more complex roles, ranging from search and rescue to healthcare and services, they must be robust to greater levels of uncertainty about their localization and their environments. Without consideration of such uncertainties, robots will not be able to compensate accordingly, potentially leading to mission failure or injury to bystanders. This work addresses the task of searching a 2D area while reducing localization uncertainty. In this setting, the environment provides low-uncertainty pose updates from short-range beacons that cover only part of the environment. Otherwise, the robot localizes by dead reckoning, relying on wheel-encoder and gyroscope yaw-rate information; outside of the regions with position updates, localization error therefore grows unbounded over time. The work contributes a Belief Markov Decision Process formulation for solving the search problem and evaluates its performance using Partially Observable Monte Carlo Planning (POMCP). Additionally, the work contributes an approximate Markov Decision Process formulation with a reduced-complexity state representation; the approximate problem is solved using value iteration. To provide a baseline, the Google OR-Tools package is used to solve the travelling salesman problem (TSP). Results are verified by simulating a differential-drive robot in the Gazebo simulation environment. The POMCP results indicate that planning can be tuned to prioritize constraining uncertainty at the cost of increased path length. The MDP formulation provides consistently lower uncertainty with minimal increases in path length over the TSP solution. Both formulations show improved coverage outcomes.
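
    The dead-reckoning behaviour referenced above can be illustrated with a minimal sketch that integrates wheel-encoder distance and gyroscope yaw rate into a 2D pose. The (x, y, theta) state layout and the midpoint-heading integration are illustrative choices, not the thesis' exact model.

        # Minimal sketch: dead reckoning for a differential-drive robot from
        # encoder distance and gyro yaw rate; error grows without bound over time.
        import math

        def dead_reckon_step(x, y, theta, encoder_distance_m, yaw_rate_rad_s, dt_s):
            """Propagate a 2D pose by one encoder/gyro sample."""
            mid_heading = theta + 0.5 * yaw_rate_rad_s * dt_s  # average heading over dt
            x_new = x + encoder_distance_m * math.cos(mid_heading)
            y_new = y + encoder_distance_m * math.sin(mid_heading)
            theta_new = theta + yaw_rate_rad_s * dt_s
            return x_new, y_new, theta_new

        # Example: 0.1 m of encoder travel per step while turning at 0.2 rad/s.
        pose = (0.0, 0.0, 0.0)
        for _ in range(50):
            pose = dead_reckon_step(*pose, encoder_distance_m=0.1,
                                    yaw_rate_rad_s=0.2, dt_s=0.1)
        print(pose)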

    Implementing and Tuning an Autonomous Racing Car Testbed

    Get PDF
    Achieving safe autonomous driving is no longer a distant vision these days, with companies like Uber, Google, and most famously Tesla having successfully deployed self-driving cars around the world. Researchers and engineers have been putting in tremendous effort, and will continue to do so in the coming years, to develop the safe and precise control algorithms and technologies that will be included in future self-driving cars. Besides these well-known autonomous car deployments, some focus has also been put on autonomous racing competitions, for example the Roborace. The fact is that, although significant progress has been made, testing on full-size cars in real environments requires immense financial support, making it impossible for many research groups to enter the game. Consequently, interesting alternatives have appeared, such as F1 Tenth, which challenges students, researchers, and engineers to take part in a low-cost autonomous racing competition while developing control algorithms that rely on sensors and strategies used in real-life applications. This thesis focuses on the comparison of different control algorithms, and their effectiveness, in the racing setting of the F1 Tenth competition. In this thesis, effort was put into developing a robotic autonomous car, based on the Robot Operating System (ROS), that not only meets the specifications of the F1 Tenth rules but also establishes a testbed for future autonomous driving research.

    Inertial learning and haptics for legged robot state estimation in visually challenging environments

    Get PDF
    Legged robots have enormous potential to automate dangerous or dirty jobs because they are capable of traversing a wide range of difficult terrains such as up stairs or through mud. However, a significant challenge preventing widespread deployment of legged robots is a lack of robust state estimation, particularly in visually challenging conditions such as darkness or smoke. In this thesis, I address these challenges by exploiting proprioceptive sensing from inertial, kinematic and haptic sensors to provide more accurate state estimation when visual sensors fail. Four different methods are presented, including the use of haptic localisation, terrain semantic localisation, learned inertial odometry, and deep learning to infer the evolution of IMU biases. The first approach exploits haptics as a source of proprioceptive localisation by comparing geometric information to a prior map. The second method expands on this concept by fusing both semantic and geometric information, allowing for accurate localisation on diverse terrain. Next, I combine new techniques in inertial learning with classical IMU integration and legged robot kinematics to provide more robust state estimation. This is further developed to use only IMU data, for an application entirely different from robotics: 3D reconstruction of bone with a handheld ultrasound scanner. Finally, I present the novel idea of using deep learning to infer the evolution of IMU biases, improving state estimation in exteroceptive systems where vision fails. Legged robots have the potential to benefit society by automating dangerous, dull, or dirty jobs and by assisting first responders in emergency situations. However, there remain many unsolved challenges to the real-world deployment of legged robots, including accurate state estimation in vision-denied environments. The work presented in this thesis takes a step towards solving these challenges and enabling the deployment of legged robots in a variety of applications
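
    One way to see why inferring the evolution of IMU biases matters is the minimal sketch below, which subtracts current bias estimates from raw gyroscope and accelerometer samples before integrating them. The planar (yaw plus forward velocity) simplification and the variable names are assumptions for illustration, not the method developed in the thesis.

        # Minimal sketch: bias-corrected IMU integration. Raw gyro/accelerometer
        # samples are corrected with current bias estimates before integration.
        def integrate_imu_step(yaw, vel, gyro_z, accel_x, gyro_bias, accel_bias, dt):
            """One bias-corrected planar strapdown step (heading and forward speed)."""
            yaw_new = yaw + (gyro_z - gyro_bias) * dt    # corrected gyro -> heading
            vel_new = vel + (accel_x - accel_bias) * dt  # corrected accel -> speed
            return yaw_new, vel_new

        # If a constant 0.01 rad/s gyro bias is left uncorrected, the heading drifts
        # by roughly 0.6 rad over one minute of 100 Hz samples, even when stationary.
        drift = sum(0.01 * 0.01 for _ in range(6000))
        print(drift)  # ~0.6 rad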

    DLL: Direct LIDAR Localization. A map-based localization approach for aerial robots

    Full text link
    This paper presents DLL, a fast direct map-based localization technique using 3D LIDAR for application to aerial robots. DLL implements point-cloud-to-map registration based on non-linear optimization of the distance between the points and the map, thus requiring neither features nor point correspondences. Given an initial pose, the method is able to track the pose of the robot by refining the pose predicted from odometry. Through benchmarks using real datasets and simulations, we show that the method performs much better than Monte Carlo localization methods and achieves precision comparable to other optimization-based approaches while running an order of magnitude faster. The method is also robust to odometric errors. The approach has been implemented under the Robot Operating System (ROS) and is publicly available. (Accepted for IROS 2021; associated code can be downloaded from https://github.com/robotics-upo/dl)
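
    A minimal sketch of the registration idea DLL is built on: minimize the summed map-distance values at the transformed scan points by non-linear optimization. The translation-only 2D parameterization, the dense precomputed distance grid, and the numerical gradient descent below are simplifications for illustration, not DLL's actual solver.

        # Minimal sketch: translation-only point-cloud-to-map registration by
        # descending the sum of distance-field values at the shifted scan points.
        import numpy as np

        def scan_cost(points, offset, dist_field, resolution):
            """Sum of map-distance values sampled at the shifted scan points."""
            shifted = points + offset
            idx = np.clip((shifted / resolution).astype(int), 0,
                          np.array(dist_field.shape) - 1)
            return dist_field[idx[:, 0], idx[:, 1]].sum()

        def register(points, dist_field, resolution=0.1, step=0.05, iters=200):
            """Estimate a 2D translation by numerical gradient descent."""
            offset = np.zeros(2)
            for _ in range(iters):
                grad = np.zeros(2)
                for k in range(2):                      # finite-difference gradient
                    e = np.zeros(2)
                    e[k] = resolution
                    grad[k] = (scan_cost(points, offset + e, dist_field, resolution)
                               - scan_cost(points, offset - e, dist_field, resolution)
                               ) / (2.0 * resolution)
                offset -= step * grad / len(points)     # average over scan points
            return offset

        # Toy usage: the "map" is the plane x = 1 m; the scan is offset by +0.3 m in x.
        res = 0.1
        xs = np.arange(0.0, 5.0, res)
        dist_field = np.abs(xs[:, None] - 1.0) + 0.0 * xs[None, :]
        scan = np.array([[1.3, y] for y in np.arange(0.5, 4.5, 0.5)])
        print(register(scan, dist_field, res))          # converges near (-0.3, 0)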