
    Machine Learning in Appearance-based Robot Self-localization

    An appearance-based robot self-localization problem is considered in the machine learning framework. The appearance space consists of all images that can be captured by the robot's visual system over all possible robot locations. Using recent manifold learning and deep learning techniques, we propose a new geometrically motivated solution based on training data consisting of a finite set of images captured at known robot locations. The solution includes estimating the localization mapping from the appearance space to the robot location space, as well as the inverse mapping used to model visual image features. The latter allows the robot localization problem to be solved as a Kalman filtering problem. Comment: 7 pages, 3 figures, ICMLA 2017 conference.
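    As a rough illustration of that last step, the sketch below runs one extended Kalman filter iteration in which a learned inverse mapping g (location to image features) serves as the observation model. The regressor g, the finite-difference Jacobian, and the random-walk motion model are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

def ekf_step(x, P, z, g, Q, R, eps=1e-4):
    """One EKF iteration with a learned observation model: 'g' is assumed
    to be a trained regressor from robot location to image features (the
    paper's inverse mapping); its Jacobian is approximated here by central
    finite differences, and a random-walk motion model is assumed."""
    # Predict under the (assumed) random-walk motion model
    x_pred, P_pred = x, P + Q

    # Numerical Jacobian of the feature model at the predicted location
    n, m = len(x_pred), len(g(x_pred))
    H = np.zeros((m, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        H[:, j] = (g(x_pred + dx) - g(x_pred - dx)) / (2 * eps)

    # Update with the observed image-feature vector z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - g(x_pred))
    P_new = (np.eye(n) - K @ H) @ P_pred
    return x_new, P_new
```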

    Development of a tabletop guidance system for educational robots

    The guidance of a vehicle in an outdoor setting is typically implemented using a Real Time Kinematic Global Positioning System (RTK-GPS), potentially enhanced by auxiliary sensors such as electronic compasses, rotation encoders, gyroscopes, and vision systems. Since GPS does not function in the indoor settings where educational competitions are often held, an alternative guidance system was developed. This article describes a guidance method built around a laser-based localization system, which uses a robot-borne single laser transmitter spinning in a horizontal plane at an angular velocity of up to 81 radians per second. Sensor arrays positioned in the corners of a flat rectangular table with dimensions of 1.22 m × 1.83 m detected the laser beam passages. The relative time differences among the detections of the laser passages indicated the angles of the sensors with respect to the laser transmitter on the robot; these angles were translated into Cartesian coordinates. The guidance of the robot was implemented using a uni-directional wireless serial connection and position feedback from the localization system. Three experiments were conducted to test the system: 1) the accuracy of the static localization system was determined while the robot stood still; in this test the average error among valid measurements was smaller than 0.3 %, although up to 3.7 % of the measurements were invalid due to several causes. 2) The accuracy of the guidance system was assessed while the robot followed a straight line; the average deviation from this line was 3.6 mm over a path of approximately 0.9 m. 3) The overall performance of the guidance system was studied while the robot followed a complex path consisting of 33 sub-paths. The conclusion was that the system worked reasonably accurately, unless the robot came in close proximity
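    The timing-to-angle-to-position geometry described above can be sketched as follows: because the transmitter spins at a known angular velocity, the time offsets between laser passages give the angles between the corner sensors as seen from the robot, and the position follows by angular resection against the known sensor coordinates. The function names, the counter-clockwise spin, and the assumption that all four timestamps come from one revolution and are already associated with their sensors are illustrative, not from the article.

```python
import numpy as np
from scipy.optimize import least_squares

# Corner sensor positions on the 1.22 m x 1.83 m table (metres)
SENSORS = np.array([[0.0, 0.0], [1.22, 0.0], [1.22, 1.83], [0.0, 1.83]])
OMEGA = 81.0  # spin rate of the laser transmitter, rad/s (upper bound)

def wrap(a):
    """Wrap angles to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def residuals(p, theta):
    """Mismatch between predicted and measured inter-sensor angles for a
    candidate robot position p = (x, y)."""
    bearings = np.arctan2(SENSORS[:, 1] - p[1], SENSORS[:, 0] - p[0])
    return wrap((bearings - bearings[0]) - theta)

def locate(timestamps, p0=(0.61, 0.915)):
    """Estimate the robot position from one revolution's laser-passage
    timestamps (seconds), one per sensor, assuming counter-clockwise spin."""
    t = np.asarray(timestamps)
    theta = (OMEGA * (t - t[0])) % (2 * np.pi)  # angle swept since sensor 0
    return least_squares(residuals, p0, args=(theta,)).x
```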

    Accurate position tracking with a single UWB anchor

    Accurate localization and tracking are fundamental requirements for robotic applications. Localization systems such as GPS, optical tracking, and simultaneous localization and mapping (SLAM) are used in daily life activities, research, and commercial applications. Ultra-wideband (UWB) technology provides another avenue for accurately locating devices both indoors and outdoors. In this paper, we study a localization solution with a single UWB anchor instead of the traditional multi-anchor setup. Besides the challenge of a single UWB ranging source, the only other sensor we require is a low-cost 9-DoF inertial measurement unit (IMU). Under this configuration, we propose continuous monitoring of UWB range changes to estimate the robot's speed when it moves along a line. Combining this speed estimate with orientation estimates from the IMU makes the system temporally observable, and we use an Extended Kalman Filter (EKF) to estimate the pose of the robot. With our solution, we can effectively correct the accumulated error and maintain accurate tracking of a moving robot. Comment: Accepted by ICRA 2020.
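    A minimal sketch of such a filter is given below, assuming a state of planar position plus forward speed, with the heading taken directly from the IMU and the anchor position known; the observability analysis and the filter design in the paper are more involved than this.

```python
import numpy as np

def ekf_uwb_step(x, P, r_meas, psi, dt, anchor, Q, sigma_r):
    """One EKF step for single-anchor UWB + IMU tracking (a sketch, not
    the authors' exact filter). State x = [px, py, v]; psi is the IMU
    heading, r_meas the measured range to the known anchor position."""
    px, py, v = x
    # Predict: move with speed v along the IMU heading
    x_pred = np.array([px + v * dt * np.cos(psi),
                       py + v * dt * np.sin(psi),
                       v])
    F = np.array([[1.0, 0.0, dt * np.cos(psi)],
                  [0.0, 1.0, dt * np.sin(psi)],
                  [0.0, 0.0, 1.0]])
    P_pred = F @ P @ F.T + Q

    # Range measurement model h(x) = ||p - anchor|| and its Jacobian
    d = x_pred[:2] - anchor
    r_pred = np.linalg.norm(d)
    H = np.array([[d[0] / r_pred, d[1] / r_pred, 0.0]])

    # Update: the range innovation also corrects the speed estimate
    S = H @ P_pred @ H.T + sigma_r ** 2
    K = P_pred @ H.T / S
    x_new = x_pred + (K * (r_meas - r_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```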

    Spatial Concept Acquisition for a Mobile Robot that Integrates Self-Localization and Unsupervised Word Discovery from Spoken Sentences

    In this paper, we propose a novel unsupervised learning method for the lexical acquisition of words related to places visited by robots, from continuous human speech signals. We address the problem of learning novel words by a robot that has no prior knowledge of these words beyond a primitive acoustic model. Further, we propose a method that allows a robot to effectively use the learned words and their meanings for self-localization tasks. The proposed method, the nonparametric Bayesian spatial concept acquisition method (SpCoA), integrates a generative model for self-localization with unsupervised word segmentation of uttered sentences via latent variables related to the spatial concept. We implemented SpCoA on SIGVerse, a simulation environment, and on TurtleBot2, a mobile robot in a real environment, and conducted experiments to evaluate its performance. The experimental results showed that SpCoA enabled the robot to acquire the names of places from spoken sentences, and that the robot could effectively utilize the acquired spatial concepts to reduce the uncertainty in self-localization. Comment: This paper was accepted by the IEEE Transactions on Cognitive and Developmental Systems (04-May-2016).
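    How a learned spatial concept can feed back into self-localization might be sketched as follows: each concept pairs a Gaussian over positions with a distribution over place words, so an uttered word reweights the robot's localization hypotheses. The concepts, words, and numbers below are toy values for illustration, not SpCoA's learned model or its inference procedure.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy spatial concepts: a Gaussian over positions paired with a
# distribution over (already segmented) place words. Illustrative only.
CONCEPTS = [
    {"mu": np.array([1.0, 2.0]), "cov": 0.3 * np.eye(2),
     "words": {"kitchen": 0.9, "sink": 0.1}},
    {"mu": np.array([5.0, 1.0]), "cov": 0.5 * np.eye(2),
     "words": {"door": 0.8, "entrance": 0.2}},
]

def word_likelihood(word, pos):
    """p(word | position), marginalized over concepts: the coupling that
    lets an uttered place name reduce self-localization uncertainty."""
    return sum(c["words"].get(word, 1e-6)
               * multivariate_normal.pdf(pos, c["mu"], c["cov"])
               for c in CONCEPTS)

def reweight_particles(particles, weights, word):
    """Update particle-filter weights with an uttered place word."""
    w = weights * np.array([word_likelihood(word, p) for p in particles])
    return w / w.sum()
```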

    X-View: Graph-Based Semantic Multi-View Localization

    Global registration of multi-view robot data is a challenging task. Appearance-based global localization approaches often fail under drastic view-point changes, as their representations have limited view-point invariance. This work is based on the idea that human-made environments contain rich semantics that can be used to disambiguate global localization. Here, we present X-View, a Multi-View Semantic Global Localization system. X-View leverages semantic graph descriptor matching for global localization, enabling localization under drastically different view-points. While the approach is general in terms of the semantic input data, we present and evaluate an implementation on visual data. We demonstrate the system in experiments on the publicly available SYNTHIA dataset, on a realistic urban dataset recorded with a simulator, and on real-world StreetView data. Our findings show that X-View is able to globally localize aerial-to-ground and ground-to-ground robot data with drastically different view-points. Our approach achieves an accuracy of up to 85 % on global localizations in the multi-view case, while the benchmarked baseline appearance-based methods reach up to 75 %.
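    A toy version of semantic graph descriptor matching in this spirit: describe a node of a semantic graph by the bag of label sequences reached by fixed-depth random walks, then compare descriptors by their overlap. This drastically simplifies the paper's descriptor extraction and matching pipeline and is only meant to convey the idea.

```python
import random
from collections import Counter

def walk_descriptor(graph, labels, start, n_walks=50, depth=3, seed=0):
    """Describe 'start' by the multiset of semantic-label sequences seen
    on fixed-depth random walks. 'graph' maps a node to its neighbour
    list; 'labels' maps a node to its semantic class (e.g. "building")."""
    rng = random.Random(seed)
    walks = Counter()
    for _ in range(n_walks):
        node, seq = start, [labels[start]]
        for _ in range(depth):
            node = rng.choice(graph[node])
            seq.append(labels[node])
        walks[tuple(seq)] += 1
    return walks

def similarity(d1, d2):
    """Overlap of two descriptors: shared walk counts."""
    return sum(min(c, d2[w]) for w, c in d1.items() if w in d2)

# Example on a three-node semantic graph
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
labels = {0: "building", 1: "road", 2: "vegetation"}
print(similarity(walk_descriptor(graph, labels, 0),
                 walk_descriptor(graph, labels, 1)))
```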

    A robust extended H-infinity filtering approach to multi-robot cooperative localization in dynamic indoor environments

    Multi-robot cooperative localization is an essential task for a team of mobile robots working in an unknown environment. Based on real-time laser scanning data interaction, a robust approach is proposed to obtain optimal multi-robot relative observations using the Metric-based Iterative Closest Point (MbICP) algorithm, which makes it possible to use the surrounding environment information directly instead of placing localization marks on the robots. To handle the inherent non-linearities in the multi-robot kinematic models and the relative observations, a robust extended H∞ filtering (REHF) approach is developed for the multi-robot cooperative localization system, which can handle the non-Gaussian process and measurement noise arising in robot navigation through unknown dynamic scenes. Compared with a conventional multi-robot localization system based on extended Kalman filtering (EKF), the proposed filtering algorithm provides superior performance in a dynamic indoor environment with outlier disturbances. Both numerical experiments and experiments with Pioneer3-DX robots show that the proposed localization scheme is effective in improving both the accuracy and the reliability of localization in a complex environment. This work was supported in part by the National Natural Science Foundation of China under Grants 61075094, 61035005, and 61134009.
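    For concreteness, one standard form of the H∞ measurement update (an information-form variant with an identity performance weighting, in the style of Simon's textbook treatment; the authors' REHF is more elaborate) is sketched below. The extra -1/gamma^2 term is what distinguishes it from the EKF update and buys robustness to non-Gaussian disturbances; letting gamma grow recovers the EKF.

```python
import numpy as np

def hinf_update(x_pred, P_pred, z, h, H, R, gamma):
    """Extended H-infinity measurement update (sketch). h is the nonlinear
    observation function, H its Jacobian at x_pred, and gamma the bound on
    the worst-case estimation-error gain."""
    n = len(x_pred)
    Ri = np.linalg.inv(R)
    # Information-form update with the H-infinity correction term
    M = np.linalg.inv(P_pred) - np.eye(n) / gamma ** 2 + H.T @ Ri @ H
    # The chosen gamma must keep M positive definite for the bound to hold
    if np.any(np.linalg.eigvalsh(M) <= 0):
        raise ValueError("gamma too small: H-infinity bound infeasible")
    P_upd = np.linalg.inv(M)
    K = P_upd @ H.T @ Ri
    x_upd = x_pred + K @ (z - h(x_pred))
    return x_upd, P_upd
```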