
    Near minimum time path planning for bearing-only localisation and mapping

    The main contribution of this paper is an algorithm for integrating motion planning and simultaneous localisation and mapping (SLAM). The accuracy of the maps and robot locations computed using SLAM depends strongly on the characteristics of the environment, for example feature density, as well as on the speed and direction of the robot's motion. Appropriate control of the robot motion is particularly important in bearing-only SLAM, where information from a moving sensor is essential. In this paper, a near-minimum-time path planning algorithm with a finite planning horizon is proposed for bearing-only SLAM. The objective of the algorithm is to achieve a predefined mapping precision in minimum time while maintaining acceptable vehicle location uncertainty. Simulation results have shown the effectiveness of the proposed method. © 2005 IEEE
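
    Read as an optimisation problem, the stated objective can be formalised roughly as follows (a hedged sketch; the symbols are ours, not the paper's). Over a finite planning horizon of N steps with control sequence u_{1:N}:

        \min_{u_{1:N}} \; T(u_{1:N})
        \quad \text{subject to} \quad
        \operatorname{tr}\big(P^{\mathrm{map}}_{N}\big) \le \varepsilon_{\mathrm{map}},
        \qquad
        \operatorname{tr}\big(P^{\mathrm{veh}}_{k}\big) \le \varepsilon_{\mathrm{veh}}, \quad k = 1, \dots, N,

    where T is the traversal time and P^map, P^veh are the map and vehicle blocks of the SLAM covariance, so the planner seeks the fastest trajectory that still meets the mapping-precision and vehicle-uncertainty bounds.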

    Active SLAM using model predictive control and attractor based exploration

    Active SLAM poses the challenge for an autonomous robot of planning efficient paths simultaneously with the SLAM process. The uncertainties of the robot, map, and sensor measurements, together with the robot's dynamic and motion constraints, need to be considered in the planning process. In this paper, the active SLAM problem is formulated as an optimal trajectory planning problem. A novel technique is introduced that combines an attractor with local planning strategies such as Model Predictive Control (also known as receding-horizon control) to solve this problem. An attractor provides high-level task intentions and incorporates global information about the environment for the local planner, thereby eliminating the need for costly global planning over longer horizons. It is demonstrated that trajectory planning with an attractor results in improved performance over systems that use local planning alone. © 2006 IEEE
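
    A minimal sketch of the receding-horizon idea described above, in Python; the unicycle model, candidate actions, cost weights and attractor handling are illustrative assumptions, not taken from the paper:

        # Receding-horizon planning with an attractor term (illustrative toy).
        import itertools
        import numpy as np

        def rollout(pose, controls, dt=1.0):
            """Integrate a simple unicycle model over a control sequence."""
            x, y, th = pose
            poses = []
            for v, w in controls:
                th += w * dt
                x += v * np.cos(th) * dt
                y += v * np.sin(th) * dt
                poses.append((x, y, th))
            return poses

        def cost(poses, attractor, landmarks):
            """Trade pulling toward the attractor against staying near mapped
            landmarks (a crude stand-in for keeping localisation uncertainty low)."""
            attract = np.linalg.norm(np.array(poses[-1][:2]) - attractor)
            info = sum(min(np.linalg.norm(np.array(p[:2]) - lm) for lm in landmarks)
                       for p in poses)
            return attract + 0.1 * info

        def plan_step(pose, attractor, landmarks, horizon=3):
            """Enumerate short control sequences; apply only the first action."""
            actions = [(1.0, -0.4), (1.0, 0.0), (1.0, 0.4)]
            best = min(itertools.product(actions, repeat=horizon),
                       key=lambda seq: cost(rollout(pose, seq), attractor, landmarks))
            return best[0]

        pose = (0.0, 0.0, 0.0)
        landmarks = [np.array([2.0, 1.0]), np.array([4.0, -1.0])]
        attractor = np.array([6.0, 0.0])  # high-level "explore over there" intent
        for _ in range(10):
            u = plan_step(pose, attractor, landmarks)
            pose = rollout(pose, [u])[-1]  # execute first control, then replan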

    WiROS: WiFi sensing toolbox for robotics

    Many recent works have explored using WiFi-based sensing to improve SLAM, robot manipulation, or exploration. Moreover, widespread availability makes WiFi the most advantageous RF signal to leverage. But WiFi sensors lack an accurate, tractable, and versatile toolbox, which hinders their widespread adoption in robots' sensor stacks. We develop WiROS to address this immediate need, furnishing many WiFi-related measurements as easy-to-consume ROS topics. Specifically, WiROS is a plug-and-play WiFi sensing toolbox providing access to coarse-grained WiFi signal strength (RSSI), fine-grained WiFi channel state information (CSI), and other MAC-layer information (device address, packet IDs, or frequency-channel information). Additionally, WiROS open-sources state-of-the-art algorithms to calibrate and process WiFi measurements to furnish accurate bearing information for received WiFi signals. The open-sourced repository is: https://github.com/ucsdwcsng/WiRO
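
    A hedged sketch of how a robot stack might consume such measurements; the topic name and message type below are placeholders, not WiROS's actual interfaces (those are defined in the repository):

        # Minimal ROS 1 subscriber for a WiFi signal-strength topic (placeholders).
        import rospy
        from std_msgs.msg import Float32  # placeholder message type for RSSI

        def rssi_callback(msg):
            # e.g. feed coarse signal strength into a localisation filter
            rospy.loginfo("RSSI: %.1f dBm", msg.data)

        def main():
            rospy.init_node("wifi_sensing_consumer")
            rospy.Subscriber("/wifi/rssi", Float32, rssi_callback)  # placeholder topic
            rospy.spin()

        if __name__ == "__main__":
            main()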

    ViWiD: Leveraging WiFi for Robust and Resource-Efficient SLAM

    Recent interest in autonomous navigation and exploration robots for indoor applications has spurred research into indoor Simultaneous Localization and Mapping (SLAM) robot systems. While most of these SLAM systems use Visual and LiDAR sensors in tandem with an odometry sensor, these odometry sensors drift over time. To combat this drift, Visual SLAM systems deploy compute- and memory-intensive search algorithms to detect 'loop closures', which make the trajectory estimate globally consistent. To circumvent these resource-intensive (compute and memory) algorithms, we present ViWiD, which integrates WiFi and Visual sensors in a dual-layered system. This dual-layered approach separates the tasks of local and global trajectory estimation, making ViWiD resource-efficient while achieving performance on par with or better than state-of-the-art Visual SLAM. We demonstrate ViWiD's performance on four datasets covering over 1500 m of traversed path, and show 4.3x and 4x reductions in compute and memory consumption, respectively, compared to state-of-the-art Visual and LiDAR SLAM systems with on-par SLAM performance.
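
    A toy illustration of the dual-layer split, assumed rather than taken from the paper: a fast local layer integrates drifting odometry, while a sparse global layer (standing in for WiFi-derived constraints) periodically removes the accumulated drift:

        import numpy as np

        rng = np.random.default_rng(0)
        true_pos, est_pos = 0.0, 0.0
        for step in range(1, 101):
            v = 0.5                                    # true motion per step
            odom = v + rng.normal(0.0, 0.02) + 0.01    # noisy, biased odometry
            true_pos += v
            est_pos += odom                            # local layer: dead reckoning
            if step % 20 == 0:                         # global layer: sparse fix
                global_fix = true_pos + rng.normal(0.0, 0.1)
                est_pos = 0.5 * est_pos + 0.5 * global_fix  # blend removes drift
        print(f"final drift: {abs(est_pos - true_pos):.3f} m")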

    Learning to Look Around: Intelligently Exploring Unseen Environments for Unknown Tasks

    It is common to implicitly assume access to intelligently captured inputs (e.g., photos from a human photographer), yet autonomously capturing good observations is itself a major challenge. We address the problem of learning to look around: if a visual agent has the ability to voluntarily acquire new views to observe its environment, how can it learn efficient exploratory behaviors to acquire informative observations? We propose a reinforcement learning solution, where the agent is rewarded for actions that reduce its uncertainty about the unobserved portions of its environment. Based on this principle, we develop a recurrent neural network-based approach to perform active completion of panoramic natural scenes and 3D object shapes. Crucially, the learned policies are not tied to any recognition task nor to the particular semantic content seen during training. As a result, 1) the learned "look around" behavior is relevant even for new tasks in unseen environments, and 2) training data acquisition involves no manual labeling. Through tests in diverse settings, we demonstrate that our approach learns useful generic policies that transfer to new unseen tasks and environments. Completion episodes are shown at https://goo.gl/BgWX3W
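
    A toy sketch of the reward principle stated above, assumed rather than the paper's model: the agent is rewarded for glimpses that reveal previously unobserved parts of a panorama:

        import numpy as np

        n_views = 12                      # discretised viewing directions
        fov = 3                           # each glimpse reveals this many directions
        observed = np.zeros(n_views, dtype=bool)

        def reward(view):
            """Count of previously unobserved directions this glimpse would reveal."""
            idx = np.arange(view, view + fov) % n_views
            return int((~observed[idx]).sum())

        for t in range(4):
            best = max(range(n_views), key=reward)   # greedy stand-in for the learned policy
            r = reward(best)
            observed[np.arange(best, best + fov) % n_views] = True
            print(f"step {t}: look toward {best}, uncertainty-reduction reward {r}")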

    Towards Visual Localization, Mapping and Moving Objects Tracking by a Mobile Robot: a Geometric and Probabilistic Approach

    In this thesis we give new means for a machine to understand complex and dynamic visual scenes in real time. In particular, we solve the problem of simultaneously reconstructing a certain representation of the world's geometry, the observer's trajectory, and the moving objects' structures and trajectories, with the aid of vision exteroceptive sensors. We proceed by dividing the problem into three main steps. First, we give a solution to the Simultaneous Localization And Mapping (SLAM) problem for monocular vision that is able to perform adequately in the most ill-conditioned situations: those where the observer approaches the scene in a straight line. Second, we incorporate full 3D instantaneous observability by duplicating the vision hardware while keeping monocular algorithms. This permits us to avoid some of the inherent drawbacks of classic stereo systems, notably their limited range of 3D observability and the need for frequent mechanical calibration. Third, we add detection and tracking of moving objects by making use of this full 3D observability, which we judge to be practically indispensable. We choose a sparse, point-based representation of both the world and the moving objects in order to alleviate the computational load of the image-processing algorithms, which must extract the necessary geometrical information from the images. This alleviation is further supported by active feature detection and search mechanisms which focus attention on the image regions of highest interest. This focusing is achieved by extensive exploitation of the current knowledge available on the system (all the mapped information), which we finally highlight as the ultimate key to success.
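
    A hedged sketch of the active-search idea mentioned at the end of the abstract: use the current map estimate to predict where a feature should appear in the image and gate the search to that region. The pinhole intrinsics, feature position and covariance below are illustrative assumptions:

        import numpy as np

        f, cx, cy = 500.0, 320.0, 240.0           # assumed pinhole intrinsics (pixels)
        feature_cam = np.array([0.2, -0.1, 4.0])  # predicted feature in camera frame (m)
        P = np.diag([0.02, 0.02, 0.3])            # assumed 3D covariance of the feature

        # Project the mean into the image.
        x, y, z = feature_cam
        u = f * x / z + cx
        v = f * y / z + cy

        # Linearise the projection and propagate the covariance (first order).
        J = np.array([[f / z, 0.0, -f * x / z**2],
                      [0.0, f / z, -f * y / z**2]])
        S = J @ P @ J.T                            # 2x2 image-space covariance

        # Search a 3-sigma window instead of scanning the whole image.
        su, sv = 3.0 * np.sqrt(np.diag(S))
        print(f"search around ({u:.0f}, {v:.0f}) px within +/-{su:.0f} x +/-{sv:.0f} px")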

    Development of a low-cost SLAM radar for applications in robotics.

    The current state of SLAM radar is quite advanced, featuring various methods of data retrieval. One method uses video telemetry to locate “common spots” in the surrounding environment, which provide positional information during motion. Another uses high-speed, high-resolution laser measurement tools that provide a 360° horizontal field of view and a 90° vertical field of view. These systems create vast amounts of point-cloud data and are expensive, ranging from £1,000 upwards. For these reasons, such systems are often unsuitable for small competition robots. The developments discussed in this paper describe various alternative measurement technologies, such as ultrasonic and infra-red, and how these can be adapted, with the addition of a mechanical drive, to provide an almost real-time 360° horizontal field of view and an adjustable vertical field of view.
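
    A minimal sketch, assumed rather than taken from the paper, of turning readings from such a rotating range sensor into 2-D points for mapping:

        import math

        def scan_to_points(readings, robot_pose):
            """readings: list of (bearing_rad, range_m) from one 360-degree sweep."""
            rx, ry, rth = robot_pose
            points = []
            for bearing, dist in readings:
                if dist is None:         # no echo / out of range
                    continue
                a = rth + bearing
                points.append((rx + dist * math.cos(a), ry + dist * math.sin(a)))
            return points

        # Example: four ultrasonic readings taken 90 degrees apart.
        scan = [(0.0, 1.2), (math.pi / 2, 0.8), (math.pi, None), (3 * math.pi / 2, 2.5)]
        print(scan_to_points(scan, (0.0, 0.0, 0.0)))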

    Development and Flight of a Robust Optical-Inertial Navigation System Using Low-Cost Sensors

    This research develops and tests a precision navigation algorithm fusing optical and inertial measurements of unknown objects at unknown locations. It provides an alternative to the Global Positioning System (GPS) as a precision navigation source, enabling passive and low-cost navigation in situations where GPS is denied or unavailable. This paper describes two new contributions. First, a rigorous study of the fundamental nature of optical/inertial navigation is accomplished by examining the observability Gramian of the underlying measurement equations. This analysis yields a set of design principles guiding the development of optical/inertial navigation algorithms. The second contribution is the development and flight test of an optical-inertial navigation system using low-cost, passive sensors (including an inexpensive commercial-grade inertial sensor, which is unsuitable for navigation by itself). This prototype system was built and flight-tested at the U.S. Air Force Test Pilot School. The algorithm that was implemented leveraged the design principles described above and used images from a single camera. It was shown (and explained by the observability analysis) that the system gained significant performance by aiding it with a barometric altimeter and magnetic compass, and by using a digital terrain database (DTED). The (still) low-cost and passive system demonstrated performance comparable to high-quality navigation-grade inertial navigation systems, which cost an order of magnitude more than this optical-inertial prototype. The resultant performance of the system tested provides a robust and practical navigation solution for Air Force aircraft.
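
    A hedged sketch of the kind of observability check described above: build the discrete-time observability Gramian of a linearised system and inspect its rank and conditioning. The A and C matrices here are a toy example, not the paper's navigation model:

        import numpy as np

        A = np.array([[1.0, 1.0],      # toy state transition (position, velocity)
                      [0.0, 1.0]])
        C = np.array([[1.0, 0.0]])     # toy measurement: position only

        n = A.shape[0]
        W = np.zeros((n, n))
        Phi = np.eye(n)
        for _ in range(20):            # accumulate over a finite window
            W += Phi.T @ C.T @ C @ Phi
            Phi = A @ Phi

        print("rank", np.linalg.matrix_rank(W), "of", n,
              "condition number", f"{np.linalg.cond(W):.1e}")
        # A full-rank, well-conditioned Gramian indicates the measurements make the
        # state observable; near-singular directions suggest extra aiding (e.g. an
        # altimeter or compass), consistent with what the study above reports.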