
    Near range path navigation using LGMD visual neural networks

    In this paper, we propose a method for near-range path navigation for a mobile robot using a pair of biologically inspired visual neural networks, lobula giant movement detectors (LGMDs). In the proposed binocular-style visual system, each LGMD processes images covering part of the wide field of view and extracts relevant visual cues as its output. The outputs of the two LGMDs are compared and translated into executable motor commands that control the wheels of the robot in real time. A stronger signal from the LGMD on one side pushes the robot away from that side step by step; therefore, the robot can navigate a visual environment naturally with the proposed vision system. Our experiments showed that this bio-inspired system worked well in different scenarios.
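    The comparison of the two LGMD outputs can be sketched as a simple differential-drive steering rule. This is a minimal illustration, not the authors' implementation; the names (lgmd_left, lgmd_right) and the linear mapping are assumptions.

```python
def steering_command(lgmd_left, lgmd_right, base_speed=0.2, gain=0.5):
    """Map a pair of LGMD excitations (0..1) to (left, right) wheel speeds.

    A stronger response on one side speeds up the wheel on that side and
    slows the opposite wheel, so the robot curves away from the looming
    stimulus step by step.
    """
    diff = lgmd_right - lgmd_left            # > 0: stronger response on the right
    left_wheel = max(0.0, base_speed - gain * diff)
    right_wheel = max(0.0, base_speed + gain * diff)
    return left_wheel, right_wheel
```

With equal excitations the robot drives straight; a strong right-side response makes the right wheel faster, turning the robot left and away from the threat.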

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that, in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply `replay' the learned velocities, using its camera only to correct its heading relative to the intended path. To support our claim, we establish a position-error model for a robot that traverses a taught path by only correcting its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav. Comment: the paper will be presented at IROS 2018 in Madrid.
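    The repeat phase described above can be sketched as: replay the taught forward and angular velocities, and add only a heading correction proportional to the horizontal displacement of matched image features. The sign convention and the gain are illustrative assumptions, not the paper's exact controller.

```python
def heading_correction(pixel_shifts, gain=0.01):
    """Turn-rate correction from the mean horizontal displacement (pixels)
    of features matched between the taught and current camera images.

    Assumed convention: a positive (rightward) image shift means the robot
    has rotated left of the taught heading, so the correction is negative
    (turn right).
    """
    if not pixel_shifts:
        return 0.0
    return -gain * sum(pixel_shifts) / len(pixel_shifts)

def repeat_step(taught_v, taught_w, pixel_shifts):
    """Replay the learned velocities; the camera only corrects heading."""
    return taught_v, taught_w + heading_correction(pixel_shifts)
```

Note that no metric position estimate appears anywhere: with no matched features the robot simply replays the taught command, which is exactly the landmark-deficiency behaviour the abstract claims the convergence theorem tolerates.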

    Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality

    We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, who want to retain control of their mobile robots, such as vacuum-cleaning or companion robots. Therefore, we introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as a human-robot interface, in combination with an augmented reality application, to flexibly define virtual borders. We evaluated our system with 15 non-expert users concerning accuracy, teaching time and correctness, and compared the results with other baseline methods based on visual markers and a laser pointer. The experimental results show that our method features an equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders. Comment: accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
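    One simple way a robot can respect a user-taught border polygon is a point-in-polygon test on candidate poses or costmap cells. The sketch below is a generic ray-casting test under that assumption; how the paper actually integrates borders into the navigation stack is not specified here.

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) strictly inside the closed polygon
    `poly`, given as a list of (px, py) vertices in order?

    Casts a horizontal ray to the left and counts edge crossings; an odd
    count means the point is inside the user-defined border.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

A navigation layer could mark every grid cell for which this returns True (or False, depending on whether the border encloses an allowed or a forbidden region) as lethal in the costmap, so the planner never generates paths across the border.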

    Topomap: Topological Mapping and Navigation Based on Visual SLAM Maps

    Visual robot navigation within large-scale, semi-structured environments faces various challenges, such as computationally intensive path planning algorithms or insufficient knowledge about traversable spaces. Moreover, many state-of-the-art navigation approaches only operate locally instead of gaining a more conceptual understanding of the planning objective. This limits the complexity of tasks a robot can accomplish and makes it harder to deal with the uncertainties present in real-time robotics applications. In this work, we present Topomap, a framework which simplifies the navigation task by providing the robot with a map tailored for path planning. This novel approach transforms a sparse feature-based map from a visual Simultaneous Localization And Mapping (SLAM) system into a three-dimensional topological map. This is done in two steps. First, we extract occupancy information directly from the noisy sparse point cloud. Then, we create a set of convex free-space clusters, which are the vertices of the topological map. We show that this representation improves the efficiency of global planning, and we provide a complete derivation of our algorithm. Planning experiments on real-world datasets demonstrate that we achieve similar performance to RRT* with significantly lower computation times and storage requirements. Finally, we test our algorithm on a mobile robotic platform to prove its advantages. Comment: 8 pages.
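    Once the convex free-space clusters form the vertices of a topological map, global planning reduces to a search over a small graph, which is why it can beat sampling-based planners such as RRT* on computation time. A generic Dijkstra search over an assumed adjacency-list representation illustrates the idea; the paper's own planner and edge costs may differ.

```python
import heapq

def topo_plan(graph, start, goal):
    """Dijkstra over a topological map.

    Vertices are convex free-space clusters; `graph` maps each vertex to a
    list of (neighbor, traversal_cost) pairs. Returns the cheapest vertex
    sequence from start to goal, or None if the goal is unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    visited = set()
    while pq:
        d, v = heapq.heappop(pq)
        if v in visited:
            continue
        visited.add(v)
        if v == goal:                      # reconstruct the cluster sequence
            path = [v]
            while v in prev:
                v = prev[v]
                path.append(v)
            return path[::-1]
        for nbr, cost in graph.get(v, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = v
                heapq.heappush(pq, (nd, nbr))
    return None
```

Because the graph has only a handful of cluster vertices rather than thousands of samples, each global query is cheap, and the resulting cluster sequence can be refined locally inside each convex region.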

    An adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work, we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omni-directional vision. In this work, we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry, we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
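    With an omni-directional camera, each matched feature yields a bearing in the reference view and in the current view, and a robust relative-heading estimate is the circular mean of the per-feature bearing differences. This is a simplified stand-in for the multi-view-geometry estimate described above, assuming pre-matched bearings in radians.

```python
import math

def estimate_heading(bearings_ref, bearings_cur):
    """Estimate relative heading as the circular mean of per-feature
    bearing differences (radians) between the reference and current
    spherical views.

    Summing unit vectors before taking atan2 handles angle wrap-around
    correctly, unlike a naive arithmetic mean of the differences.
    """
    s = c = 0.0
    for ref, cur in zip(bearings_ref, bearings_cur):
        d = cur - ref
        s += math.sin(d)
        c += math.cos(d)
    return math.atan2(s, c)
```

The same matched bearings could then be blended into the stored spherical view, which is the adaptive part of the representation: features that keep disagreeing with observations fade out as the environment changes.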

    Indoor Positioning and Navigation

    In recent years, rapid development in robotics, mobile, and communication technologies has encouraged many studies in the field of localization and navigation in indoor environments. An accurate localization system that can operate in an indoor environment has considerable practical value, because it can be built into autonomous mobile systems or into a personal navigation system on a smartphone for guiding people through airports, shopping malls, museums, and other public institutions. Such a system would be particularly useful for blind people. Modern smartphones are equipped with numerous sensors (such as inertial sensors, cameras, and barometers) and communication modules (such as WiFi, Bluetooth, NFC, LTE/5G, and UWB capabilities), which enable the implementation of various localization algorithms, namely visual localization, inertial navigation systems, and radio localization. For the mapping of indoor environments and the localization of autonomous mobile systems, LIDAR sensors are also frequently used in addition to smartphone sensors. Visual localization and inertial navigation systems are sensitive to external disturbances; therefore, sensor fusion approaches can be used to implement robust localization algorithms. These have to be optimized in order to be computationally efficient, which is essential for real-time processing and low energy consumption on a smartphone or robot.
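    The sensor-fusion idea mentioned above can be illustrated with the simplest fusion scheme of all, a complementary filter: integrate a fast but drifting rate sensor (e.g. a gyroscope) and continuously pull the estimate toward a noisy but drift-free absolute reference (e.g. an accelerometer-derived angle or a radio fix). The function and blend factor below are a textbook sketch, not from this text.

```python
def complementary_filter(angle, gyro_rate, ref_angle, dt, alpha=0.98):
    """One update step of a complementary filter.

    angle      -- previous fused estimate (rad)
    gyro_rate  -- angular rate from the drifting sensor (rad/s)
    ref_angle  -- noisy drift-free absolute measurement (rad)
    dt         -- time step (s)
    alpha      -- trust in the integrated rate (close to 1.0)

    The high-pass path (integration) supplies fast dynamics; the low-pass
    path (ref_angle) removes the slow drift.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * ref_angle
```

Run at each sensor tick, this costs a handful of multiplications, which is why such filters remain popular on power-constrained smartphones compared to a full Kalman filter.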

    Tag Recognition for Quadcopter Drone Movement

    Unmanned Aerial Vehicle (UAV) drones such as the Parrot AR.Drone 2.0 are flying mobile robots that have been widely researched for search-and-rescue applications. In this project, the Robot Operating System (ROS), a free, open-source platform for developing robot control software, is used to develop a tag recognition program for drone movement. ROS is popular for mobile robotics application development because, once installation and compilation are done correctly, its nodes and packages make sensor data transmission for control-system analysis very convenient. The drone is expected to communicate with a laptop via ROS nodes, transmitting sensor data that is further analyzed and processed for the closed-loop control system. The developed program, consisting of several packages, is intended to demonstrate the drone's recognition of different tags, each of which is transformed into a movement command corresponding to the recognized tag; in other words, a vision-based navigation program is developed.
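    The tag-to-movement mapping described above amounts to a lookup from a recognized tag identifier to a velocity command, like the linear/angular fields of a ROS geometry_msgs/Twist message. The tag names and velocity values below are hypothetical, not taken from the project.

```python
# Hypothetical mapping from a recognized tag ID to a (linear_x, angular_z)
# velocity pair, mimicking the fields a ROS node would publish as a Twist.
TAG_COMMANDS = {
    "forward":    (0.3,  0.0),
    "turn_left":  (0.0,  0.5),
    "turn_right": (0.0, -0.5),
    "land":       (0.0,  0.0),
}

def tag_to_command(tag_id):
    """Return the (linear_x, angular_z) command for a recognized tag;
    hover in place (zero velocities) for unknown tags."""
    return TAG_COMMANDS.get(tag_id, (0.0, 0.0))
```

In a ROS setup, a recognition node would publish the detected tag ID, and a control node subscribing to it would publish the corresponding Twist on the drone's command topic, closing the loop between perception and motion.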

    Behavior-Based and Fuzzy Logic Navigation in an Autonomous Mobile Robot Simulation

    A mobile robot is a robotic mechanism that is able to move autonomously. Autonomous movement requires a navigation system, a method for determining the robot's motion. In this study, robot navigation is developed using a behavior-based method with fuzzy logic. The robot's behavior is divided into several modules, such as moving forward, avoiding obstacles, and following walls, corridors, and U-shaped configurations. A mobile robot simulation is designed in a visual programming environment. The robot is equipped with seven distance sensors, divided into several groups to test the designed behaviors, so that the robot's behavior produces speed and steering control. Experiments show that the simulated mobile robot runs smoothly under many conditions. This demonstrates that the behavior-based design and fuzzy logic techniques work well on the robot. Keywords: behavior, fuzzy logic, mobile robot.
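    The obstacle-avoidance behavior can be sketched as a two-rule fuzzy controller over the distance sensors: IF the left side is near THEN turn right; IF the right side is near THEN turn left; defuzzify by the weighted difference of rule strengths. The membership shape, sign convention (positive = turn left), and gains are illustrative assumptions, not the paper's rule base.

```python
def near(distance, d_max=1.0):
    """Membership degree of 'obstacle is near' for a distance reading:
    1.0 at contact, falling linearly to 0.0 at d_max metres."""
    return max(0.0, min(1.0, 1.0 - distance / d_max))

def fuzzy_steering(left_dist, right_dist, turn_rate=1.0):
    """Two-rule fuzzy obstacle avoidance.

    Rule 1: IF left is near  THEN turn right (negative output).
    Rule 2: IF right is near THEN turn left  (positive output).
    Defuzzification: weighted difference of the two rule strengths.
    """
    return turn_rate * (near(right_dist) - near(left_dist))
```

Because the memberships overlap, the steering output changes gradually with the sensor readings instead of switching abruptly between behaviors, which is what produces the smooth motion reported in the experiments.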