29,289 research outputs found

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    Full text link
    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply `replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support this claim, we establish a position error model of a robot that traverses a taught path by correcting only its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav.
    Comment: The paper will be presented at IROS 2018 in Madrid
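    The control scheme described above, replaying the taught forward velocity while steering only from camera information, can be sketched in a few lines. The following Python fragment is an illustrative sketch, not the authors' released code (which lives in the linked stroll_bearnav repository); the histogram voting over horizontal feature displacements and the gain value are assumptions made for the example.

        import numpy as np

        def heading_correction(map_kp, live_kp, gain=0.01):
            # map_kp, live_kp: Nx2 pixel coordinates of features matched between
            # the image stored during teaching and the current camera image.
            shifts = live_kp[:, 0] - map_kp[:, 0]          # horizontal displacements
            hist, edges = np.histogram(shifts, bins=41, range=(-100, 100))
            i = int(np.argmax(hist))                       # histogram voting
            dominant = 0.5 * (edges[i] + edges[i + 1])     # dominant shift (px)
            return -gain * dominant                        # steering correction (rad/s)

        def repeat_step(taught_v, taught_omega, map_kp, live_kp):
            # Replay the taught velocities; only the heading is corrected.
            return taught_v, taught_omega + heading_correction(map_kp, live_kp)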

    MOMA: Visual Mobile Marker Odometry

    Full text link
    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed marker-based camera positioning, and we analyze the characteristics of the errors inherent to the method compared to classical fixed marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows continuous movement of the UAV without stops, as well as a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
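    The pose chaining behind mobile marker odometry, localising the camera from a fixed marker and then expressing the mobile marker in that frame, can be illustrated as follows. This is a minimal sketch assuming a fiducial detector already provides 4x4 homogeneous marker poses in the camera frame; it shows the transform composition only, not the paper's optimization scheme.

        import numpy as np

        def mobile_marker_world_pose(T_cam_fixed, T_cam_mobile):
            # Both markers are seen in the same image; take the fixed marker's
            # frame as the world frame:
            #   T_world_mobile = inv(T_cam_fixed) @ T_cam_mobile
            return np.linalg.inv(T_cam_fixed) @ T_cam_mobile

        def accumulate_odometry(T_world_prev, T_prev_curr):
            # Chain successive mobile-marker pose increments into odometry.
            return T_world_prev @ T_prev_curr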

    The Problem of Human-following for a Mobile Robot

    Get PDF
    The problem of human-following for mobile robotic systems has been studied extensively, and a number of approaches exist for different types of robots and sensor systems. In particular, previous solutions have relied on instrumenting the environment or on sensor-based methods in which the person wears a special suit. This paper proposes an algorithm for human-following in an unequipped indoor environment for a low-cost mobile robot with a single visual sensor. We present the results of computational experiments, as well as robotic experiments on both daytime and night-time navigation.
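    A single-camera follower of this kind can be reduced to a proportional controller on a person detection: turn to keep the detected person centred, and drive to keep their apparent size (a proxy for distance) constant. The sketch below is a generic stand-in; the paper's actual algorithm and parameter values are not given in the abstract, so the gains and the target box height are hypothetical.

        def follow_control(box_cx, box_h, img_w, target_h=180.0, k_w=0.004, k_v=0.01):
            # box_cx: horizontal centre of the person's bounding box (px)
            # box_h:  bounding-box height (px), used as a distance proxy
            omega = -k_w * (box_cx - img_w / 2.0)  # turn toward the person (rad/s)
            v = k_v * (target_h - box_h)           # approach if far, back off if close (m/s)
            return v, omega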

    Virtual Borders: Accurate Definition of a Mobile Robot's Workspace Using Augmented Reality

    Full text link
    We address the problem of interactively controlling the workspace of a mobile robot to ensure human-aware navigation. This is especially relevant for non-expert users living in human-robot shared spaces, e.g. home environments, since they want to retain control over their mobile robots, such as vacuum-cleaning or companion robots. We therefore introduce virtual borders that are respected by a robot while performing its tasks. For this purpose, we employ an RGB-D Google Tango tablet as human-robot interface, in combination with an augmented reality application, to flexibly define virtual borders. We evaluated our system with 15 non-expert users with respect to accuracy, teaching time and correctness, and compared the results with baseline methods based on visual markers and a laser pointer. The experimental results show that our method achieves equally high accuracy while significantly reducing the teaching time compared to the baseline methods. This holds for different border lengths, shapes and variations in the teaching process. Finally, we demonstrated the correctness of the approach, i.e. the mobile robot changes its navigational behavior according to the user-defined virtual borders.
    Comment: Accepted at the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS); supplementary video: https://youtu.be/oQO8sQ0JBR
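    Once a virtual border polygon has been defined through the tablet interface, enforcing it amounts to writing it into the robot's navigation map. The sketch below marks grid cells inside a keep-out polygon as lethal in an occupancy costmap; the function and its parameters are illustrative assumptions, not the paper's implementation.

        import numpy as np
        from matplotlib.path import Path

        def apply_virtual_border(costmap, resolution, origin, polygon, lethal=255):
            # costmap: 2D uint8 grid; origin: world (x, y) of cell (0, 0);
            # polygon: list of world (x, y) border vertices from the user.
            h, w = costmap.shape
            xs = origin[0] + (np.arange(w) + 0.5) * resolution  # cell-centre x
            ys = origin[1] + (np.arange(h) + 0.5) * resolution  # cell-centre y
            gx, gy = np.meshgrid(xs, ys)
            pts = np.column_stack([gx.ravel(), gy.ravel()])
            inside = Path(polygon).contains_points(pts).reshape(h, w)
            costmap[inside] = lethal  # planner now avoids the bordered region
            return costmap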

    Audio Visual Language Maps for Robot Navigation

    Full text link
    While interacting in the world is a multi-sensory experience, many robots continue to rely predominantly on visual perception to map and navigate in their environments. In this work, we propose Audio-Visual-Language Maps (AVLMaps), a unified 3D spatial map representation for storing cross-modal information from audio, visual, and language cues. AVLMaps integrate the open-vocabulary capabilities of multimodal foundation models pre-trained on Internet-scale data by fusing their features into a centralized 3D voxel grid. In the context of navigation, we show that AVLMaps enable robot systems to index goals in the map based on multimodal queries, e.g., textual descriptions, images, or audio snippets of landmarks. In particular, the addition of audio information enables robots to more reliably disambiguate goal locations. Extensive experiments in simulation show that AVLMaps enable zero-shot multimodal goal navigation from multimodal prompts and provide 50% better recall in ambiguous scenarios. These capabilities extend to mobile robots in the real world, navigating to landmarks referring to visual, audio, and spatial concepts. Videos and code are available at: https://avlmaps.github.io
    Comment: Project page: https://avlmaps.github.io
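    The core data structure, a voxel grid holding one feature vector per modality that can be indexed by a query in any modality, can be caricatured as follows. This is a toy sketch of the idea only: in AVLMaps the features come from pretrained multimodal foundation models, whereas here the class simply stores and compares unit vectors by cosine similarity.

        import numpy as np

        class MultimodalVoxelMap:
            def __init__(self):
                # (i, j, k) voxel index -> {modality name: unit feature vector}
                self.voxels = {}

            def add(self, voxel, modality, feat):
                f = np.asarray(feat, dtype=float)
                self.voxels.setdefault(voxel, {})[modality] = f / np.linalg.norm(f)

            def query(self, modality, feat):
                # Return the voxel whose stored feature best matches the query
                # (e.g. a text, image, or audio embedding) by cosine similarity.
                q = np.asarray(feat, dtype=float)
                q /= np.linalg.norm(q)
                scored = [(float(m[modality] @ q), v)
                          for v, m in self.voxels.items() if modality in m]
                return max(scored)[1] if scored else None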

    Smooth and Collision-Free Navigation for Multiple Mobile Robots and Video Game Characters

    Get PDF
    The navigation of multiple mobile robots or virtual agents through environments containing static and dynamic obstacles to specified goal locations is an important problem in mobile robotics, many video games, and simulated environments. Moreover, technological advances in mobile robot hardware and video game consoles have allowed increasing numbers of mobile robots or virtual agents to navigate shared environments simultaneously. However, coordinating the navigation of large groups of mobile robots or virtual agents remains a difficult task. Kinematic and dynamic constraints and the effects of sensor and actuator uncertainty compound the challenge of navigating multiple physical mobile robots, and video game players demand plausible motion and ever-increasing visual fidelity of virtual agents without sacrificing frame rate. We present new methods for navigating multiple mobile robots or virtual agents through shared environments, each using formulations based on velocity obstacles. These include algorithms that allow navigation through two-dimensional or three-dimensional workspaces containing both static and dynamic obstacles without collisions or oscillations. Each mobile robot or virtual agent senses its surroundings and acts independently, without central coordination or inter-communication with its neighbors, implicitly assuming that the neighbors use the same navigation strategy based on the notion of reciprocity. We use the position, velocity, and physical extent of neighboring mobile robots or virtual agents to compute their future trajectories so as to avoid collisions locally, and we show that, in principle, it is possible to guarantee theoretically that the motion of each mobile robot or virtual agent is smooth. Moreover, we demonstrate direct, collision-free, and oscillation-free navigation in experiments using physical iRobot Create mobile robots, simulations of multiple differential-drive robots or simple airplanes, and video game levels containing hundreds of virtual agents.
    Doctor of Philosophy
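    The velocity-obstacle formulation underlying these methods can be sketched as follows: a candidate velocity is forbidden if, within a time horizon, it would bring the robot within the combined radius of a neighbor, and the robot picks the admissible velocity closest to its preferred one. This is a basic sampling sketch of the plain velocity obstacle under assumed parameters, not the dissertation's reciprocal variants.

        import numpy as np

        def in_velocity_obstacle(v_rel, p_rel, r_sum, tau=5.0, steps=50):
            # v_rel: candidate velocity relative to the neighbor; p_rel: neighbor
            # position minus robot position; r_sum: sum of the two disc radii.
            for t in np.linspace(1e-3, tau, steps):
                if np.linalg.norm(p_rel - v_rel * t) < r_sum:
                    return True
            return False

        def choose_velocity(v_pref, neighbors, v_max=1.0, samples=200, seed=0):
            # neighbors: list of (p_rel, v_neighbor, r_sum) tuples.
            rng = np.random.default_rng(seed)
            best, best_cost = np.zeros(2), float("inf")
            for _ in range(samples):
                cand = rng.uniform(-v_max, v_max, 2)
                if any(in_velocity_obstacle(cand - v_n, p_n, r_n)
                       for p_n, v_n, r_n in neighbors):
                    continue
                cost = np.linalg.norm(cand - v_pref)
                if cost < best_cost:
                    best, best_cost = cand, cost
            return best  # admissible velocity closest to the preferred one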

    Tapered whisker reservoir computing for real-time terrain identification-based navigation

    Get PDF
    This paper proposes a new method for real-time terrain identification-based navigation for mobile robots. Mobile robots performing tasks in unstructured environments need to adapt their trajectories in real time to achieve safe and efficient navigation over complex terrains. However, current methods largely depend on cameras and inertial measurement units (IMUs), which demand high computational resources for real-time applications. In this paper, a real-time terrain identification-based navigation method is proposed using an on-board tapered whisker-based reservoir computing system. The nonlinear dynamic response of the tapered whisker was investigated in analytical and finite element analysis frameworks to demonstrate its reservoir computing capabilities. Numerical simulations and experiments were cross-checked to verify that whisker sensors can separate signals of different frequencies directly in the time domain, that different locations along the whisker axis and different motion velocities provide distinct dynamic response information, and to demonstrate the computational advantage of the proposed system. Terrain surface-following experiments demonstrated that our system can accurately identify changes in the terrain in real time and adjust its trajectory to stay on a specific terrain.
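    In reservoir-computing terms, the tapered whisker itself performs the nonlinear temporal transformation, so only a lightweight linear readout has to be trained on features of its vibration signal. The sketch below, with an assumed band count and a ridge-regression readout, illustrates that division of labour; it is not the paper's actual pipeline.

        import numpy as np

        def whisker_features(signal, n_bands=8):
            # Band-averaged spectrum of the whisker's vibration: the physical
            # whisker has already separated the frequency content by terrain.
            spec = np.abs(np.fft.rfft(signal))
            return np.array([band.mean() for band in np.array_split(spec, n_bands)])

        def train_readout(X, Y, reg=1e-3):
            # Ridge regression from feature rows X to one-hot terrain labels Y.
            X1 = np.column_stack([X, np.ones(len(X))])
            A = X1.T @ X1 + reg * np.eye(X1.shape[1])
            return np.linalg.solve(A, X1.T @ Y)

        def classify_terrain(W, feat):
            return int(np.argmax(np.append(feat, 1.0) @ W))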