
    Sensor Data Fusion using Unscented Kalman Filter for VOR-based Vision Tracking System for Mobile Robots

    This paper presents sensor data fusion using an Unscented Kalman Filter (UKF) to implement a high-performance vestibulo-ocular reflex (VOR) based vision tracking system for mobile robots. Information from several sensors must be integrated by an efficient sensor fusion algorithm to achieve continuous and robust vision tracking. We use data from a low-cost accelerometer, a gyroscope, and wheel encoders to compute the robot's motion. The UKF serves as the sensor fusion algorithm; it is an advanced filtering technique that outperforms the widely used Extended Kalman Filter (EKF) in many applications. The system compensates for slip errors by switching between two UKF models built for the slip and no-slip cases. Since accelerometer error accumulates over time because of the double integration, the system uses accelerometer data only in the slip-case UKF model. The fused estimate of the robot's position and orientation is used to rotate the camera mounted on top of the robot towards a fixed target, a concept derived from the vestibulo-ocular reflex of the human eye. Experimental results show that the system tracks the fixed target in various robot motion scenarios, including one in which an intentional slip is generated during navigation.
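
    The abstract above only describes the fusion scheme in prose. As a rough illustration, the sketch below fuses encoder-based odometry with a gyro-derived heading in a UKF (via the filterpy library) and computes a VOR-like camera pan towards a fixed target; the state layout, noise values, and function names are assumptions for the example, not the paper's implementation, and the slip/no-slip model switching is omitted.

```python
# Minimal sketch, assuming a planar unicycle robot: UKF fusion of wheel-encoder
# controls and an integrated-gyro heading measurement (filterpy), plus the
# camera pan that keeps a fixed target centred. Illustrative values throughout.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.05  # control period [s], assumed

def fx(x, dt, u=np.zeros(2)):
    """Unicycle motion model; x = [px, py, theta], u = [v, omega] from encoders."""
    px, py, th = x
    v, w = u
    return np.array([px + v * dt * np.cos(th),
                     py + v * dt * np.sin(th),
                     th + w * dt])

def hx(x):
    """Measurement model: heading obtained by integrating the gyro rate."""
    return np.array([x[2]])

points = MerweScaledSigmaPoints(n=3, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=3, dim_z=1, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(3)                     # start at the origin
ukf.P = np.diag([0.01, 0.01, 0.01])     # initial uncertainty
ukf.Q = np.diag([1e-4, 1e-4, 1e-4])     # process noise (slip, model error)
ukf.R = np.array([[1e-3]])              # gyro heading noise

def vor_pan_angle(x, target_xy):
    """Camera pan that keeps a fixed target centred (the VOR-like behaviour)."""
    dx, dy = target_xy[0] - x[0], target_xy[1] - x[1]
    return np.arctan2(dy, dx) - x[2]

# one fusion step: predict with encoder control, correct with gyro heading
u = np.array([0.2, 0.0])                # v = 0.2 m/s, omega = 0 rad/s
ukf.predict(u=u)
ukf.update(np.array([0.001]))           # integrated gyro heading [rad]
pan = vor_pan_angle(ukf.x, target_xy=(2.0, 0.0))
```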

    Vision-based trajectory tracking algorithm with obstacle avoidance for a wheeled mobile robot

    Wheeled mobile robots are becoming increasingly important in industry as a means of transportation, inspection, and operation because of their efficiency and flexibility. The design of efficient algorithms for autonomous or quasi-autonomous mobile robot navigation in dynamic environments is a challenging problem that has been the focus of many researchers during the past few decades. Computer vision is perhaps not the most successful sensing modality used in mobile robotics to date (sonar and infra-red sensors, for example, being preferred), but it is the sensor best able to tell a robot "what" and "where" the objects it is likely to encounter are. In this thesis, we use a vision system to navigate the mobile robot along a reference trajectory and a sensor-based obstacle avoidance method to pass objects located on the trajectory. A tracking control algorithm is also described. Finally, experimental results are presented to verify the tracking and control algorithms.
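
    As a concrete illustration of the kind of tracking control law such work typically builds on, the sketch below implements a standard Kanayama-style kinematic controller for a differential-drive robot following a reference trajectory; the gains and the interface are assumed for the example, and the obstacle avoidance layer is omitted.

```python
# Illustrative sketch (not the thesis code): a standard kinematic
# trajectory-tracking law for a differential-drive robot.
import numpy as np

def tracking_control(pose, ref_pose, v_ref, w_ref, kx=1.0, ky=4.0, kth=2.0):
    """Return (v, w) commands that drive `pose` towards `ref_pose`.

    pose, ref_pose: (x, y, theta); v_ref, w_ref: reference velocities.
    Gains kx, ky, kth are assumed values for this example.
    """
    x, y, th = pose
    xr, yr, thr = ref_pose
    # tracking error expressed in the robot frame
    ex = np.cos(th) * (xr - x) + np.sin(th) * (yr - y)
    ey = -np.sin(th) * (xr - x) + np.cos(th) * (yr - y)
    eth = np.arctan2(np.sin(thr - th), np.cos(thr - th))
    # Kanayama-style control law
    v = v_ref * np.cos(eth) + kx * ex
    w = w_ref + v_ref * (ky * ey + kth * np.sin(eth))
    return v, w
```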

    MOMA: Visual Mobile Marker Odometry

    In this paper, we present a cooperative odometry scheme based on the detection of mobile markers, in line with the idea of cooperative positioning for multiple robots [1]. To this end, we introduce a simple optimization scheme that realizes visual mobile marker odometry via accurate fixed-marker-based camera positioning, and we analyse the characteristics of the errors inherent to the method compared to classical fixed-marker-based navigation and visual odometry. In addition, we provide a specific UAV-UGV configuration that allows continuous movement of the UAV without stopping, as well as a minimal caterpillar-like configuration that works with a single UGV. Finally, we present a real-world implementation and evaluation of the proposed UAV-UGV configuration.
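
    A minimal sketch of the leapfrogging idea behind mobile-marker odometry is given below, assuming planar SE(2) poses for brevity: the camera is first localised from a stationary marker, and the moving marker's world pose is then obtained by composing that camera pose with the current detection. A real system would use full SE(3) poses from a fiducial detector; all values here are placeholders.

```python
# Sketch of mobile-marker odometry in the plane: compose the camera pose
# (fixed from a stationary marker) with the detection of the moving marker.
import numpy as np

def se2(x, y, theta):
    """Homogeneous 3x3 transform for a planar pose."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

# world <- camera, solved once from a detection of a fixed marker at a known pose
T_world_cam = se2(0.0, 0.0, 0.0)
# camera <- mobile marker, from the current detection of the marker on the UGV
T_cam_marker = se2(1.2, 0.3, 0.1)
# world <- mobile marker: the odometry output for this step
T_world_marker = T_world_cam @ T_cam_marker
```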

    A mosaic of eyes

    Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors such as cameras or laser scanners in order to drive to its goal. Most research to date has focused on developing a large, smart "brain" to give robots autonomous capability. There are three fundamental questions an autonomous mobile robot must answer: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires a massive spatial memory and considerable computational resources to accomplish perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then taking decisions accordingly. They may encounter the following difficulties.

    Mixed marker-based/marker-less visual odometry system for mobile robots

    When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. However, the lack of absolute references often introduces severe drift into the computed estimates, making autonomous operation very hard to accomplish. This paper proposes a solution that alleviates the impact of these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm capable of estimating relative frame-to-frame movements. Errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. The markers only need to be framed from time to time, which keeps the drift bounded while additionally providing the robot with the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated on an off-the-shelf quadrotor via extensive experimental tests.
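
    The sketch below illustrates the mixed relative/absolute scheme described above under simplifying assumptions: marker-less visual odometry chains frame-to-frame motions (and therefore drifts), while an observation of a marker with a known world pose overwrites the accumulated estimate. The class and method names are hypothetical, not the paper's implementation.

```python
# Minimal sketch of mixed marker-based/marker-less odometry in the plane:
# relative VO steps accumulate drift; a framed marker resets the estimate.
import numpy as np

class MixedOdometry:
    def __init__(self):
        self.pose = np.zeros(3)          # [x, y, yaw] in the world frame

    def vo_step(self, dx, dy, dyaw):
        """Chain a frame-to-frame motion estimated from ground features."""
        x, y, yaw = self.pose
        self.pose = np.array([x + np.cos(yaw) * dx - np.sin(yaw) * dy,
                              y + np.sin(yaw) * dx + np.cos(yaw) * dy,
                              yaw + dyaw])

    def marker_fix(self, marker_world_pose, robot_in_marker):
        """Overwrite the drifting estimate when a known marker is framed."""
        mx, my, myaw = marker_world_pose
        rx, ry, ryaw = robot_in_marker
        self.pose = np.array([mx + np.cos(myaw) * rx - np.sin(myaw) * ry,
                              my + np.sin(myaw) * rx + np.cos(myaw) * ry,
                              myaw + ryaw])

odo = MixedOdometry()
odo.vo_step(0.05, 0.0, 0.01)                       # relative, drift-prone
odo.marker_fix((2.0, 1.0, 0.0), (0.1, 0.0, 0.0))   # absolute correction
```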

    Data association and occlusion handling for vision-based people tracking by mobile robots

    This paper presents an approach for tracking multiple persons on a mobile robot with a combination of colour and thermal vision sensors, using several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and its different components on several real-world data sets.
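
    As a hypothetical illustration of the pairwise occlusion reasoning, the sketch below trains a classifier to predict which of two overlapping person detections is in front; the simple geometric features used here are stand-ins for the paper's colour and thermal cues, and all names and training data are invented for the example.

```python
# Sketch of pairwise occlusion classification: given two person bounding
# boxes, predict whether person A occludes person B. Illustrative features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(box_a, box_b):
    """Boxes are (x, y, w, h); returns a small pairwise feature vector."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
    iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
    overlap = (ix * iy) / min(aw * ah, bw * bh)
    return np.array([overlap, aw * ah / (bw * bh), (ay + ah) - (by + bh)])

# X: pairwise features for labelled training pairs; y: 1 if A is in front of B
X = np.array([pair_features((0, 0, 40, 100), (20, 5, 40, 90)),
              pair_features((10, 0, 40, 90), (0, 10, 40, 100))])
y = np.array([1, 0])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
in_front = clf.predict([pair_features((5, 0, 40, 100), (25, 8, 40, 88))])[0]
```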

    Navigation without localisation: reliable teach and repeat based on the convergence theorem

    We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that, in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply 'replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support this claim, we establish a position error model for a robot that traverses a taught path by correcting only its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations, and naturally occurring environment changes. Furthermore, we provide the navigation system and the datasets gathered at http://www.github.com/gestom/stroll_bearnav. Comment: the paper will be presented at IROS 2018 in Madrid.
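
    A minimal sketch of the heading-only correction idea is shown below (the actual system is available at the repository linked above): the robot replays the taught velocities and steers against the median horizontal shift of features matched between the taught and the current image. The gain, feature detector choice, and matching strategy are assumptions for this example.

```python
# Illustrative heading correction for teach-and-repeat navigation: match ORB
# features between the taught and current (grayscale) images and steer
# against their median horizontal pixel offset.
import numpy as np
import cv2

def heading_correction(img_taught, img_current, gain=0.002):
    """Return an angular-velocity correction [rad/s] from the pixel shift."""
    orb = cv2.ORB_create(500)
    k1, d1 = orb.detectAndCompute(img_taught, None)
    k2, d2 = orb.detectAndCompute(img_current, None)
    if d1 is None or d2 is None:
        return 0.0                           # landmark deficiency: keep replaying
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    if not matches:
        return 0.0
    shifts = [k2[m.trainIdx].pt[0] - k1[m.queryIdx].pt[0] for m in matches]
    return -gain * float(np.median(shifts))  # steer against the horizontal offset

# replay loop (sketch): v = taught_v[i]; w = taught_w[i] + heading_correction(...)
```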