
    Mixed marker-based/marker-less visual odometry system for mobile robots

    When moving in generic indoor environments, robotic platforms generally rely solely on information provided by onboard sensors to determine their position and orientation. The lack of absolute references, however, often introduces severe drift into the computed estimates, making autonomous operation difficult. This paper proposes a solution that alleviates these issues by combining two vision-based pose estimation techniques working in relative and absolute coordinate systems, respectively. In particular, the unknown ground features in the images captured by the vertical camera of a mobile platform are processed by a vision-based odometry algorithm that estimates the relative frame-to-frame motion. The errors accumulated in this step are then corrected using artificial markers placed at known positions in the environment. Whenever a marker is framed, the robot can bound the accumulated drift and also obtain the navigation commands needed for autonomous flight. The accuracy and robustness of the designed technique are demonstrated with an off-the-shelf quadrotor in extensive experimental tests.
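    As a rough illustration of the general scheme described above (relative odometry accumulation corrected by absolute marker fixes), the Python sketch below composes frame-to-frame motion estimates into a 2D pose and blends in an absolute pose whenever a marker is seen. The pose representation, blending weight and simulated marker detections are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def compose(pose, delta):
            """Compose a global 2D pose (x, y, theta) with a relative frame-to-frame motion."""
            x, y, th = pose
            dx, dy, dth = delta
            return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                             y + dx * np.sin(th) + dy * np.cos(th),
                             th + dth])

        def correct_with_marker(pose, marker_fix, weight=0.8):
            """Pull the drifting odometry estimate toward an absolute marker fix.
            The blending weight is an illustrative choice, not a tuned value."""
            corrected = (1.0 - weight) * pose + weight * marker_fix
            corrected[2] = np.arctan2(np.sin(corrected[2]), np.cos(corrected[2]))  # wrap heading
            return corrected

        pose = np.zeros(3)                                # (x, y, theta), starting at the origin
        for frame in range(200):
            vo_delta = np.array([0.05, 0.0, 0.002])       # stand-in for the marker-less VO output
            pose = compose(pose, vo_delta)
            if frame % 50 == 49:                          # a marker happens to enter the image
                marker_fix = np.array([2.5, 0.4, 0.1])    # placeholder absolute pose from the marker
                pose = correct_with_marker(pose, marker_fix)
        print(np.round(pose, 3))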

    Invariant EKF Design for Scan Matching-aided Localization

    Indoor localization estimates a robot's pose by fusing data from onboard motion sensors with measurements of the environment, in our case obtained by scan matching point clouds captured by a low-cost Kinect depth camera. We develop both an Invariant Extended Kalman Filter (IEKF)-based and a Multiplicative Extended Kalman Filter (MEKF)-based solution to this problem. Both designs are validated experimentally, and the results demonstrate the advantage of the IEKF design.
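    For readers unfamiliar with the distinction the abstract relies on, the error definitions below are a generic reminder (sign and composition conventions vary between papers, so they should not be read as the authors' exact formulation): the MEKF parameterizes a small multiplicative attitude error around the estimate, while the IEKF defines the error directly on the matrix Lie group containing the pose.

        % MEKF: small multiplicative attitude error between estimate \hat{q} and true q
        \delta q = \hat{q}^{-1} \otimes q \approx \begin{bmatrix} 1 \\ \tfrac{1}{2}\,\delta\boldsymbol{\theta} \end{bmatrix}
        % IEKF: error defined directly on the matrix Lie group (left- and right-invariant forms;
        % conventions differ on which factor is inverted)
        \eta^{L} = \hat{X}^{-1} X, \qquad \eta^{R} = X\,\hat{X}^{-1}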

    Data association and occlusion handling for vision-based people tracking by mobile robots

    This paper presents an approach for tracking multiple persons from a mobile robot using a combination of colour and thermal vision sensors and several new techniques. First, an adaptive colour model is incorporated into the measurement model of the tracker. Second, a new approach for detecting occlusions is introduced, using a machine-learning classifier for pairwise comparison of persons (classifying which one is in front of the other). Third, explicit occlusion handling is incorporated into the tracker. The paper presents a comprehensive, quantitative evaluation of the whole system and of its individual components on several real-world data sets.
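    The pairwise occlusion test can be pictured as a classifier that takes features of two tracked persons and predicts which one is in front. The sketch below uses a hand-written linear scorer over made-up features (bounding-box bottom edge, colour-model agreement, thermal intensity); the paper's actual feature set and learned classifier are not reproduced here.

        import numpy as np

        def pairwise_features(person_a, person_b):
            """Difference features for an (a, b) pair of tracked persons.
            The keys below are illustrative stand-ins for the paper's feature set."""
            return np.array([
                person_a["bbox_bottom"] - person_b["bbox_bottom"],     # lower image edge usually means closer
                person_a["colour_score"] - person_b["colour_score"],   # agreement with the adaptive colour model
                person_a["thermal_mean"] - person_b["thermal_mean"],
            ])

        # Illustrative weights of a linear "who is in front" scorer; a trained
        # classifier (e.g. logistic regression) would replace these.
        WEIGHTS = np.array([0.6, 0.3, 0.1])

        def a_in_front_of_b(person_a, person_b):
            """Return True if person_a is predicted to occlude person_b."""
            return float(WEIGHTS @ pairwise_features(person_a, person_b)) > 0.0

        a = {"bbox_bottom": 410.0, "colour_score": 0.8, "thermal_mean": 0.7}
        b = {"bbox_bottom": 355.0, "colour_score": 0.4, "thermal_mean": 0.6}
        print(a_in_front_of_b(a, b))  # True with these illustrative numbers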

    A practical multirobot localization system

    We present a fast and precise vision-based software system intended for localizing multiple robots. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and has a computational complexity that is independent of the processed image size. With off-the-shelf computing equipment and low-cost cameras, the core algorithm can process hundreds of images per second while tracking hundreds of objects with millimetre precision. In addition, we present the method's mathematical model, which allows the expected localization precision, coverage area, and processing speed to be estimated from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from describing the method, we also release its source code at http://purl.org/robotics/whycon so that it can serve as an enabling technology for various mobile robotics problems.
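    To give a feel for the kind of quantities such a model predicts, the following back-of-envelope sketch estimates ground coverage and metric precision from camera intrinsics under a simple pinhole assumption. It is not the published WhyCon model; the focal length, resolution and camera height are illustrative numbers.

        # Back-of-envelope estimates under a pinhole camera model; this is NOT the
        # published model, just an illustration of the quantities it predicts.

        def ground_coverage(width_px, height_px, fx, fy, z):
            """Approximate observed ground area (m^2) for a camera z metres above the floor."""
            return (width_px * z / fx) * (height_px * z / fy)

        def position_precision(subpixel_err, fx, z):
            """Metres corresponding to a given sub-pixel detection error at height z."""
            return subpixel_err * z / fx

        # Illustrative numbers: 640x480 camera, ~700 px focal length, 3 m above the arena.
        print(ground_coverage(640, 480, 700.0, 700.0, 3.0))   # ~5.6 m^2 of floor in view
        print(position_precision(0.1, 700.0, 3.0) * 1000.0)   # ~0.43 mm per 0.1 px of error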

    Robot training using system identification

    This paper focuses on developing a formal, theory-based design methodology to generate transparent robot control programs using mathematical functions. The research has its theoretical roots in robot training and in system identification techniques such as ARMAX (Auto-Regressive Moving Average models with eXogenous inputs) and NARMAX (Non-linear ARMAX). These techniques produce linear and non-linear polynomial functions that model the relationship between a robot's sensor perception and its motor response. The main benefits of the proposed design methodology, compared with traditional robot programming techniques, are: (i) it is a fast and efficient way of generating robot control code; (ii) the generated control programs are transparent mathematical functions that can be used to form hypotheses and theoretical analyses of robot behaviour; and (iii) it requires very little explicit knowledge of robot programming, so end users without specialised programming skills can nevertheless generate task-achieving sensor-motor couplings. The research is concerned with obtaining such sensor-motor couplings, whether through human demonstration via the robot, direct human demonstration, or other means. The viability of the methodology is demonstrated by teaching various mobile robots different sensor-motor tasks such as wall following, corridor passing, door traversal and route learning.
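    A minimal flavour of the identification step can be given with an ARX-style polynomial regression: fit, by linear least squares, a polynomial that maps sensor readings to a motor command. The real NARMAX procedure involves term selection, noise modelling and validation, and the data below is synthetic, so this is only a sketch of the idea.

        import numpy as np

        rng = np.random.default_rng(0)
        left, right = rng.uniform(0.1, 2.0, size=(2, 500))       # simulated range readings (m)
        turn = 0.8 * (left - right) - 0.2 * (left - right) ** 3   # "unknown" wall-following law
        turn += 0.01 * rng.standard_normal(500)                   # sensing/actuation noise

        # Candidate polynomial terms up to degree 3 in the left/right difference
        d = left - right
        Phi = np.column_stack([np.ones_like(d), d, d ** 2, d ** 3])
        coeffs, *_ = np.linalg.lstsq(Phi, turn, rcond=None)
        print(np.round(coeffs, 3))                                 # approximately [0, 0.8, 0, -0.2]

        def controller(left_reading, right_reading):
            """Transparent sensor-motor coupling recovered from the training data."""
            x = left_reading - right_reading
            return coeffs @ np.array([1.0, x, x ** 2, x ** 3])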

    Indoor Localization System based on Artificial Landmarks and Monocular Vision

    This paper presents a visual localization approach suitable for domestic and industrial environments, as it enables accurate, reliable and robust pose estimation. The mobile robot is equipped with a single camera and updates its pose whenever a landmark is visible in the field of view. The innovation of this research lies in the artificial landmark system, which is able to detect the presence of the robot: both entities communicate with each other using a frequency-modulated infrared signal protocol. Besides this communication capability, each landmark carries several high-intensity light-emitting diodes (LEDs) that flash only at specific instants determined by the communication, which allows the camera shutter to be synchronized with the blinking of the LEDs. This synchronization increases the system's tolerance to changes in ambient brightness over time, independently of the landmarks' locations. The environment's ceiling is populated with several landmarks, and an Extended Kalman Filter combines the dead-reckoning and landmark information, which increases the flexibility of the system by reducing the number of landmarks required. The experimental evaluation was conducted in a real indoor environment with an autonomous wheelchair prototype.
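    The fusion step can be sketched as a small planar EKF whose state is the robot pose (x, y, theta): dead-reckoning drives the prediction, and a landmark detection, expressed as the known landmark's position measured in the robot frame, drives the correction. The motion and measurement models, noise values and landmark layout below are illustrative assumptions rather than the paper's implementation.

        import numpy as np

        def predict(x, P, u, Q):
            """Dead-reckoning step; u = (dx, dy, dtheta) expressed in the robot frame."""
            c, s = np.cos(x[2]), np.sin(x[2])
            x = x + np.array([c * u[0] - s * u[1], s * u[0] + c * u[1], u[2]])
            F = np.array([[1.0, 0.0, -s * u[0] - c * u[1]],
                          [0.0, 1.0,  c * u[0] - s * u[1]],
                          [0.0, 0.0,  1.0]])
            return x, F @ P @ F.T + Q

        def update(x, P, z, landmark, R_meas):
            """Correct the pose with the landmark position measured in the robot frame."""
            c, s = np.cos(x[2]), np.sin(x[2])
            Rt = np.array([[c, s], [-s, c]])              # world -> robot rotation
            d = landmark - x[:2]
            h = Rt @ d                                    # predicted measurement
            H = np.hstack([-Rt, (np.array([[-s, c], [-c, -s]]) @ d).reshape(2, 1)])
            S = H @ P @ H.T + R_meas
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - h)
            P = (np.eye(3) - K @ H) @ P
            return x, P

        x, P = np.zeros(3), np.eye(3) * 0.01
        Q, R_meas = np.diag([1e-4, 1e-4, 1e-5]), np.eye(2) * 1e-3
        landmark = np.array([2.0, 1.0])                   # known ceiling-landmark position (m)

        x, P = predict(x, P, np.array([0.10, 0.0, 0.02]), Q)
        x, P = update(x, P, np.array([1.88, 0.96]), landmark, R_meas)  # simulated detection
        print(np.round(x, 3))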