317 research outputs found

    Visual Localisation of Quadruped Walking Robots


    Searching and tracking people with cooperative mobile robots

    Social robots should be able to search for and track people in order to help them. In this paper we present two techniques for coordinated multi-robot teams that search for and track people. A probability map (belief) of the target person's location is maintained; to initialize and update it, two methods were implemented and tested: one based on a reinforcement learning algorithm and the other on a particle filter. The person is tracked when visible; otherwise the team explores, scoring each candidate location by balancing its belief value, its distance, and whether nearby locations are already being explored by other robots of the team. The approach was validated through an extensive set of simulations using up to five agents and a large number of dynamic obstacles; furthermore, over three hours of real-life experiments with two robots searching and tracking were recorded and analysed.
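    As a rough illustration of the exploration balance described above, the sketch below scores each candidate location by combining its belief value, its travel distance, and its proximity to goals already claimed by teammates. All names and weights are hypothetical assumptions; the abstract does not specify this exact form.

```python
import math

def exploration_score(candidate, belief, robot_pos, teammate_goals,
                      w_belief=1.0, w_dist=0.5, w_team=0.8):
    """Illustrative utility for one candidate location: reward high
    target-location belief, penalise travel distance, and penalise
    candidates close to locations claimed by teammates.
    Weights are hypothetical, not taken from the paper."""
    dist = math.dist(candidate, robot_pos)
    # Penalty grows as the candidate nears any teammate's chosen goal.
    team_penalty = sum(1.0 / (1.0 + math.dist(candidate, g))
                       for g in teammate_goals)
    return w_belief * belief[candidate] - w_dist * dist - w_team * team_penalty

def choose_goal(candidates, belief, robot_pos, teammate_goals):
    """Pick the candidate with the highest exploration score."""
    return max(candidates,
               key=lambda c: exploration_score(c, belief, robot_pos,
                                               teammate_goals))
```

    Here `belief` is assumed to be a mapping from candidate locations to probabilities, updated by whichever of the two methods (reinforcement learning or particle filter) is in use.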

    Self-localization based on Image Features of Omni-directional Image

    An omni-vision system using an omni-directional mirror is a popular way to acquire information about the environment around an autonomous mobile robot. In the RoboCup soccer Middle Size League in particular, self-localization methods based on extracting the white lines of the soccer field are common. We have studied a self-localization method based on image features such as SIFT and SURF, and conducted comparative studies against a conventional self-localization method based on white line extraction. Compared to white line extraction, the image-feature-based method can be applied to a general environment with a compact database.
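    The feature-based approach can be pictured as matching the current omni-directional image against a database of pose-tagged images. The sketch below, using OpenCV's SIFT and Lowe's ratio test, is an assumed reading of how such a lookup might work, not the authors' implementation.

```python
import cv2

def localize(query_img, database, ratio=0.75):
    """Match SIFT features of the current omni-image against a database of
    (descriptors, pose) entries built offline at known poses, and return the
    pose of the best-matching entry. A sketch only; the paper's matching and
    any pose refinement may differ."""
    sift = cv2.SIFT_create()
    _, query_desc = sift.detectAndCompute(query_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    best_pose, best_count = None, 0
    for db_desc, pose in database:
        # Lowe's ratio test filters ambiguous matches.
        pairs = matcher.knnMatch(query_desc, db_desc, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        if len(good) > best_count:
            best_pose, best_count = pose, len(good)
    return best_pose
```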

    Object Position Estimation based on Dual Sight Perspective Configuration

    Developing a coordination system requires a dataset, because the dataset provides information about the system's surroundings that the coordination system can use to make decisions. The capability to process and display the positions of objects around the robots is therefore necessary. This paper presents a method to predict an object's position, based on the Indoor Positioning System (IPS) idea and on object position estimation with a multi-camera system (i.e., stereo vision). The method needs two inputs to estimate the ball position: the input image and the robot's relative position. The approach adopts simple, easy calculation techniques: trigonometry, angle rotations, and linear functions. The method was tested on a ROS and Gazebo simulation platform. The experimental results show that this configuration can estimate an object's position with a mean squared error of 0.383 meters; moreover, the R-squared value of the distance calibration is 0.9932, which implies that the system estimates an object's position very well.
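    One simple reading of the dual-sight configuration is triangulation: each robot contributes a world-frame bearing ray toward the ball, and the estimate is the intersection of the two rays, using exactly the trigonometry, angle rotations, and linear functions the abstract mentions. The sketch below makes that concrete under assumed conventions (2-D world frame, bearings already rotated into it); it is not the paper's exact formulation.

```python
import math

def estimate_object_position(p1, theta1, p2, theta2):
    """Intersect two bearing rays to the object, one from each robot.
    p1, p2: (x, y) robot positions in the world frame.
    theta1, theta2: world-frame bearing angles to the object
    (robot heading plus camera bearing, already rotated).
    A simplified sketch of the dual-sight idea."""
    # Each ray: (x, y) = p + t * (cos(theta), sin(theta)), t >= 0.
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]   # cross product of directions
    if abs(denom) < 1e-9:
        return None  # rays are parallel: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    # Solve p1 + t*d1 = p2 + s*d2 for t by Cramer's rule.
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

    For example, two robots at (0, 0) and (4, 0) seeing the ball at bearings of 45 and 135 degrees would place it at (2, 2).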

    A reliability-based particle filter for humanoid robot self-localization in RoboCup Standard Platform League

    This paper deals with the problem of humanoid robot localization and proposes a new method for position estimation developed for the RoboCup Standard Platform League environment. First, a complete vision system was implemented on the Nao robot platform that enables the detection of relevant field markers; the detected field markers provide distance estimates for the current robot position. To reduce errors in these distance measurements, extrinsic and intrinsic camera calibration procedures were developed and are described. To validate the localization algorithm, experiments covering many of the typical situations that arise during RoboCup games were performed, ranging from degradation in position estimation to total loss of position (due to falls, the 'kidnapped robot' problem, or penalization). The self-localization method is based on the classical particle filter algorithm. The main contribution of this work is a new particle selection strategy that reduces the CPU time required for each iteration, easing the limited resource availability common to robot platforms such as the Nao. The experimental results show the quality of the new algorithm in terms of both localization and CPU time consumption.

    This work was supported by the Spanish Science and Innovation Ministry (MICINN) under CICYT project COBAMI: DPI2011-28507-C02-01/02. The responsibility for the content remains with the authors.

    Munera Sánchez, E.; Muñoz Alcobendas, M.; Blanes Noguera, F.; Benet Gilabert, G.; Simó Ten, J. E. (2013). A reliability-based particle filter for humanoid robot self-localization in RoboCup Standard Platform League. Sensors, 13(11), 14954-14983. https://doi.org/10.3390/s131114954
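    For context, the sketch below shows one cycle of the classical particle filter the paper builds on: predict with noisy odometry, weight particles by agreement with measured landmark distances, and resample. The paper's contribution, the reliability-based particle selection strategy, is not reproduced here; the resampling shown is the standard weighted draw, and all noise parameters are illustrative assumptions.

```python
import math
import random

def pf_update(particles, odometry, measured_dists, landmarks,
              motion_noise=0.05, sensor_sigma=0.2):
    """One cycle of a classical particle filter for a 2-D pose (x, y, theta).
    This is the baseline algorithm; the paper's reliability-based
    particle selection replaces the plain resampling step."""
    dx, dy, dtheta = odometry
    weighted = []
    for (x, y, th) in particles:
        # Prediction: apply odometry with additive Gaussian noise.
        x += dx + random.gauss(0, motion_noise)
        y += dy + random.gauss(0, motion_noise)
        th += dtheta + random.gauss(0, motion_noise)
        # Weighting: compare each measured landmark distance with the
        # distance expected from this particle's pose.
        w = 1.0
        for (lx, ly), z in zip(landmarks, measured_dists):
            expected = math.hypot(lx - x, ly - y)
            w *= math.exp(-((z - expected) ** 2) / (2 * sensor_sigma ** 2))
        weighted.append(((x, y, th), w))
    total = sum(w for _, w in weighted)
    if total == 0:
        return [p for p, _ in weighted]  # all weights vanished: keep predictions
    # Resampling: draw particles in proportion to their weights.
    poses = [p for p, _ in weighted]
    weights = [w / total for _, w in weighted]
    return random.choices(poses, weights=weights, k=len(particles))
```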
    • …