
    Active vision-based localization for robots in a home-tour scenario

    Self-localization is a crucial task for mobile robots. It is not only a requirement for autonomous navigation but also provides contextual information to support human-robot interaction (HRI). In this paper we present an active vision-based localization method designed for integration into a complex robot system operating in human interaction scenarios (e.g. a home tour) in a real-world apartment. The holistic features used are robust to illumination and structural changes in the scene. The system uses only a single pan-tilt camera, shared between different vision applications running in parallel, to reduce the number of sensors. Additional information from other modalities (such as laser scanners) can be used, benefiting from integration into an existing system. The camera view can be actively adapted, and the evaluation showed that different rooms can be discerned.

    Self-localization based on Image Features of Omni-directional Image

    An omni-vision system using an omni-directional mirror is a popular way to acquire information about the environment around an autonomous mobile robot. In the RoboCup soccer middle-size robot league in particular, self-localization methods based on extracting the white lines on the soccer field are common. We have studied a self-localization method based on image features such as SIFT and SURF. Comparative studies with a conventional self-localization method based on white line extraction were conducted. Compared to the white-line-based method, the image-feature-based method can be applied to a general environment with a compact database.
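    Feature-database matching of the kind described above (with SIFT or SURF descriptors) is commonly implemented as a nearest-neighbour search with a ratio test. The NumPy sketch below is a generic illustration under that assumption, not the paper's implementation; the toy descriptors and the `ratio` threshold are made up:

```python
import numpy as np

def match_descriptors(query, database, ratio=0.75):
    """Match query descriptors to a database using nearest-neighbour
    search with Lowe's ratio test, as commonly done with SIFT/SURF."""
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(database - q, axis=1)  # distance to every db descriptor
        nearest = np.argsort(d)[:2]               # two closest candidates
        # accept only unambiguous matches: best clearly beats second best
        if d[nearest[0]] < ratio * d[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches

# Toy example: three database descriptors, two query descriptors
db = np.array([[1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
qs = np.array([[1.1, 0.1], [9.8, 10.2]])
print(match_descriptors(qs, db))  # → [(0, 0), (1, 2)]
```

    In a full localization pipeline, the matched database descriptors would vote for the map location (or pose) from which they were originally recorded.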

    Development of a Vision-Based Mobile Robot Navigation System for Golf Balls Detection and Location

    A significant challenge in the design of an autonomous mobile robot is the reliable detection and tracking of targets and obstacles. Many types of sensors are used for these purposes, such as infrared, sonar, vision and laser sensors. Monocular vision is one of the methods used, due to its simplicity and lower computational cost compared to stereo vision. Following current trends in autonomous mobile robot development, the vision sensor serves different functions such as target recognition, obstacle avoidance and navigation. To fulfill such demands the mobile robot should be able to estimate the distance of the detected targets and their angles from its current location. From the extracted information, the motion of the mobile robot can be planned efficiently for the target retrieval task. This thesis addresses the issue of golf ball localization. The sensor used for localization is a single color webcam. The experiments involve localizing stationary golf balls in indoor and outdoor scenes. The objective is to localize golf balls at various locations so that they can be retrieved by the mobile robot. The distance to the golf balls is estimated from their diameter. This is based on the perspective-view concept, where the apparent golf ball size is inversely proportional to its distance from the webcam. Golf ball detection is done using color segmentation in the RGB (red, green and blue) color space. A vector a, representing the mean value of the target sample, is calculated, along with the mean and standard deviation of each color component. The threshold values lie in the range μ ± σ, which represents a square bounding box in RGB color space centered at a. Every pixel in the test image is tested for whether it lies within the bounding box, in which case it counts as a target pixel. This segmentation technique avoids the high computation time of full color image processing. Simple features such as diameter, x-y ratio and area are used as inputs to a k-nearest-neighbors (k-NN) classifier.
    The software is developed in Visual Basic 6, with a laptop computer acting as the controller and handling image acquisition and processing. The localization process takes less than one second to complete. The technique has been tested in indoor and outdoor environments. The estimation efficiency is more than 90 percent, provided the targets are less than 50 percent occluded.
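    The μ ± σ bounding-box segmentation and the diameter-based distance rule described above can be sketched briefly; this is an illustrative reconstruction in Python (the thesis used Visual Basic 6), and all sample pixel values and the reference calibration below are made up:

```python
import numpy as np

def fit_color_box(samples):
    """Per-channel mean and std of target RGB samples -> mu ± sigma box."""
    mu = samples.mean(axis=0)
    sigma = samples.std(axis=0)
    return mu - sigma, mu + sigma

def segment(image, lo, hi):
    """Boolean mask: True where a pixel lies inside the RGB bounding box."""
    return np.all((image >= lo) & (image <= hi), axis=-1)

def estimate_distance(diameter_px, ref_diameter_px, ref_distance):
    """Apparent diameter is inversely proportional to distance."""
    return ref_distance * ref_diameter_px / diameter_px

# Toy calibration: ball samples cluster around RGB (200, 200, 200)
samples = np.array([[200., 200., 200.], [210., 205., 195.], [190., 195., 205.]])
lo, hi = fit_color_box(samples)

# One ball-colored pixel and one background pixel
image = np.array([[[200., 200., 200.], [50., 50., 50.]]])
print(segment(image, lo, hi))  # → [[ True False]]

# A ball seen at 40 px from 1 m appears as 20 px at twice the distance
print(estimate_distance(20, 40, ref_distance=1.0))  # → 2.0
```

    The mask's connected target pixels would then yield the diameter, x-y ratio and area features fed to the k-NN classifier.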

    Real-time game field limits recognition for robot self-localization using collinearity in middle-size RoboCup soccer

    Enabling a mobile robot to achieve self-localization in real time with vision only demands new approaches and new computer algorithms. An approach for providing game-field self-localization to a Middle Size RoboCup Soccer robot can be based on two steps: finding the game field lines, and then evaluating the obtained coordinates to calculate the robot's position. This paper describes a method to achieve the first step. The approach is based on an algorithm that combines three major operations: edge detection, selection, and collinearity search. The final target is to retrieve the line segments (defined by the coordinates of their two endpoints) that identify the game field boundary lines. These line coordinates are then used in the next step, the process of calculating the robot's position on the game field. Since this first step must find lines in real time, it serves as an alternative to the Hough transform method.
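    The collinearity search at the core of this approach can be illustrated with a cross-product test: three points are collinear when the signed area they span is (near) zero. The following sketch is a minimal illustration, not the paper's algorithm; the greedy segment growing and the sample edge points are assumptions:

```python
def collinear(p, q, r, tol=1e-6):
    """True if r lies on the line through p and q (cross-product test)."""
    cross = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return abs(cross) <= tol

def grow_segment(points, tol=1e-6):
    """Greedily extend a segment from the first two edge points while
    subsequent points stay collinear; return the two endpoint coordinates."""
    seg = [points[0], points[1]]
    for pt in points[2:]:
        if collinear(seg[0], seg[1], pt, tol):
            seg.append(pt)
        else:
            break
    return seg[0], seg[-1]

# Four collinear edge points followed by an outlier
edges = [(0, 0), (1, 1), (2, 2), (3, 3), (5, 2)]
print(grow_segment(edges))  # → ((0, 0), (3, 3))
```

    Unlike the Hough transform, which accumulates votes over a parameter space, this style of search works directly on ordered edge points, which is what makes it attractive for real-time use.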

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as to represent the local 3D geometry of the environment. A series of experiments demonstrates the persistence performance of the proposed system in real changing environments, including an analysis of its long-term stability.

    An adaptive appearance-based map for long-term topological localization of mobile robots

    This work considers a mobile service robot that uses an appearance-based representation of its workplace as a map, where the current view and the map are used to estimate the current position in the environment. Due to the nature of real-world environments such as houses and offices, where the appearance keeps changing, the internal representation may become out of date after some time. To solve this problem the robot needs to be able to adapt its internal representation continually to the changes in the environment. This paper presents a method for creating an adaptive map for long-term appearance-based localization of a mobile robot, using long-term and short-term memory concepts, with omni-directional vision as the external sensor.
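    The long-term/short-term memory idea shared by the two abstracts above can be sketched as a toy multi-store update rule: repeatedly re-observed features are promoted from short-term to long-term memory, while long-term features that stop appearing are eventually forgotten. The class below is an illustrative model only; the promotion and forgetting thresholds are invented, not taken from the papers:

```python
class AdaptiveMemoryMap:
    """Toy multi-store memory for one map node: features enter short-term
    memory (STM) and are promoted to long-term memory (LTM) after being
    observed `promote_after` times; LTM entries unseen for `forget_after`
    consecutive updates are discarded. Thresholds are illustrative."""

    def __init__(self, promote_after=3, forget_after=5):
        self.promote_after = promote_after
        self.forget_after = forget_after
        self.stm = {}  # feature id -> observation count
        self.ltm = {}  # feature id -> updates since last seen

    def update(self, observed):
        observed = set(observed)
        # Reinforce re-observed LTM entries; age and forget the rest
        for f in list(self.ltm):
            if f in observed:
                self.ltm[f] = 0
            else:
                self.ltm[f] += 1
                if self.ltm[f] >= self.forget_after:
                    del self.ltm[f]  # environment changed: forget feature
        # Count repeated observations in STM; promote stable features
        for f in observed - set(self.ltm):
            self.stm[f] = self.stm.get(f, 0) + 1
            if self.stm[f] >= self.promote_after:
                self.ltm[f] = 0
                del self.stm[f]
```

    Localization would then match the current view only against LTM features, so that transient appearance changes in STM do not corrupt the map.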

    Monocular navigation for long-term autonomy

    We present a reliable and robust monocular navigation system for an autonomous vehicle. The proposed method is computationally efficient, needs only off-the-shelf equipment and does not require any additional infrastructure such as radio beacons or GPS. Contrary to traditional localization algorithms, which use advanced mathematical methods to determine vehicle position, our method uses a more practical approach. In our case, an image-feature-based monocular vision technique determines only the heading of the vehicle, while the vehicle's odometry is used to estimate the distance traveled. We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded. The experiments demonstrate that the method can cope with variable illumination, lighting deficiency, and both short- and long-term environment changes. This makes the method especially suitable for deployment in scenarios that require long-term autonomous operation.
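    The division of labor described above (vision corrects heading only, odometry measures distance) can be illustrated with a toy simulation in which each visual match steers the robot back toward the taught path, so an initial lateral offset decays rather than accumulating. The correction law, gains and numbers below are illustrative assumptions, not the paper's controller:

```python
import math

def navigate(segment_heading, steps, step_len=0.1, gain=0.5, init_error=0.3):
    """Dead reckoning with vision-corrected heading: odometry advances the
    robot each step, while the visual heading correction steers it back
    toward the taught segment heading, biased against the current lateral
    offset y. Gains and the initial offset are illustrative."""
    x, y, theta = 0.0, init_error, segment_heading + 0.4  # start off-course
    for _ in range(steps):
        x += step_len * math.cos(theta)  # odometry: distance traveled
        y += step_len * math.sin(theta)
        # visual correction: heading pulled toward the taught direction,
        # with a component that reduces the lateral offset
        theta = segment_heading - gain * y
    return x, y

x, y = navigate(segment_heading=0.0, steps=200)
print(f"final lateral offset: {y:.4f}")  # decays toward zero, staying bounded
```

    Because the heading correction always opposes the lateral offset, the offset contracts each step instead of growing with distance traveled, which is the intuition behind the bounded-error claim.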