
    A Novel Omnidirectional Stereo Vision System with a Single Camera

    The omnidirectional vision system has received increasing attention in recent years in many engineering research areas, such as computer vision and mobile robotics, since it offers a wide field of view (FOV). A general method for 360° omnidirectional image acquisition is the catadioptric approach, which uses a coaxially aligned convex mirror and a conventional camera.
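As a rough illustration of the catadioptric geometry described above, a panoramic (unwrapped) image can be produced by mapping each panorama column angle and row radius back to polar coordinates around the mirror centre. A minimal sketch; the centre coordinates, radii, and nearest-neighbour sampling are assumptions, not details from the paper:

```python
import math

def unwrap_coords(theta, r, cx, cy):
    """Map a panorama sample at azimuth theta (radians) and radial
    distance r (pixels) back to pixel coordinates (x, y) in the raw
    omnidirectional image, whose mirror centre projects to (cx, cy)."""
    x = cx + r * math.cos(theta)
    y = cy + r * math.sin(theta)
    return x, y

def unwrap(width, height, r_min, r_max, cx, cy):
    """For each pixel of a width-by-height panorama, return the source
    coordinates to sample from the omnidirectional image."""
    grid = []
    for row in range(height):
        # Interpolate the sampling radius between the inner and outer
        # rings of the mirror image.
        r = r_min + (r_max - r_min) * row / max(height - 1, 1)
        grid.append([unwrap_coords(2 * math.pi * col / width, r, cx, cy)
                     for col in range(width)])
    return grid
```

In practice the returned coordinates would drive a bilinear-interpolation lookup into the captured mirror image.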

    Omnidirectional Vision-based Robot Localization on Soccer Field by Particle Filter

    An omnidirectional vision-based localization method based on the particle filter is proposed to estimate the location of a robot on a soccer field. Two kinds of sensor information are combined so that the robot can estimate its location and then decide an appropriate strategy: the action sensor information obtained from the motors' feedback, and the observation sensor information obtained from the image captured by an omnidirectional vision system. The action sensor information is used to predict the robot's location distribution, which is represented by particles. The omnidirectional image is used to observe the environment. The differences between the environment information at each particle's location and the robot's observed environment information are used to calculate the belief values of the particles, and the pose of the particle with the highest belief is taken as the estimated pose of the robot. Experimental results are presented to illustrate the effectiveness of the proposed method. (International conference, 2010-08-18 to 2010-08-21, Taipei, Taiwan.)
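The predict/weight/estimate cycle the abstract describes can be sketched as a single particle-filter step. The noise model, the `sensor_model` similarity function, and the resampling scheme are assumptions for illustration; the paper's own environment comparison is image-based:

```python
import random

def particle_filter_step(particles, action, observation, sensor_model, noise=0.1):
    """One cycle of a 2-D particle filter: predict from odometry,
    weight by observation similarity, estimate, then resample."""
    # Predict: apply the odometry action to every particle with noise.
    moved = [(x + action[0] + random.gauss(0, noise),
              y + action[1] + random.gauss(0, noise))
             for (x, y) in particles]
    # Update: belief = similarity between the particle's expected
    # environment reading and the robot's omnidirectional observation.
    beliefs = [sensor_model(p, observation) for p in moved]
    # Estimate: the pose of the highest-belief particle, as in the paper.
    best = moved[beliefs.index(max(beliefs))]
    # Resample proportionally to belief for the next iteration.
    total = sum(beliefs)
    weights = [b / total for b in beliefs]
    resampled = random.choices(moved, weights=weights, k=len(moved))
    return resampled, best
```

A `sensor_model` could be as simple as `exp(-distance²)` between the particle's predicted landmark positions and those seen in the omnidirectional image.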

    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and vehicle classification have become indispensable tasks for security measures in certain areas such as shopping centres, government buildings, and army camps. The main challenge is monitoring the underframes of vehicles. In this paper we present a novel solution consisting of three main parts: monitoring, detection, and classification. In the first part we design a new catadioptric camera system in which the perspective camera points downwards at a catadioptric mirror mounted on the body of a mobile robot; thanks to the mirror, scenes opposite the camera's optical-axis direction can be viewed. In the second part we use speeded-up robust features (SURF) in an object recognition algorithm. In the third part, the fast appearance-based mapping algorithm (FAB-MAP) is exploited to classify the vehicles. The proposed technique was implemented in a laboratory environment.
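SURF-based recognition of the kind mentioned above typically compares descriptor vectors with nearest-neighbour matching plus Lowe's ratio test. A minimal pure-Python sketch of that matching step (the descriptors here are toy vectors, and the 0.7 ratio is a conventional assumption, not a value from the paper):

```python
def match_descriptors(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour descriptor matching with the ratio test:
    accept a match only if the best candidate is clearly closer
    than the second-best."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        # Rank all candidate descriptors in desc_b by distance.
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches
```

Real SURF descriptors are 64- or 128-dimensional floating-point vectors, but the matching logic is the same.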

    Long-term experiments with an adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrates the persistence of the proposed system in real changing environments, including an analysis of its long-term stability.
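The multi-store idea borrowed from human-memory models can be caricatured as a two-stage store: observations rehearse a view in short-term memory, frequently re-observed views are promoted to the long-term reference map, and rarely seen views are forgotten. The thresholds and eviction rule below are illustrative assumptions, not the paper's actual mechanism:

```python
def observe_view(short_term, long_term, view_id, promote_at=3, capacity=5):
    """Rehearse view_id in short-term memory (a dict of rehearsal
    counts); promote it to the long-term set once re-observed
    promote_at times, and forget the least-rehearsed view when
    short-term memory overflows."""
    if view_id in long_term:
        return  # already a stable reference view
    short_term[view_id] = short_term.get(view_id, 0) + 1
    if short_term[view_id] >= promote_at:
        long_term.add(view_id)
        del short_term[view_id]
    elif len(short_term) > capacity:
        # Evict the view with the fewest rehearsals.
        forgotten = min(short_term, key=short_term.get)
        del short_term[forgotten]
```

In the paper each "view" would be a spherical feature representation rather than an opaque identifier.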

    An adaptive spherical view representation for navigation in changing environments

    Real-world environments such as houses and offices change over time, meaning that a mobile robot’s map will become out of date. In previous work we introduced a method to update the reference views in a topological map so that a mobile robot could continue to localize itself in a changing environment using omnidirectional vision. In this work we extend this long-term updating mechanism to incorporate a spherical metric representation of the observed visual features for each node in the topological map. Using multi-view geometry we are then able to estimate the heading of the robot, in order to enable navigation between the nodes of the map, and to simultaneously adapt the spherical view representation in response to environmental changes. The results demonstrate the persistent performance of the proposed system in a long-term experiment.
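For intuition about heading estimation from spherical views: if features matched between a reference view and the current view are reduced to bearing angles, the rotation between the views can be estimated as the circular mean of the bearing offsets. This is a deliberate simplification for illustration; the paper uses full multi-view geometry rather than a planar bearing-only model:

```python
import math

def estimate_heading(ref_bearings, cur_bearings):
    """Estimate the heading change (radians) between two views from
    matched feature bearings, via the circular mean of the offsets.
    Averaging sin/cos avoids wrap-around problems at ±pi."""
    s = sum(math.sin(b - a) for a, b in zip(ref_bearings, cur_bearings))
    c = sum(math.cos(b - a) for a, b in zip(ref_bearings, cur_bearings))
    return math.atan2(s, c)
```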

    A vision-guided parallel parking system for a mobile robot using approximate policy iteration

    Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, reinforcement learning can be very time-consuming. This paper focuses on accelerating the reinforcement learning process on a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions. Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge on an optimal policy with less computational time. A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real-robot experiments, with comparisons drawn against two well-known algorithms (Dyna-Q and Q-learning).
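The core ingredients named above (a continuous state, a fixed set of discrete actions, and an action-value function expressed as a weighted combination of basis functions) can be sketched as follows. The polynomial basis, the three-action layout, and the gradient-style evaluation sweep are illustrative assumptions; the paper's method performs a proper approximate policy-iteration solve:

```python
def features(state, action, n_actions=3):
    """One block of basis functions [1, s, s^2] per discrete action
    (a common construction, assumed here for illustration)."""
    phi = [0.0] * (3 * n_actions)
    phi[3 * action:3 * action + 3] = [1.0, state, state ** 2]
    return phi

def q_value(w, state, action):
    """Action value as a weighted combination of basis functions."""
    return sum(wi * fi for wi, fi in zip(w, features(state, action)))

def greedy(w, state, n_actions=3):
    """Policy improvement: pick the action with the highest Q."""
    return max(range(n_actions), key=lambda a: q_value(w, state, a))

def policy_iteration_step(w, samples, gamma=0.95, lr=0.1):
    """One approximate policy-evaluation sweep: fit Q toward one-step
    returns under the current greedy policy, from (s, a, r, s') samples."""
    for s, a, r, s2 in samples:
        target = r + gamma * q_value(w, s2, greedy(w, s2))
        td = target - q_value(w, s, a)
        phi = features(s, a)
        w = [wi + lr * td * fi for wi, fi in zip(w, phi)]
    return w
```

Alternating `policy_iteration_step` (evaluation) with the implicit improvement in `greedy` gives the approximate-policy-iteration loop; for a parking task, `state` would be the multi-dimensional vehicle pose rather than a scalar.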