
    Grey-Level Cooccurrence Matrix Performance Evaluation for Heading Angle Estimation of Moveable Vision System in Static Environment

    A method for extracting information to estimate the heading angle of a vision system is presented. The grey-level co-occurrence matrix (GLCM) is integrated into area-of-interest selection to choose a region suitable for optical flow generation. Optical flow is then computed over the selected area using the Horn-Schunck method, and from the generated flow the heading angle is estimated and smoothed with a moving median filter (MMF). To ascertain the effectiveness of GLCM, the result is compared with a heading estimate obtained from optical flow generated directly on the unprocessed greyscale images. The estimated heading is compared to the true heading, and the error is evaluated via mean absolute error (MAE). The results confirm that GLCM significantly improves the heading angle estimation of the vision system.
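The region-selection step above can be sketched in code: compute a GLCM per image block and keep the block with the highest contrast, on the assumption that a texture-rich region gives the optical flow solver more to work with. This is a minimal illustration, not the paper's implementation; the horizontal co-occurrence offset, block size of 32, and 8 quantization levels are all illustrative choices.

```python
import numpy as np

def glcm_contrast(block, levels=8):
    """GLCM contrast of a grey-scale block, using a horizontal (0, 1) offset.

    Pixel values in [0, 255] are quantized to `levels` grey levels, horizontal
    neighbour pairs are counted into a co-occurrence matrix, and the contrast
    statistic sum((i - j)^2 * p(i, j)) is returned.
    """
    q = np.clip((block.astype(np.int64) * levels) // 256, 0, levels - 1)
    m = np.zeros((levels, levels), dtype=np.float64)
    # Accumulate counts of (left pixel level, right pixel level) pairs.
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    total = m.sum()
    if total > 0:
        m /= total  # normalize to a joint probability matrix
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * m).sum())

def select_textured_block(image, block=32):
    """Return the (row, col) origin of the block with the highest GLCM contrast."""
    h, w = image.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            score = glcm_contrast(image[r:r + block, c:c + block])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc
```

On an image whose right half is textured noise and whose left half is flat, the selector lands in the right half: the flat blocks have zero contrast (all pixel pairs fall in the same grey level), so any textured block wins.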

    Video local pattern based image matching for visual mapping

    Image matching plays an important role in visual mapping, a critical task in vision-based mobile robot navigation. Based on the observation that the visual content of these video sequences generally changes slowly and continuously, the concept of a video local pattern is proposed to model each video frame: the set of frames that are visually similar and temporally adjacent to that frame. Instead of manual labelling, a tracking-based method is developed to detect the local pattern of each frame automatically. A model is then estimated from each local pattern, and images are matched by comparing these models. Experimental results demonstrate an improvement over matching individual frames.
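The local-pattern idea can be sketched as follows, with grey-level histograms standing in for the paper's visual features and a threshold on histogram intersection standing in for its tracking step; the bin count, similarity threshold, and mean-histogram model are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

def frame_hist(frame, bins=16):
    """Normalized grey-level histogram of one frame (values in [0, 255])."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    h = h.astype(np.float64)
    return h / h.sum()

def local_pattern(hists, k, sim_thresh=0.9):
    """Indices of frames temporally adjacent to frame k that stay visually
    similar to it, grown outward in both directions until similarity drops."""
    members = [k]
    for step in (-1, 1):
        j = k + step
        while 0 <= j < len(hists) and np.minimum(hists[k], hists[j]).sum() >= sim_thresh:
            members.append(j)
            j += step
    return sorted(members)

def pattern_model(hists, members):
    """Model a local pattern as the mean histogram of its member frames."""
    return np.mean([hists[m] for m in members], axis=0)

def match_score(model_a, model_b):
    """Histogram intersection between two local-pattern models, in [0, 1]."""
    return float(np.minimum(model_a, model_b).sum())
```

Matching models rather than single frames averages out per-frame noise: two query frames from the same slowly changing scene produce nearly identical pattern models even when the individual frames differ.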
