
    Motion features to enhance scene segmentation in active visual attention

    A new computational model for active visual attention is introduced in this paper. The method extracts motion and shape features from video image sequences and integrates these features to segment the input scene. The aim of this paper is to highlight the importance of the motion features present in our algorithms for refining and/or enhancing the scene segmentation produced by the proposed method. These motion parameters are estimated at each pixel of the input image by means of the accumulative computation method, using the so-called permanency memories. The paper shows some examples of how the "motion presence", "modulus of the velocity" and "angle of the velocity" motion features, all obtained from the accumulative computation method, are used to adjust different scene segmentation outputs in this dynamic visual attention method.
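    As a sketch for readers unfamiliar with accumulative computation: one plausible reading of the permanency-memory update is that pixels whose frame-to-frame change exceeds a threshold are recharged to a saturation value, while all other pixels discharge gradually. The parameter names and values below (CHARGE_MAX, CHARGE_MIN, DISCHARGE_STEP, threshold) are illustrative assumptions, not the authors' actual settings.

        import numpy as np

        CHARGE_MAX = 255      # saturation charge of the permanency memory
        CHARGE_MIN = 0        # fully discharged value
        DISCHARGE_STEP = 16   # charge lost per frame without motion

        def update_permanency(memory, frame, prev_frame, threshold=10):
            # "Motion presence": frame-to-frame change above the threshold.
            motion = np.abs(frame.astype(int) - prev_frame.astype(int)) > threshold
            # Recharge moving pixels to saturation; discharge the rest.
            memory = np.where(motion, CHARGE_MAX,
                              np.maximum(memory - DISCHARGE_STEP, CHARGE_MIN))
            return memory, motion

    Iterated over a sequence, this update leaves a decaying charge trail behind every moving pixel; the binary motion map is the "motion presence" feature, and the trail is what the velocity features would later be read from.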

    Accumulative computation method for motion features extraction in active selective visual attention

    A new method for active visual attention is briefly introduced in this paper. The method extracts motion and shape features from image sequences of indefinite length and integrates these features to segment the input scene. The aim of this paper is to highlight the importance of the accumulative computation method for motion feature extraction in the proposed active selective visual attention model. We calculate motion presence and velocity at each pixel of the input image by means of accumulative computation. The paper shows an example of how these motion features are used to enhance scene segmentation in this active visual attention method.
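    Continuing the hedged sketch above, one way the velocity features could be read off the permanency memory: along a moving object's trail the charge drops by one discharge step per frame, so the spatial gradient of the memory encodes speed (a shallower gradient means faster motion) and the gradient direction encodes the angle. This derivation is illustrative, not the authors' exact formulation.

        import numpy as np

        DISCHARGE_STEP = 16   # must match the discharge step used above
        EPS = 1e-6

        def velocity_from_memory(memory):
            # Spatial gradient of the charge trail (rows = y, columns = x).
            gy, gx = np.gradient(memory.astype(float))
            grad_mag = np.hypot(gx, gy)
            # Charge drops DISCHARGE_STEP per frame of age, so the distance
            # covered per frame (speed, in pixels/frame) is step / |gradient|.
            speed = DISCHARGE_STEP / np.maximum(grad_mag, EPS)
            speed[grad_mag < EPS] = 0.0   # flat memory: no trail, no motion
            # "Angle of the velocity": direction of increasing charge.
            angle = np.arctan2(gy, gx)
            return speed, angle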

    Joint Optical Flow and Temporally Consistent Semantic Segmentation

    The importance of and demands on visual scene understanding have been steadily increasing along with the active development of autonomous systems. Consequently, a large amount of research has been dedicated to semantic segmentation and dense motion estimation. In this paper, we propose a method for jointly estimating optical flow and temporally consistent semantic segmentation, which closely connects the two problem domains and lets each leverage the other. Semantic segmentation provides information on the plausible physical motion of its associated pixels, and accurate pixel-level temporal correspondences enhance the accuracy of semantic segmentation in the temporal domain. We demonstrate the benefits of our approach on the KITTI benchmark, where we observe performance gains for both flow and segmentation. We achieve state-of-the-art optical flow results and outperform all published algorithms by a large margin on challenging but crucial dynamic objects. (14 pages; accepted for the CVRSUAD workshop at ECCV 2016.)
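    To illustrate the coupling the abstract describes, here is a hedged schematic of an alternating refinement loop. The callables estimate_flow, estimate_segmentation and warp are stand-ins for a concrete flow estimator, a semantic-segmentation model and a backward-warping routine; they are assumptions for the sketch, not the paper's actual components, which couple the two tasks in a joint formulation.

        from typing import Callable
        import numpy as np

        def joint_flow_and_segmentation(frame_t: np.ndarray,
                                        frame_t1: np.ndarray,
                                        estimate_flow: Callable,
                                        estimate_segmentation: Callable,
                                        warp: Callable,
                                        n_iters: int = 3):
            seg_t = estimate_segmentation(frame_t)    # initial labels, frame t
            flow = estimate_flow(frame_t, frame_t1)   # initial dense flow
            for _ in range(n_iters):
                # Labels constrain flow: pixels of one object (e.g. a car)
                # should move in a coherent, physically plausible way.
                flow = estimate_flow(frame_t, frame_t1, seg_prior=seg_t)
                # Flow constrains labels: warping frame-t labels into frame
                # t+1 keeps the segmentation temporally consistent.
                seg_t1 = estimate_segmentation(
                    frame_t1, temporal_prior=warp(seg_t, flow))
            return flow, seg_t1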

    Interaction between high-level and low-level image analysis for semantic video object extraction

    Published in EURASIP Journal on Advances in Signal Processing; the authors retain copyright under the SpringerOpen open-access license (http://www.springeropen.com/authors/license).