
    Segmentation of Moving Object with Uncovered Background, Temporary Poses and GMOB

    Abstract
    Video must be segmented into objects for content-based processing. A number of video object segmentation algorithms have been proposed, both semiautomatic and automatic. Semiautomatic methods add a burden on users and are unsuitable for some applications. Fully automatic segmentation remains a challenge, although many applications require it. The proposed work aims to identify the gaps present in current segmentation systems and to offer possible solutions to overcome them, so that an accurate and efficient video segmentation system can be developed. The proposed system aims to resolve the issues of uncovered background, temporary poses, and global motion of the background.

    Click Carving: Segmenting Objects in Video with Point Clicks

    We present a novel form of interactive video object segmentation where a few clicks by the user help the system produce a full spatio-temporal segmentation of the object of interest. Whereas conventional interactive pipelines take the user's initialization as a starting point, we show the value in the system taking the lead even in initialization. In particular, for a given video frame, the system precomputes a ranked list of thousands of possible segmentation hypotheses (also referred to as object region proposals) using image and motion cues. Then, the user looks at the top ranked proposals, and clicks on the object boundary to carve away erroneous ones. This process iterates (typically 2-3 times), and each time the system revises the top ranked proposal set, until the user is satisfied with a resulting segmentation mask. Finally, the mask is propagated across the video to produce a spatio-temporal object tube. On three challenging datasets, we provide extensive comparisons with both existing work and simpler alternative methods. In all, the proposed Click Carving approach strikes an excellent balance of accuracy and human effort. It outperforms all similarly fast methods, and is competitive or better than those requiring 2 to 12 times the effort.

    Comment: A preliminary version of the material in this document was filed as University of Texas technical report no. UT AI16-0
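The click-driven re-ranking loop described in this abstract can be sketched in a few lines. The sketch below is an illustrative assumption, not the authors' implementation: proposals are toy binary masks, and a boundary click re-scores each proposal by how close its boundary lies to the clicked point, so the best-matching proposal rises to the top of the ranked list.

```python
# Hedged sketch of a Click Carving-style re-ranking step.
# All names and the distance-based scoring rule are illustrative
# assumptions; the paper uses richer image and motion cues.

def boundary_points(mask):
    """Return (row, col) cells of a binary mask that touch the background."""
    h, w = len(mask), len(mask[0])
    pts = []
    for r in range(h):
        for c in range(w):
            if not mask[r][c]:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(not (0 <= nr < h and 0 <= nc < w and mask[nr][nc])
                   for nr, nc in neighbors):
                pts.append((r, c))
    return pts

def rerank(proposals, click):
    """Order proposals by squared distance from the click to their boundary."""
    def score(mask):
        return min((r - click[0]) ** 2 + (c - click[1]) ** 2
                   for r, c in boundary_points(mask))
    return sorted(proposals, key=score)

# Two toy 4x4 proposals: a 2x2 square at top-left vs. bottom-right.
a = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
b = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
# A click near the top-left boundary promotes proposal `a`.
top = rerank([b, a], click=(0, 0))[0]
```

In the full system this step iterates: each click re-scores the proposal pool, and the revised top proposal is shown to the user until the mask is accepted and propagated through the video.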

    Supervoxel-Consistent Foreground Propagation in Video

    Abstract. A major challenge in video segmentation is that the foreground object may move quickly in the scene while at the same time its appearance and shape evolve over time. While pairwise potentials used in graph-based algorithms help smooth labels between neighboring (super)pixels in space and time, they offer only a myopic view of consistency and can be misled by inter-frame optical flow errors. We propose a higher order supervoxel label consistency potential for semi-supervised foreground segmentation. Given an initial frame with manual annotation for the foreground object, our approach propagates the foreground region through time, leveraging bottom-up supervoxels to guide its estimates towards long-range coherent regions. We validate our approach on three challenging datasets and achieve state-of-the-art results.
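The core intuition of the supervoxel consistency potential can be illustrated with a toy sketch: after per-pixel foreground estimates, pull each pixel's label toward the majority label of its supervoxel, correcting isolated optical-flow errors. The function name and the hard majority-vote rule here are simplifying assumptions; the paper formulates this as a soft higher-order potential in a graph-based energy, not a hard vote.

```python
# Toy illustration of supervoxel label consistency: each supervoxel
# adopts the majority label of its member pixels. A simplifying
# assumption, not the paper's exact higher-order potential.
from collections import Counter

def enforce_supervoxel_consistency(labels, supervoxels):
    """labels[i] in {0, 1}; supervoxels[i] is pixel i's supervoxel id.
    Returns labels where each supervoxel takes its majority label."""
    votes = {}
    for lab, sv in zip(labels, supervoxels):
        votes.setdefault(sv, Counter())[lab] += 1
    majority = {sv: c.most_common(1)[0][0] for sv, c in votes.items()}
    return [majority[sv] for sv in supervoxels]

# Six pixels in two supervoxels; a stray flow error at pixel 2
# is corrected by its supervoxel's majority label.
labels      = [1, 1, 0, 0, 0, 0]
supervoxels = [7, 7, 7, 9, 9, 9]
smoothed = enforce_supervoxel_consistency(labels, supervoxels)
```

Because supervoxels span many frames, this consistency term reaches much further than pairwise neighbor smoothing, which is the long-range coherence the abstract emphasizes.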

    Robust video segment proposals with painless occlusion handling
