
    Per-Clip Video Object Segmentation

    Recently, memory-based approaches have shown promising results on semi-supervised video object segmentation. These methods predict object masks frame by frame with the help of a frequently updated memory of previous masks. Departing from this per-frame inference, we investigate an alternative perspective that treats video object segmentation as clip-wise mask propagation. In this per-clip inference scheme, we update the memory at an interval and simultaneously process the set of consecutive frames (i.e. a clip) between memory updates. The scheme offers two potential benefits: an accuracy gain from clip-level optimization and an efficiency gain from parallel computation over multiple frames. To this end, we propose a new method tailored to per-clip inference. Specifically, we first introduce a clip-wise operation that refines features based on intra-clip correlation. In addition, we employ a progressive matching mechanism for efficient information passing within a clip. With the synergy of the two modules and a newly proposed per-clip training scheme, our network achieves state-of-the-art performance on the YouTube-VOS 2018/2019 validation sets (84.6% and 84.6%) and the DAVIS 2016/2017 validation sets (91.9% and 86.1%). Furthermore, our model offers a flexible speed-accuracy trade-off controlled by the memory update interval.
    Comment: CVPR 2022; code is available at https://github.com/pkyong95/PCVO
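    The per-clip scheduling described in the abstract can be sketched as below. This is a minimal illustration, not the PCVOS implementation: `predict` is a hypothetical stand-in for the model's memory read and mask decoding, and `clip_len` plays the role of the memory update interval.

```python
def predict(frame, memory):
    # Placeholder matcher: the real model performs a memory read plus
    # decoding; here we simply copy the most recent memory mask so the
    # scheduling logic stays runnable on its own.
    return memory[-1][1]

def segment_per_clip(frames, first_mask, clip_len=5):
    """Propagate masks clip by clip, updating the memory once per clip."""
    memory = [(frames[0], first_mask)]  # memory bank of (frame, mask) pairs
    masks = [first_mask]
    for start in range(1, len(frames), clip_len):
        clip = frames[start:start + clip_len]
        # Every frame in the clip is matched against the same memory state,
        # which is what allows the frames to be processed in parallel.
        clip_masks = [predict(f, memory) for f in clip]
        masks.extend(clip_masks)
        # The memory is updated only between clips (interval = clip_len).
        memory.append((clip[-1], clip_masks[-1]))
    return masks
```

    Growing `clip_len` trades accuracy for speed, since fewer memory updates mean less computation but a staler memory, which matches the abstract's speed-accuracy trade-off.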

    Design of networked visual monitoring systems

    We design and implement a networked visual monitoring system for surveillance. Instead of the usual periodic monitoring, the proposed system has an auto-tracking feature that captures the important characteristics of intruders. We integrate two schemes, image segmentation and histogram comparison, to accomplish auto-tracking. The developed image segmentation scheme separates moving objects from the background in real time. Next, each object's centroid and boundary are computed; this information guides the motion of the tracking camera so that it follows intruders and takes a series of shots according to a predetermined pattern. We have also developed a multiple-object tracking scheme, based on comparison of object color histograms, to handle object occlusion and disocclusion. The designed system can track multiple intruders or follow any particular intruder automatically. For efficient transmission and storage, the captured video is compressed in the H.263 format. Queries based on both time and events are provided. Users can access the system from a web browser to view the monitored site or manipulate the tracking camera over the Internet. These features make the system valuable for surveillance.
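    The color-histogram comparison used for re-identifying objects after occlusion can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the bin count and the use of histogram intersection as the similarity measure are assumptions.

```python
import numpy as np

def color_histogram(patch, bins=8):
    # Quantize the RGB pixels of an object patch into a joint 3-D
    # histogram, then normalize it to sum to 1.
    h, _ = np.histogramdd(patch.reshape(-1, 3).astype(float),
                          bins=(bins, bins, bins),
                          range=[(0, 256)] * 3)
    return h.ravel() / h.sum()

def match_object(candidate_hist, known_hists):
    # Histogram intersection: sum of element-wise minima, in [0, 1].
    # The reappearing object is matched to the known object whose stored
    # histogram it intersects the most.
    scores = [float(np.minimum(candidate_hist, h).sum()) for h in known_hists]
    best = int(np.argmax(scores))
    return best, scores[best]
```

    Because the histogram ignores spatial layout, it stays stable under the partial occlusions and pose changes that defeat purely positional tracking, which is presumably why the system relies on it for disocclusion handling.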

    Online Adaptation of Convolutional Neural Networks for Video Object Segmentation

    We tackle the task of semi-supervised video object segmentation, i.e. segmenting the pixels belonging to an object in a video given the ground-truth pixel mask for the first frame. We build on the recently introduced one-shot video object segmentation (OSVOS) approach, which takes a pretrained network and fine-tunes it on the first frame. While achieving impressive performance, at test time OSVOS uses the fine-tuned network in unchanged form and cannot adapt to large changes in object appearance. To overcome this limitation, we propose Online Adaptive Video Object Segmentation (OnAVOS), which updates the network online using training examples selected based on the confidence of the network and the spatial configuration. Additionally, we add a pretraining step based on objectness, which is learned on PASCAL. Our experiments show that both extensions are highly effective and improve the state of the art on DAVIS to an intersection-over-union score of 85.7%.
    Comment: Accepted at BMVC 2017. This version contains minor changes for the camera-ready version.
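    The confidence- and spatially-based example selection can be sketched as below. This is a simplified illustration of the idea rather than OnAVOS itself: the threshold values are assumptions, and the bounding-box distance test is a cheap stand-in for the paper's spatial criterion.

```python
import numpy as np

def select_online_examples(prob_map, prev_mask, pos_thresh=0.97, neg_dist=3):
    """Pick pixels to fine-tune on during inference.

    prob_map:  per-pixel foreground probability from the current prediction
    prev_mask: boolean mask predicted for the previous frame
    """
    # Positives: pixels the network is highly confident are foreground.
    positives = prob_map > pos_thresh
    # Negatives: pixels far from the previous mask's bounding box, i.e.
    # spatially implausible as foreground (illustrative spatial test).
    ys, xs = np.nonzero(prev_mask)
    y0, y1 = ys.min() - neg_dist, ys.max() + neg_dist
    x0, x1 = xs.min() - neg_dist, xs.max() + neg_dist
    yy, xx = np.indices(prob_map.shape)
    negatives = (yy < y0) | (yy > y1) | (xx < x0) | (xx > x1)
    # All remaining pixels are left unlabeled and ignored in the online loss.
    return positives, negatives
```

    Fine-tuning only on such self-selected pixels lets the network follow gradual appearance changes while the unlabeled band between the two sets guards against reinforcing its own uncertain predictions.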