Semi-Supervised Video Salient Object Detection Using Pseudo-Labels
Deep learning-based video salient object detection has recently achieved
great success, with its performance significantly surpassing that of
unsupervised methods. However, existing data-driven approaches rely heavily on
a large quantity of pixel-wise annotated video frames to deliver such promising
results. In this paper, we address the semi-supervised video salient object
detection task using pseudo-labels. Specifically, we present an effective video
saliency detector that consists of a spatial refinement network and a
spatiotemporal module. Based on the same refinement network and motion
information in terms of optical flow, we further propose a novel method for
generating pixel-level pseudo-labels from sparsely annotated frames. By
utilizing the generated pseudo-labels together with a part of manual
annotations, our video saliency detector learns spatial and temporal cues for
both contrast inference and coherence enhancement, thus producing accurate
saliency maps. Experimental results demonstrate that our proposed
semi-supervised method outperforms all state-of-the-art fully supervised
methods across three public benchmarks: VOS, DAVIS, and FBMS.

Comment: ICCV 2019; code is available at https://github.com/Kinpzz/RCRNet-Pytorc
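The pseudo-label generation step, propagating a sparse pixel-level annotation to a neighboring frame via optical flow, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: `propagate_label` is a hypothetical name, and nearest-neighbour backward warping is one simple choice of warping scheme.

```python
import numpy as np

def propagate_label(mask, flow):
    """Warp a binary mask from an annotated frame to a neighboring frame.

    mask: (H, W) binary array for the annotated frame.
    flow: (H, W, 2) backward flow; flow[y, x] = (dx, dy) points from each
          target-frame pixel back to its source location in the annotated
          frame. Nearest-neighbour sampling keeps the label binary.
    """
    H, W = mask.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Round the sampled coordinates and clamp them to the image bounds.
    src_x = np.clip(np.rint(xs + flow[..., 0]), 0, W - 1).astype(int)
    src_y = np.clip(np.rint(ys + flow[..., 1]), 0, H - 1).astype(int)
    return mask[src_y, src_x]
```

With a zero flow field the mask is copied unchanged; a constant flow shifts the label region, mimicking object motion between frames.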
Bidirectional ConvLSTMXNet for Brain Tumor Segmentation of MR Images
In recent years, deep learning-based networks have achieved good performance in brain tumor segmentation of MR images. Among the existing networks, U-Net has been applied successfully. In this paper, we propose a deep learning-based Bidirectional Convolutional LSTM XNet (BConvLSTMXNet) for brain tumor segmentation and use GoogLeNet to classify tumor vs. non-tumor images. Evaluated on the BRATS-2019 dataset, the model achieves tumor/non-tumor classification with Accuracy: 0.91, Precision: 0.95, Recall: 1.00, and F1-Score: 0.92. For brain tumor segmentation it obtains Accuracy: 0.99, Specificity: 0.98, Sensitivity: 0.91, Precision: 0.91, and F1-Score: 0.88.
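The scores reported above are standard confusion-matrix metrics. A minimal sketch of how they are computed from binary predictions and ground truth (not the authors' evaluation code; `seg_metrics` is a hypothetical helper):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Compute accuracy, precision, recall (sensitivity), specificity, and
    F1-score from binary prediction and ground-truth masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    tp = np.sum(pred & gt)    # predicted positive, actually positive
    fp = np.sum(pred & ~gt)   # predicted positive, actually negative
    fn = np.sum(~pred & gt)   # predicted negative, actually positive
    tn = np.sum(~pred & ~gt)  # predicted negative, actually negative
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": prec,
        "recall": rec,  # a.k.a. sensitivity
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
        "f1": 2 * prec * rec / (prec + rec) if prec + rec else 0.0,
    }
```

The same formulas apply whether the masks come from tumor/non-tumor classification or per-pixel segmentation; only the units (images vs. pixels) differ.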
RVOS: end-to-end recurrent network for video object segmentation
Multiple-object video object segmentation is a challenging task, especially in the zero-shot case, where no object mask is given at the initial frame and the model has to find the objects to segment along the sequence. In our work, we propose a Recurrent network for multiple-object Video Object Segmentation (RVOS) that is fully end-to-end trainable. Our model incorporates recurrence on two different domains: (i) the spatial domain, which allows the model to discover the different object instances within a frame, and (ii) the temporal domain, which keeps the segmented objects coherent over time. We train RVOS for zero-shot video object segmentation and are the first to report quantitative results on the DAVIS-2017 and YouTube-VOS benchmarks. Further, we adapt RVOS to one-shot video object segmentation by using the masks obtained in previous time steps as inputs to the recurrent module. Our model reaches results comparable to state-of-the-art techniques on the YouTube-VOS benchmark and outperforms all previous video object segmentation methods that do not use online learning on the DAVIS-2017 benchmark. Moreover, our model achieves faster inference than previous methods, reaching 44 ms/frame on a P100 GPU. This research was supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund (TIN2015-66951-C2-2-R, TIN2015-65316-P & TEC2016-75976-R), the BSC-CNS Severo Ochoa SEV-2015-0493 and LaCaixa-Severo Ochoa International Doctoral Fellowship programs, the 2017 SGR 1414, and the Industrial Doctorates 2017-DI-064 & 2017-DI-028 from the Government of Catalonia.
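The temporal recurrence described above, carrying a hidden state across frames so each prediction depends on the sequence history rather than a single frame, can be illustrated with a deliberately simplified update. This toy exponential-smoothing recurrence is only a schematic stand-in for the ConvLSTM-style module used in such networks; `temporal_recurrence` and `alpha` are illustrative names, not part of RVOS.

```python
import numpy as np

def temporal_recurrence(frames, alpha=0.5):
    """Run a toy recurrent update over a sequence of per-frame feature maps.

    frames: list of (H, W) arrays, one per time step.
    alpha:  how much of the previous hidden state is retained at each step.
    Returns one output map per frame; each output mixes the current frame
    with the accumulated history, which is what enforces temporal coherence.
    """
    h = np.zeros_like(frames[0], dtype=float)  # hidden state, initially empty
    outputs = []
    for f in frames:
        h = alpha * h + (1 - alpha) * f  # recurrent update: history + current
        outputs.append(h.copy())
    return outputs
```

A real ConvLSTM replaces the fixed blend `alpha` with learned convolutional gates, but the structural point is the same: the output at frame t is a function of all frames up to t.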