843 research outputs found

    OVSNet : Towards One-Pass Real-Time Video Object Segmentation

    Full text link
    Video object segmentation aims at accurately segmenting the target object regions across consecutive frames. It is technically challenging to cope with complicated factors (e.g., shape deformations, occlusion, and objects moving out of the frame). Recent approaches have largely addressed them by using back-and-forth re-identification and bi-directional mask propagation. However, these methods are extremely slow and only support offline inference, so in principle they cannot be applied in real time. Motivated by this observation, we propose an efficient detection-based paradigm for video object segmentation. We propose a unified One-Pass Video Segmentation framework (OVS-Net) for modeling spatial-temporal representations in a single pipeline, which seamlessly integrates object detection, object segmentation, and object re-identification. The proposed framework lends itself to one-pass inference that performs video object segmentation effectively and efficiently. Moreover, we propose a mask-guided attention module for modeling multi-scale object boundaries and multi-level feature fusion. Experiments on the challenging DAVIS 2017 benchmark demonstrate the effectiveness of the proposed framework, with performance comparable to the state of the art and a speed of about 11.5 FPS, more than 5 times faster than other state-of-the-art methods, which to our knowledge makes it a pioneering step towards real-time operation. Comment: 10 pages, 6 figures
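    To make the one-pass idea concrete, a minimal control-flow sketch follows. The `detector`, `segmenter`, and `reid_embed` callables and the dot-product scoring are hypothetical stand-ins, not the OVS-Net modules; the sketch only illustrates detecting candidates, picking the one whose re-identification embedding best matches the target, and segmenting it in a single sweep over the frames.

```python
# Illustrative one-pass control flow only; detector, segmenter and reid_embed are
# hypothetical stand-ins, not the OVS-Net modules themselves.
import numpy as np

def one_pass_segmentation(frames, target_embedding, detector, segmenter, reid_embed):
    """Segment the target in each frame with a single forward sweep (no re-processing)."""
    masks = []
    for frame in frames:
        boxes = detector(frame)                      # candidate object boxes
        if not boxes:
            masks.append(None)
            continue
        # Pick the candidate whose re-ID embedding is closest to the target.
        scores = [np.dot(reid_embed(frame, b), target_embedding) for b in boxes]
        best = boxes[int(np.argmax(scores))]
        masks.append(segmenter(frame, best))         # mask for the chosen box
    return masks
```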

    Multi-View 3D Object Detection Network for Autonomous Driving

    Full text link
    This paper aims at high-accuracy 3D object detection in autonomous driving scenarios. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LIDAR point clouds and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's eye view representation of the 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection. In addition, for 2D detection, our approach obtains 10.3% higher AP than the state-of-the-art among the LIDAR-based methods on the hard data. Comment: To appear in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2017
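    A toy PyTorch sketch of the deep-fusion idea follows: region-wise features from several views are averaged at each of a few intermediate stages so that the per-view paths can interact. The layer sizes, number of stages, and the averaging rule are assumptions made for illustration, not the published MV3D design.

```python
# Toy deep-fusion block: per-view transforms interleaved with element-wise averaging,
# so intermediate layers of different view branches can interact.
import torch
import torch.nn as nn

class DeepFusionBlock(nn.Module):
    def __init__(self, channels=256, num_views=3, num_stages=3):
        super().__init__()
        # One small per-view transformation per fusion stage (assumed sizes).
        self.stages = nn.ModuleList([
            nn.ModuleList([nn.Sequential(nn.Linear(channels, channels), nn.ReLU())
                           for _ in range(num_views)])
            for _ in range(num_stages)
        ])

    def forward(self, view_feats):            # list of [N, C] region-wise features
        fused = None
        for stage in self.stages:
            # Transform each view, then let the views interact by averaging.
            view_feats = [f(x) for f, x in zip(stage, view_feats)]
            fused = torch.stack(view_feats, 0).mean(0)
            view_feats = [fused for _ in view_feats]
        return fused
```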

    Image Fusion with Scale Invariant Feature Transform (SIFT) as Image Registration

    Get PDF
    Image fusion is the process of combining two or more images into a single image while preserving the important features of each image. Image fusion is one way to address out-of-focus images produced by non-professional cameras. It can also be used in remote sensing, surveillance, and medical applications. In this study, a new image fusion technique is proposed that uses SIFT (Scale Invariant Feature Transform) for image registration. The fusion procedure matches SIFT image features using RANSAC and then combines the two images with a pixel-averaging rule. The final step compares the image fusion results using QABF, mean pixel intensity, and standard deviation. Experimental results show that the proposed method outperforms conventional fusion techniques, especially for images that undergo translation or rotation.
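    A minimal OpenCV sketch of the described pipeline follows, assuming a single homography relates two grayscale inputs and using Lowe's ratio test to filter matches (details the abstract does not specify); it illustrates SIFT registration followed by pixel averaging and is not the authors' code.

```python
# SIFT registration + RANSAC homography + pixel-averaging fusion (illustrative sketch).
import cv2
import numpy as np

def fuse_with_sift(img_ref, img_mov):
    """Register img_mov to img_ref via SIFT + RANSAC, then fuse by averaging pixels."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img_ref, None)
    k2, d2 = sift.detectAndCompute(img_mov, None)

    # Match descriptors and keep good matches (Lowe's ratio test).
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # Estimate a homography with RANSAC from the matched keypoints.
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the moving image onto the reference and fuse by pixel averaging.
    warped = cv2.warpPerspective(img_mov, H, (img_ref.shape[1], img_ref.shape[0]))
    return ((img_ref.astype(np.float32) + warped.astype(np.float32)) / 2).astype(np.uint8)
```

    Usage would be, e.g., `fuse_with_sift(cv2.imread("ref.png", 0), cv2.imread("mov.png", 0))` for two grayscale images of the same scene.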

    Neural Contourlet Network for Monocular 360 Depth Estimation

    Full text link
    For a monocular 360 image, depth estimation is challenging because the distortion increases along the latitude. To perceive the distortion, existing methods resort to designing deep and complex network architectures. In this paper, we provide a new perspective that constructs an interpretable and sparse representation for a 360 image. Considering the importance of geometric structure in depth estimation, we utilize the contourlet transform to capture an explicit geometric cue in the spectral domain and integrate it with an implicit cue in the spatial domain. Specifically, we propose a neural contourlet network consisting of a convolutional neural network and a contourlet transform branch. In the encoder stage, we design a spatial-spectral fusion module to effectively fuse the two types of cues. Contrary to the encoder, we employ the inverse contourlet transform with learned low-pass subbands and band-pass directional subbands to compose the depth in the decoder. Experiments on three popular panoramic image datasets demonstrate that the proposed approach outperforms state-of-the-art schemes with faster convergence. Code is available at https://github.com/zhijieshen-bjtu/Neural-Contourlet-Network-for-MODE. Comment: IEEE Transactions on Circuits and Systems for Video Technology
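    A minimal PyTorch sketch of a spatial-spectral fusion step follows: CNN (spatial) features and contourlet sub-band (spectral) features are concatenated and mixed with a 1x1 convolution after resolution alignment. The fusion rule and channel sizes are assumptions for illustration, not the paper's exact module.

```python
# Illustrative spatial-spectral fusion: align resolutions, concatenate, mix with 1x1 conv.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialSpectralFusion(nn.Module):
    def __init__(self, spatial_ch, spectral_ch):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(spatial_ch + spectral_ch, spatial_ch, kernel_size=1),
            nn.BatchNorm2d(spatial_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, spatial_feat, spectral_feat):
        # Resize spectral (contourlet sub-band) cues to the spatial feature resolution.
        spectral_feat = F.interpolate(
            spectral_feat, size=spatial_feat.shape[-2:], mode="bilinear",
            align_corners=False)
        return self.mix(torch.cat([spatial_feat, spectral_feat], dim=1))
```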