MANet: Multimodal Attention Network based Point-View Fusion for 3D Shape Recognition

By Yaxin Zhao, Jichao Jiao and Tangkun Zhang

Abstract

3D shape recognition has attracted increasing attention as a task in 3D vision research, and the proliferation of 3D data has encouraged a variety of deep learning methods built on it. Many deep learning models now operate on point-cloud data or multi-view data alone. However, integrating data from these two modalities into a unified 3D shape descriptor can improve recognition accuracy. This paper therefore proposes a fusion network based on a multimodal attention mechanism for 3D shape recognition. To address the limitations of multi-view data, we introduce a soft attention scheme that uses global point-cloud features to filter the multi-view features, enabling an effective fusion of the two. More specifically, we obtain enhanced multi-view features by mining the contribution of each view image to overall shape recognition, and then fuse the point-cloud features with these enhanced multi-view features to obtain a more discriminative 3D shape descriptor. Experiments on the ModelNet40 dataset verify the effectiveness of our method.

Comment: 8 pages, 6 figures
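The soft attention fusion described in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact architecture: the dot-product scoring, the 12-view setting, the feature dimension, and concatenation-based fusion are all placeholders for the general idea of using the global point-cloud feature to weight per-view features before fusing.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def point_view_fusion(point_feat, view_feats):
    """Hypothetical sketch of point-view soft attention fusion.

    point_feat : (d,) global point-cloud feature
    view_feats : (n_views, d) per-view image features
    """
    # score each view by its similarity to the global point-cloud feature
    # (dot-product scoring is an assumption for illustration)
    scores = view_feats @ point_feat            # (n_views,)
    weights = softmax(scores)                   # soft attention over views
    enhanced_view = weights @ view_feats        # (d,) attention-weighted view feature
    # fuse the two modalities into one shape descriptor (concatenation assumed)
    return np.concatenate([point_feat, enhanced_view])

rng = np.random.default_rng(0)
point_feat = rng.standard_normal(128)
view_feats = rng.standard_normal((12, 128))     # e.g. 12 rendered views
descriptor = point_view_fusion(point_feat, view_feats)
print(descriptor.shape)                          # (256,)
```

In this sketch the attention weights sum to one, so views that agree more strongly with the global point-cloud feature contribute more to the enhanced multi-view feature, which is the "filtering" role the abstract assigns to the point-cloud branch.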

Topics: Computer Science - Computer Vision and Pattern Recognition
Year: 2020
OAI identifier: oai:arXiv.org:2002.12573
