Although recent point cloud analysis has achieved impressive progress, the
paradigm of representation learning from a single modality is gradually
reaching its bottleneck. In this work, we take a step towards more
discriminative 3D point cloud representations by fully taking advantage of
images, which inherently contain richer appearance information, e.g., texture,
color, and shading.
Specifically, this paper introduces a simple but effective point cloud
cross-modality training (PointCMT) strategy, which utilizes view images, i.e.,
rendered or projected 2D images of the 3D object, to boost point cloud
analysis. In practice, to effectively acquire auxiliary knowledge from view
images, we develop a teacher-student framework and formulate cross-modal
learning as a knowledge distillation problem. PointCMT eliminates the
distribution discrepancy between modalities through novel feature and
classifier enhancement criteria and effectively avoids potential negative
transfer. Notably, PointCMT improves the point-only
representation without any architecture modification. Extensive experiments
verify significant gains on various datasets with popular backbones: equipped
with PointCMT, PointNet++ and PointMLP achieve state-of-the-art performance on
two benchmarks, i.e., 94.4% and 86.7% accuracy on ModelNet40 and ScanObjectNN,
respectively. Code will be made available at
https://github.com/ZhanHeshen/PointCMT.

Comment: To appear in NeurIPS 2022.
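
To make the training strategy concrete, below is a minimal PyTorch sketch of
the cross-modal teacher-student distillation described above, assuming a
pre-trained image teacher that stays frozen while distilling into the point
cloud student. It is illustrative only: ImageTeacher, PointStudent, and
cross_modal_kd_loss are hypothetical stand-ins rather than the paper's
architectures, and the feature and classifier enhancement criteria are
approximated here by a plain feature-alignment (MSE) term and a softened-logit
KL term; the actual PointCMT criteria are defined in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageTeacher(nn.Module):
    """Hypothetical teacher: embeds multi-view images of a 3D object."""
    def __init__(self, feat_dim=256, num_classes=40):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, views):                      # views: (B, V, 3, H, W)
        B, V = views.shape[:2]
        f = self.backbone(views.flatten(0, 1))     # (B*V, feat_dim)
        f = f.view(B, V, -1).mean(dim=1)           # average over the V views
        return f, self.classifier(f)

class PointStudent(nn.Module):
    """Hypothetical student: a PointNet-like encoder over (B, N, 3) points."""
    def __init__(self, feat_dim=256, num_classes=40):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, feat_dim))
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, pts):                        # pts: (B, N, 3)
        f = self.mlp(pts).max(dim=1).values        # global max-pooling
        return f, self.classifier(f)

def cross_modal_kd_loss(s_feat, s_logit, t_feat, t_logit, label,
                        T=4.0, w_feat=1.0, w_logit=1.0):
    """Task loss + feature alignment + softened-logit distillation."""
    ce = F.cross_entropy(s_logit, label)                  # task loss
    feat = F.mse_loss(s_feat, t_feat)                     # align features
    kd = F.kl_div(F.log_softmax(s_logit / T, dim=1),      # match softened
                  F.softmax(t_logit / T, dim=1),          # teacher logits
                  reduction="batchmean") * T * T
    return ce + w_feat * feat + w_logit * kd

# One illustrative training step on random data.
teacher, student = ImageTeacher(), PointStudent()
views = torch.randn(8, 4, 3, 64, 64)    # 4 rendered views per object
pts = torch.randn(8, 1024, 3)           # 1024 sampled points per object
label = torch.randint(0, 40, (8,))      # e.g., 40 ModelNet40 classes
with torch.no_grad():                   # teacher assumed pre-trained, frozen
    t_feat, t_logit = teacher(views)
s_feat, s_logit = student(pts)
loss = cross_modal_kd_loss(s_feat, s_logit, t_feat, t_logit, label)
loss.backward()                         # gradients flow only to the student

Running the teacher under no_grad confines gradients to the point cloud
student, so the image branch serves purely as an auxiliary source of knowledge
during training and can be discarded at inference time, consistent with the
point-only deployment described in the abstract.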