The goal of 3D mesh watermarking is to imperceptibly embed a message in a 3D
mesh such that it withstands various attacks and can be reconstructed
accurately from the watermarked mesh. Traditional methods offer limited
robustness against such attacks. Recent DNN-based methods either introduce excessive distortions or
fail to embed the watermark without the help of texture information. However,
embedding the watermark in textures is insecure because replacing the texture
image can completely remove the watermark. In this paper, we propose WM-NET,
a robust deep 3D mesh watermarking network that leverages attention-based
convolutions to embed binary messages in vertex distributions without
texture assistance.
texture assistance. Furthermore, our WM-NET exploits the property that
simplified meshes inherit similar relations from the original ones, where the
relation is the offset vector directed from one vertex to its neighbor. By
doing so, our method can be trained on simplified meshes (limited data) but
remains effective on large-sized meshes (size adaptable) and unseen categories
of meshes (geometry adaptable). Extensive experiments demonstrate that our
method introduces 50% less distortion and achieves 10% higher bit accuracy
than previous work. Our WM-NET is robust against various mesh attacks, e.g.,
Gaussian noise, rotation, translation, scaling, and cropping.
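
The "relation" exploited above, the offset vector from a vertex to each of its
neighbors, can be illustrated with a minimal sketch. This is an assumption-laden
toy example (the vertex coordinates and edge list are invented for
illustration), not the authors' implementation:

```python
import numpy as np

# Toy mesh: 4 vertices (hypothetical coordinates) and an edge list of
# (vertex, neighbor) index pairs.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
edges = np.array([[0, 1], [0, 2], [0, 3]])

# Relation: the offset vector directed from a vertex to its neighbor.
offsets = vertices[edges[:, 1]] - vertices[edges[:, 0]]
print(offsets)
```

Because a simplified mesh keeps vertices in roughly the same spatial
arrangement as the original, these offset vectors remain similar across
resolutions, which is what allows training on simplified meshes to transfer to
large ones.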