36 research outputs found

    SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning

    Recent years have witnessed significant success in Self-Supervised Learning (SSL), which facilitates various downstream tasks. However, attackers may steal such SSL models and commercialize them for profit, making it crucial to protect their Intellectual Property (IP). Most existing IP protection solutions are designed for supervised learning models and cannot be applied directly, since they require the model's downstream tasks and target labels to be known and available during watermark embedding, which is not always possible in the domain of SSL. To address this problem, especially when downstream tasks are diverse and unknown during watermark embedding, we propose a novel black-box watermarking solution, named SSL-WM, for protecting the ownership of SSL models. SSL-WM maps watermarked inputs through the watermarked encoder into an invariant representation space, which causes any downstream classifier to produce expected behavior and thus allows the embedded watermark to be detected. We evaluate SSL-WM on numerous tasks, spanning Computer Vision (CV) and Natural Language Processing (NLP), using different SSL models, including contrastive-based and generative-based ones. Experimental results demonstrate that SSL-WM can effectively verify the ownership of stolen SSL models across various downstream tasks. Furthermore, SSL-WM is robust against model fine-tuning and pruning attacks. Lastly, SSL-WM can also evade the evaluated watermark detection approaches, demonstrating its promise for protecting the IP of SSL models.
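
    The abstract's core mechanism, mapping trigger inputs into an invariant (tightly clustered) region of representation space so that any downstream classifier behaves predictably on them, suggests a simple black-box ownership test. The following is a minimal sketch of such a test, not the authors' implementation; `verify_watermark`, the `encode` callable, and the `margin` threshold are all hypothetical stand-ins.

```python
import numpy as np

def verify_watermark(encode, trigger_inputs, clean_inputs, margin=0.2):
    """Hypothetical SSL-WM-style check: if the suspect encoder maps
    watermarked (trigger) inputs into an invariant region of representation
    space, their embeddings should cluster far more tightly than those of
    ordinary inputs."""
    def mean_pairwise_cosine(xs):
        z = np.stack([encode(x) for x in xs])          # (n, d) embeddings
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        sims = z @ z.T                                 # cosine similarities
        n = len(xs)
        return (sims.sum() - n) / (n * (n - 1))        # mean off-diagonal

    # Claim ownership only if triggers cluster clearly tighter than clean data.
    gap = mean_pairwise_cosine(trigger_inputs) - mean_pairwise_cosine(clean_inputs)
    return gap > margin
```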

    View-dependent Textured Splatting

    Abstract: We present a novel approach to render low-resolution point clouds with multiple high-resolution textures, the type of data typical of passive vision systems. The low-precision, noisy, and sometimes incomplete nature of such data sets is not suitable for existing point-based rendering techniques, which are designed to work with high-precision, high-density point clouds. Our new algorithm, View-dependent Textured Splatting (VDTS), combines traditional splatting with a view-dependent texturing strategy to reduce rendering artifacts caused by imprecision or noise in the input data. VDTS requires no pre-processing of the input data, addresses texture aliasing, and, most importantly, resolves texture visibility on the fly. The combination of these characteristics makes VDTS well suited for interactive rendering of dynamic scenes. Towards this end, we present a real-time view acquisition and rendering system to demonstrate the effectiveness of VDTS. In addition, we show that VDTS can produce high-quality rendering when the texture images are augmented with per-pixel depth. In this scenario, VDTS is a reasonable alternative for interactive rendering of large CG models.
    Keywords: Point rendering · Picture/Image generation · Multi-Texture · Real-time
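
    To make the view-dependent texturing idea concrete, here is a minimal sketch of one plausible weighting scheme: blend the candidate textures for a splat in favor of the source cameras whose viewing directions best agree with the current view. This is an illustrative simplification under assumed names (`view_dependent_weights`, `shade_splat`, the `power` falloff), not the paper's algorithm, which additionally handles texture visibility and aliasing on the fly.

```python
import numpy as np

def view_dependent_weights(view_dir, cam_dirs, power=8.0):
    """Weight each source camera by how well its viewing direction aligns
    with the current view direction, so a splat is shaded mostly from the
    most head-on photograph."""
    v = view_dir / np.linalg.norm(view_dir)
    c = cam_dirs / np.linalg.norm(cam_dirs, axis=1, keepdims=True)
    cos = np.clip(c @ v, 0.0, 1.0)   # back-facing cameras get zero weight
    w = cos ** power                 # sharpen the preference for aligned views
    total = w.sum()
    return w / total if total > 0 else w

def shade_splat(view_dir, cam_dirs, texel_colors):
    """Blend the per-camera texel colors sampled for one splat."""
    w = view_dependent_weights(view_dir, np.asarray(cam_dirs, dtype=float))
    return (w[:, None] * np.asarray(texel_colors, dtype=float)).sum(axis=0)
```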