10 Security and Privacy Problems in Self-Supervised Learning
Self-supervised learning has achieved revolutionary progress in the past
several years and is commonly believed to be a promising approach for
general-purpose AI. In particular, self-supervised learning aims to pre-train
an encoder using a large amount of unlabeled data. The pre-trained encoder is
like an "operating system" of the AI ecosystem. Specifically, the encoder can
be used as a feature extractor for many downstream tasks with little or no
labeled training data. Existing studies on self-supervised learning have mainly
focused on pre-training better encoders to improve performance on downstream
tasks in non-adversarial settings, leaving their security and privacy in
adversarial settings largely unexplored. A security or privacy issue in a
pre-trained encoder thus becomes a single point of failure for the AI ecosystem. In
this book chapter, we discuss 10 basic security and privacy problems for the
pre-trained encoders in self-supervised learning, including six confidentiality
problems, three integrity problems, and one availability problem. For each
problem, we discuss potential opportunities and challenges. We hope our book
chapter will inspire future research on the security and privacy of
self-supervised learning. Comment: A book chapter.
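The "feature extractor with little or no labeled data" workflow the abstract describes can be sketched as follows. This is a minimal illustration, not the chapter's method: the frozen random-projection "encoder" stands in for a real self-supervised model (e.g. one trained with contrastive learning), and the tiny labeled set and logistic-regression head are assumptions chosen to keep the example self-contained.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pre-trained encoder: a frozen random projection + ReLU.
# In practice this would be a self-supervised model whose weights were
# learned from a large amount of unlabeled data and are kept fixed.
W = rng.normal(size=(784, 64))

def encode(x):
    """Map raw inputs to feature vectors with the frozen encoder."""
    return np.maximum(x @ W, 0.0)

# Downstream task: only a handful of labeled examples are available.
x_train = rng.normal(size=(20, 784))
y_train = np.array([0, 1] * 10)  # illustrative binary labels

# Extract features with the frozen encoder, then fit a small head.
clf = LogisticRegression(max_iter=1000).fit(encode(x_train), y_train)
preds = clf.predict(encode(x_train))
```

Because the encoder is shared across many such downstream heads, a compromised encoder would affect every task built on top of it, which is the "single point of failure" concern the chapter raises.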
SATR: Zero-Shot Semantic Segmentation of 3D Shapes
We explore the task of zero-shot semantic segmentation of 3D shapes by using
large-scale off-the-shelf 2D image recognition models. Surprisingly, we find
that modern zero-shot 2D object detectors are better suited for this task than
contemporary text/image similarity predictors or even zero-shot 2D segmentation
networks. Our key finding is that it is possible to extract accurate 3D
segmentation maps from multi-view bounding box predictions by using the
topological properties of the underlying surface. For this, we develop the
Segmentation Assignment with Topological Reweighting (SATR) algorithm and
evaluate it on two challenging benchmarks: FAUST and ShapeNetPart. On these
datasets, SATR achieves state-of-the-art performance and outperforms prior work
by at least 22% on average in terms of mIoU. Our source code and data will be
publicly released. Project webpage: https://samir55.github.io/SATR/
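The core aggregation idea, turning per-view 2D bounding-box predictions into a per-face 3D label map, can be sketched as below. This is a simplified illustration under stated assumptions, not SATR itself: the vote tensor is synthetic (in SATR it would come from projecting the mesh into each view and running a zero-shot 2D detector), and the topological reweighting step that gives the algorithm its name is omitted.

```python
import numpy as np

# Hypothetical setup: a mesh with 6 faces, 3 rendered views, 2 part labels.
num_views, num_faces, num_labels = 3, 6, 2
rng = np.random.default_rng(1)

# votes[v, f, l] = 1 if, in view v, face f projects inside a 2D bounding
# box predicted for label l. In SATR these votes come from an off-the-shelf
# zero-shot 2D object detector applied to each rendered view.
votes = rng.integers(0, 2, size=(num_views, num_faces, num_labels))

# Aggregate votes across views by simple summation and take the winning
# label per face. SATR additionally reweights votes using topological
# properties of the surface (e.g. connectivity of faces inside a box),
# which is what lets it recover accurate segmentation maps from coarse
# bounding boxes; that step is not shown here.
face_scores = votes.sum(axis=0)            # shape (num_faces, num_labels)
face_labels = face_scores.argmax(axis=1)   # one label per face
```

The key design point from the abstract is that bounding boxes, though coarser than 2D segmentation masks, become sufficient once votes are filtered through the surface's topology.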