Practical Cross-system Shilling Attacks with Limited Access to Data
In shilling attacks, an adversarial party injects a few fake user profiles
into a Recommender System (RS) so that the target item can be promoted or
demoted. Although much effort has been devoted to developing shilling attack
methods, we find that existing approaches are still far from practical. In this
paper, we analyze the properties a practical shilling attack method should have
and propose a new concept of Cross-system Attack. With the idea of Cross-system
Attack, we design a Practical Cross-system Shilling Attack (PC-Attack)
framework that requires little information about the victim RS model and the
target RS data for conducting attacks. PC-Attack is trained to capture graph
topology knowledge from public RS data in a self-supervised manner. Then, it is
fine-tuned on a small, easily accessible portion of the target data to
construct fake profiles. Extensive experiments demonstrate the
superiority of PC-Attack over state-of-the-art baselines. Our implementation of
PC-Attack is available at https://github.com/KDEGroup/PC-Attack.
Comment: Accepted by AAAI 202
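As a rough illustration of the pipeline the abstract describes (pretrain on public data, fine-tune on a small target sample, build fake profiles), here is a toy sketch. It is not the paper's method: it substitutes a co-occurrence factorization for the self-supervised graph-topology pretraining, and all names, sizes, and update rules below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 50, 8

# Stand-in for self-supervised pretraining on *public* RS data: learn item
# embeddings from an item co-occurrence matrix (the paper pretrains a graph
# encoder; this factorization is only an illustrative substitute).
public = (rng.random((200, n_items)) < 0.1).astype(float)
co = public.T @ public
U, S, _ = np.linalg.svd(co, full_matrices=False)
item_emb = U[:, :dim] * np.sqrt(S[:dim])

# "Fine-tune" on a small, easy-to-access portion of the target system's
# data by nudging embeddings toward the target co-occurrence statistics.
target = (rng.random((20, n_items)) < 0.1).astype(float)
co_t = target.T @ target
for _ in range(50):
    grad = (item_emb @ item_emb.T - co_t) @ item_emb
    item_emb -= 1e-3 * grad / n_items

# Build a fake profile: rate the target item plus its nearest neighbours,
# so the injected user looks like a plausible fan of the target item.
target_item = 3
sims = item_emb @ item_emb[target_item]
order = np.argsort(-sims)
fillers = [i for i in order if i != target_item][:5]   # top-5 filler items
fake_profile = np.zeros(n_items)
fake_profile[[target_item, *fillers]] = 1.0
```

The point of the sketch is the division of labour: transferable structure comes from public data, and only a small target sample is needed to adapt it.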
Link-aware semi-supervised hypergraph
Hypergraph learning has been widely applied to various learning tasks. To ensure learning accuracy, it is essential to construct an informative hypergraph structure that effectively modulates data correlations. However, existing hypergraph construction methods essentially resort to an unsupervised learning paradigm, which ignores supervisory information, such as pairwise links/non-links. In this article, to exploit the supervisory information, we propose a novel link-aware hypergraph learning model, which modulates high-order correlations of data samples in a semi-supervised manner. To construct a hypergraph, a coefficient matrix of the entire dataset is first calculated by solving a linear regression problem. Then, pairwise link constraints are exploited and propagated to the unconstrained samples, upon which the coefficient matrix is adjusted accordingly. Finally, the adjusted coefficients are used to generate a set of hyperedges, as well as to calculate the corresponding weights. We have validated the proposed link-aware semi-supervised hypergraph model on the problem of image clustering. Superior performance over the state-of-the-art methods demonstrates the effectiveness of the proposed hypergraph model.
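The three construction steps can be sketched in a few lines of numpy, under assumed details: ridge-regularized self-representation for the coefficient matrix, a hand-written adjustment rule in place of the paper's constraint-propagation scheme, and top-k coefficients per column as hyperedge members.

```python
import numpy as np

# Toy data: n samples, d features (rows are samples).
rng = np.random.default_rng(0)
n, d = 12, 5
X = rng.normal(size=(n, d))

# Step 1: self-representation coefficients via ridge-regularized linear
# regression -- each sample is approximated as a combination of all samples:
# min_Z ||X - ZX||^2 + lam ||Z||^2  =>  Z = (XX^T + lam I)^{-1} XX^T.
lam = 0.1
G = X @ X.T
Z = np.linalg.solve(G + lam * np.eye(n), G)   # (n, n) coefficient matrix
np.fill_diagonal(Z, 0.0)                      # no self-representation

# Step 2: adjust the coefficients with pairwise link constraints
# (hypothetical rule: boost linked pairs, zero out non-linked pairs;
# the paper additionally propagates constraints to unconstrained samples).
must_link = [(0, 1)]
cannot_link = [(0, 2)]
for i, j in must_link:
    Z[i, j] = Z[j, i] = max(Z[i, j], Z[j, i], 1.0)
for i, j in cannot_link:
    Z[i, j] = Z[j, i] = 0.0

# Step 3: each column's largest coefficients define one hyperedge; the
# hyperedge weight is the total coefficient mass of its members.
k = 3
hyperedges, weights = [], []
for j in range(n):
    members = np.argsort(-np.abs(Z[:, j]))[:k]
    hyperedges.append({j, *members.tolist()})
    weights.append(float(np.abs(Z[members, j]).sum()))
```

Must-linked samples end up sharing hyperedges while cannot-linked pairs are kept apart, which is exactly the supervisory signal the unsupervised constructions discard.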
Cycle Encoding of a StyleGAN Encoder for Improved Reconstruction and Editability
GAN inversion aims to invert an input image into the latent space of a
pre-trained GAN. Despite the recent advances in GAN inversion, there remain
challenges to mitigate the tradeoff between distortion and editability, i.e.
reconstructing the input image accurately and editing the inverted image with a
small visual quality drop. The recently proposed pivotal tuning model makes
significant progress towards reconstruction and editability, by using a
two-step approach that first inverts the input image into a latent code, called
pivot code, and then alters the generator so that the pivot code can be
accurately mapped back to the input image. Here, we show that both reconstruction
and editability can be improved by a proper design of the pivot code. We
present a simple yet effective method, named cycle encoding, for a high-quality
pivot code. The key idea of our method is to progressively train an encoder in
varying spaces according to a cycle scheme: W->W+->W. This training methodology
preserves the properties of both W and W+ spaces, i.e. high editability of W
and low distortion of W+. To further decrease the distortion, we also propose
to refine the pivot code with an optimization-based method, where a
regularization term is introduced to reduce the degradation in editability.
Qualitative and quantitative comparisons to several state-of-the-art methods
demonstrate the superiority of our approach.
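The W->W+->W cycle can be illustrated on a toy linear "generator". Everything below is a hypothetical stand-in (a frozen linear map in place of StyleGAN, linear encoder heads, hand-derived gradients); it only shows the scheduling idea of training one encoder through varying spaces, with each stage inheriting the previous stage's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D, L = 8, 16, 4                         # latent dim, image dim, layers
A = rng.normal(size=(D, d)) / np.sqrt(d)   # frozen toy linear "generator"

# Images to invert, generated from ground-truth latents (so they lie in
# the generator's range, like GAN samples do).
X = (A @ rng.normal(size=(d, 64))).T       # (64, D)

def loss(B):
    # Reconstruction error of the single-code (W-space) encoder head B.
    return float(np.mean((X @ B.T @ A.T - X) ** 2))

def train_W(B, steps=500, lr=0.01):
    # W stage: one shared code w = Bx per image; plain gradient descent.
    for _ in range(steps):
        R = X @ B.T @ A.T - X                       # residuals G(E(x)) - x
        B -= lr * (A.T @ R.T @ X) / len(X)
    return B

def train_Wplus(Bs, steps=500, lr=0.01):
    # W+ stage: one head per layer, codes (B_1 x, ..., B_L x); the toy
    # generator averages the per-layer syntheses.
    for _ in range(steps):
        R = np.mean([X @ B.T @ A.T for B in Bs], axis=0) - X
        for B in Bs:
            B -= lr * (A.T @ R.T @ X) / (L * len(X))
    return Bs

# Cycle scheme W -> W+ -> W: each stage starts from the previous weights,
# so the final W-space encoder keeps traces of the low-distortion W+ fit.
B = train_W(np.zeros((d, D)))                       # stage 1: W
Bs = train_Wplus([B.copy() for _ in range(L)])      # stage 2: W+
B = train_W(np.mean(Bs, axis=0))                    # stage 3: back to W
```

Only the schedule carries over to the real method; the actual encoder, generator, and losses are those of StyleGAN inversion, not these linear stand-ins.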
Knowledge Condensation Distillation
Knowledge Distillation (KD) transfers the knowledge from a high-capacity
teacher network to strengthen a smaller student. Existing methods focus on
excavating the knowledge hints and transferring the whole knowledge to the
student. However, this introduces knowledge redundancy, since the knowledge holds
different value for the student at different learning stages. In this paper, we
propose Knowledge Condensation Distillation (KCD). Specifically, the knowledge
value on each sample is dynamically estimated, based on which an
Expectation-Maximization (EM) framework is forged to iteratively condense a
compact knowledge set from the teacher to guide the student learning. Our
approach is easy to build on top of off-the-shelf KD methods, with no extra
training parameters and negligible computation overhead. It thus offers a new
perspective on KD, in which a student that actively identifies the teacher's
knowledge in line with its own aptitude can learn more effectively and
efficiently. Experiments on standard benchmarks show that the proposed KCD
consistently boosts the performance of the student model with even higher
distillation efficiency. Code is available at https://github.com/dzy3/KCD.
Comment: ECCV202
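The EM-style condensation loop might be sketched as follows. The knowledge-value estimate (per-sample teacher-student KL divergence) and the toy "distillation" update are hypothetical stand-ins for the paper's actual formulation; the sketch only shows the alternation between estimating value (E-step) and condensing the set that guides the student (M-step).

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 200, 10

# Stand-in logits for a fixed teacher and an evolving student.
teacher_logits = rng.normal(size=(n_samples, n_classes))
student_logits = teacher_logits + rng.normal(scale=2.0,
                                             size=(n_samples, n_classes))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def knowledge_value(t_logits, s_logits):
    # E-step (hypothetical proxy): KL(teacher || student) per sample --
    # high divergence means the teacher's hint still carries value.
    p, q = softmax(t_logits), softmax(s_logits)
    return (p * (np.log(p) - np.log(q))).sum(axis=1)

keep_ratio = 0.5
kl_before = knowledge_value(teacher_logits, student_logits).mean()
for step in range(3):
    value = knowledge_value(teacher_logits, student_logits)     # E-step
    keep = np.argsort(-value)[: int(keep_ratio * n_samples)]    # M-step
    # Distill only on the condensed set (toy update: move student logits
    # halfway toward the teacher on the kept samples).
    student_logits[keep] += 0.5 * (teacher_logits[keep] - student_logits[keep])
kl_after = knowledge_value(teacher_logits, student_logits).mean()
```

Because the kept set is re-estimated every round, samples the student has already absorbed drop out of the condensed set, which is the redundancy-removal effect the abstract describes.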
Privacy-Preserving Face Recognition with Learnable Privacy Budgets in Frequency Domain
Face recognition technology has been used in many fields due to its high
recognition accuracy, including the face unlocking of mobile devices, community
access control systems, and city surveillance. As the current high accuracy is
guaranteed by very deep network structures, facial images often need to be
transmitted to third-party servers with high computational power for inference.
However, facial images visually reveal the user's identity information. In this
process, both untrusted service providers and malicious users can significantly
increase the risk of a personal privacy breach. Current privacy-preserving
approaches to face recognition are often accompanied by many side effects, such
as a significant increase in inference time or a noticeable decrease in
recognition accuracy. This paper proposes a privacy-preserving face recognition
method using differential privacy in the frequency domain. Due to the
utilization of differential privacy, it offers a guarantee of privacy in
theory. Meanwhile, the loss of accuracy is very slight. This method first
converts the original image to the frequency domain and removes the direct
component termed DC. Then a privacy budget allocation method can be learned
based on the loss of the back-end face recognition network within the
differential privacy framework. Finally, it adds the corresponding noise to the
frequency-domain features. Extensive experiments show that our method performs
very well on several classical face recognition test sets.
Comment: ECCV 2022; Code is available at
https://github.com/Tencent/TFace/tree/master/recognition/tasks/dctd
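The three steps (frequency transform, DC removal, budget-scaled noise) can be sketched as follows. The budget map `eps`, the assumed sensitivity, and the function names are illustrative only; in the paper the budget allocation is learned from the recognition loss rather than fixed by hand.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def privatize(img, eps_per_coeff, rng):
    """Hypothetical sketch: 2-D DCT, drop the DC term, then add Laplace
    noise whose scale follows a per-coefficient privacy budget."""
    n = img.shape[0]
    C = dct_matrix(n)
    F = C @ img @ C.T                 # 2-D DCT (frequency-domain features)
    F[0, 0] = 0.0                     # remove the direct component (DC)
    sensitivity = 2.0                 # assumed L1 sensitivity per coefficient
    noise = rng.laplace(scale=sensitivity / eps_per_coeff, size=F.shape)
    return F + noise

rng = np.random.default_rng(0)
img = rng.uniform(size=(8, 8))
# A learnable budget would be trained against the recognition loss; here we
# simply hand larger budgets (hence less noise) to the low frequencies.
eps = np.fromfunction(lambda u, v: 8.0 / (1.0 + u + v), (8, 8))
feat = privatize(img, eps, rng)
```

Since each coefficient is released through the Laplace mechanism, the per-coefficient guarantee follows from standard differential privacy; learning where to spend the budget is what keeps the accuracy loss slight.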