Towards Privacy-Preserving and Verifiable Federated Matrix Factorization
Recent years have witnessed the rapid growth of federated learning (FL), an
emerging privacy-aware machine learning paradigm that allows collaborative
learning over isolated datasets distributed across multiple participants. The
salient feature of FL is that the participants can keep their private datasets
local and only share model updates. Very recently, some research efforts have
been initiated to explore the applicability of FL for matrix factorization
(MF), a prevalent method used in modern recommendation systems and services. It
has been shown that sharing gradient updates in federated MF risks revealing
users' personal ratings, creating a demand for protecting the shared gradients.
Prior art is limited in that it either incurs notable accuracy loss or relies
on heavy cryptosystems under a weak threat model. In this paper, we propose
VPFedMF, a new design aimed at privacy-preserving and verifiable federated MF.
VPFedMF guarantees the confidentiality of individual gradient updates in
federated MF through lightweight secure aggregation. Moreover, VPFedMF newly
supports correctness verification of the aggregation results produced by the
coordinating server. Experiments on a real-world movie rating dataset
demonstrate the practical performance of VPFedMF in terms of computation,
communication, and accuracy.
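The lightweight secure-aggregation idea can be illustrated with a pairwise-masking sketch (a generic construction, not VPFedMF's actual protocol; all names below are illustrative): each pair of clients derives a shared mask from a common seed, one adds it and the other subtracts it, so the masks cancel in the server's sum and only the aggregate gradient is revealed.

```python
import numpy as np

def pairwise_mask(seed: int, shape) -> np.ndarray:
    # Both clients in a pair derive the same mask from a shared seed.
    return np.random.default_rng(seed).normal(size=shape)

def masked_update(client_id, grad, peers, seeds):
    # Add the mask for higher-id peers, subtract it for lower-id peers,
    # so every mask appears once with + and once with - across all clients.
    masked = grad.copy()
    for peer in peers:
        if peer == client_id:
            continue
        mask = pairwise_mask(seeds[frozenset((client_id, peer))], grad.shape)
        masked += mask if client_id < peer else -mask
    return masked

# Toy federated MF round: 3 clients, gradient for a 4x2 item-factor block.
rng = np.random.default_rng(0)
clients = [0, 1, 2]
grads = {c: rng.normal(size=(4, 2)) for c in clients}
seeds = {frozenset((i, j)): 100 + 10 * i + j
         for i in clients for j in clients if i < j}

# The server only ever sees masked updates; their sum equals the true sum.
agg = sum(masked_update(c, grads[c], clients, seeds) for c in clients)
total_true = sum(grads.values())
assert np.allclose(agg, total_true)
```

Each individual masked update looks random to the server, but the aggregate is exact; VPFedMF additionally makes that aggregate verifiable.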
Regional Chemotherapy and Brachytherapy for Malignant Glioma – Clinical Experience and Serial Experiments
TransCAB: Transferable Clean-Annotation Backdoor to Object Detection with Natural Trigger in Real-World
Object detection is the foundation of various critical computer-vision tasks
such as segmentation, object tracking, and event detection. To train an object
detector with satisfactory accuracy, a large amount of data is required.
However, due to the intensive labor involved in annotating large datasets, such
data curation is often outsourced to third parties or relies on volunteers.
This work reveals severe vulnerabilities in such a data curation pipeline. We
propose MACAB, which crafts clean-annotated images to
stealthily implant the backdoor into the object detectors trained on them even
when the data curator can manually audit the images. We observe that the
misclassification and cloaking backdoor effects are both robustly achieved in
the wild when the backdoor is activated by inconspicuous, natural physical
triggers. Backdooring non-classification object detection with
clean-annotation is challenging compared to backdooring existing image
classification tasks with clean-label, owing to the complexity of having
multiple objects within each frame, including victim and non-victim objects.
The efficacy of MACAB is ensured by constructively (i) abusing the
image-scaling function used by the deep learning framework, (ii) incorporating
the proposed adversarial clean-image replica technique, and (iii) applying
poison-data selection criteria under a constrained attack budget. Extensive
experiments demonstrate that MACAB exhibits more than 90% attack success rate
under various real-world scenes. This includes both the cloaking and
misclassification backdoor effects, even when restricted to a small attack
budget. The poisoned samples cannot be effectively identified by
state-of-the-art detection techniques. A comprehensive video demo is at
https://youtu.be/MA7L_LpXkp4, based on a poison rate of 0.14% for the YOLOv4
cloaking backdoor and the Faster R-CNN misclassification backdoor.
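The image-scaling abuse in point (i) can be sketched with a toy example (a generic scaling-attack illustration, not MACAB's actual construction; all function names are illustrative): nearest-neighbor downscaling keeps only a sparse grid of source pixels, so a payload written into exactly those pixels fully controls the downscaled image while changing only a small fraction of the full-resolution one.

```python
import numpy as np

def nn_indices(src: int, dst: int) -> np.ndarray:
    # Source rows/cols sampled by a simple nearest-neighbor resize.
    return ((np.arange(dst) + 0.5) * src / dst).astype(int)

def embed(cover: np.ndarray, payload: np.ndarray) -> np.ndarray:
    # Overwrite only the sampled pixels; the rest of the cover is untouched.
    out = cover.copy()
    idx = nn_indices(cover.shape[0], payload.shape[0])
    jdx = nn_indices(cover.shape[1], payload.shape[1])
    out[np.ix_(idx, jdx)] = payload
    return out

def nn_downscale(img: np.ndarray, h: int, w: int) -> np.ndarray:
    idx = nn_indices(img.shape[0], h)
    jdx = nn_indices(img.shape[1], w)
    return img[np.ix_(idx, jdx)]

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # benign image
payload = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)   # hidden content

crafted = embed(cover, payload)
changed = np.mean(crafted != cover)       # at most 32*32 of 256*256 pixels
recovered = nn_downscale(crafted, 32, 32)
assert np.array_equal(recovered, payload)  # downscaling reveals the payload
```

A human auditor viewing the full-resolution image sees an almost unmodified benign picture, while the training pipeline, which resizes inputs, sees the attacker's content; real libraries' resize kernels differ in their sampling grids, which a practical attack must match.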
RBNN: Memory-Efficient Reconfigurable Deep Binary Neural Network with IP Protection for Internet of Things
Though deep neural network models exhibit outstanding performance for various
applications, their large model size and extensive floating-point operations
render deployment on mobile computing platforms, and in particular on Internet
of Things (IoT) devices, a major challenge. One appealing solution is model
quantization, which reduces the model size and uses integer operations commonly
supported by microcontrollers. To this end, a 1-bit quantized DNN model, or
deep binary neural network (BNN), maximizes memory efficiency, as each
parameter occupies only one bit. In this paper, we propose a
reconfigurable BNN (RBNN) to further amplify the memory efficiency for
resource-constrained IoT devices. Generally, the RBNN can be reconfigured on
demand to achieve any one of M (M>1) distinct tasks with the same parameter
set, so the memory requirement is that of a single task. In other words, memory
utilization improves by a factor of M. Our extensive experiments corroborate
that up to seven commonly used tasks can co-exist (the value of M can be
larger). These tasks, with varying numbers of classes, show no or negligible
accuracy drop-off on three popular binarized DNN architectures
including VGG, ResNet, and ReActNet. The tasks span across different domains,
e.g., computer vision and audio domains validated herein, with the prerequisite
that the model architecture can serve those cross-domain tasks. To protect the
intellectual property of an RBNN model, the reconfiguration can be controlled
by both a user key and a device-unique root key generated by the intrinsic
hardware fingerprint. By doing so, an RBNN model can only be used by a paying
user on an authorized device, benefiting both the user and the model provider.
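The key-controlled reconfiguration idea can be sketched as follows (a minimal illustration, not RBNN's actual mechanism; all names and the permutation-plus-sign-flip mapping are assumptions): a single stored binary parameter set is mapped to usable weights by a transformation derived from both the user key and the device root key, so the correct weights emerge only with the right key pair.

```python
import hashlib
import numpy as np

def derive_config(user_key: bytes, root_key: bytes, n: int):
    # Combine the user key and device-unique root key into one seed, then
    # derive a permutation and sign-flip pattern for n binary weights.
    seed = int.from_bytes(hashlib.sha256(user_key + root_key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    flips = rng.integers(0, 2, size=n) * 2 - 1   # entries in {-1, +1}
    return perm, flips

def reconfigure(shared_bits: np.ndarray, user_key: bytes, root_key: bytes):
    # Map the single stored parameter set to key-specific binary weights.
    perm, flips = derive_config(user_key, root_key, shared_bits.size)
    return shared_bits[perm] * flips

# One stored binary parameter set (values in {-1, +1}), reused everywhere.
shared = np.random.default_rng(2).integers(0, 2, size=64) * 2 - 1

w_ok    = reconfigure(shared, b"user-key-A", b"device-root")
w_wrong = reconfigure(shared, b"user-key-A", b"other-device")

# Same keys reproduce the same weights; a wrong device key does not.
assert np.array_equal(w_ok, reconfigure(shared, b"user-key-A", b"device-root"))
assert not np.array_equal(w_ok, w_wrong)
```

In the paper's setting the root key comes from an intrinsic hardware fingerprint rather than a stored constant, which is what binds the model to one physical device.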
Asymmetric Trapdoor Pseudorandom Generators: Definitions, Constructions, and Applications to Homomorphic Signatures with Shorter Public Keys
We introduce a new primitive called the asymmetric trapdoor pseudorandom generator (ATPRG), which can be viewed as a pseudorandom generator with two additional trapdoors (a public trapdoor and a secret trapdoor), or as a backdoor pseudorandom generator with one additional trapdoor (a secret trapdoor). Specifically, for users who know neither the public trapdoor nor the secret trapdoor, ATPRG only generates public pseudorandom numbers, behaving exactly like an ordinary pseudorandom generator. Users holding the public trapdoor, however, can use any public pseudorandom number to recover the whole sequence, as with a backdoor pseudorandom generator. Further, users holding the secret trapdoor can use the sequence to generate a sequence of secret pseudorandom numbers. ATPRG can help design more space-efficient protocols where data/inputs/messages must respect a predefined (unchangeable) order to be correctly processed in a computation or a malleable cryptographic system.
As for applications of ATPRG, we construct the first homomorphic signature scheme (in the standard model) whose public key size is independent of the dataset size. As a comparison, the shortest public key among existing schemes, proposed by Catalano et al. (CRYPTO'15), grows with the dataset size and the dimension of the message. In other words, we provide the first homomorphic signature scheme whose public key size is independent of the dataset size, for one-dimensional messages.
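The general notion of a trapdoor in a pseudorandom generator can be made concrete with a classic Blum-Blum-Shub-style sketch (an illustration of trapdoor PRGs in general, not the ATPRG construction itself; the parameters are toy-sized and insecure): squaring modulo N = p*q steps the state forward for everyone, while the factorization of N acts as a secret trapdoor that lets its holder step the sequence backward via modular square roots.

```python
# Toy Blum-Blum-Shub-style trapdoor PRG (illustrative, insecure parameters).
p, q = 10007, 10039        # small primes with p, q = 3 (mod 4)
N = p * q

def forward(x: int) -> int:
    # Public step: anyone can advance the state by squaring mod N.
    return x * x % N

def _qr_sqrt(a: int, r: int) -> int:
    # Square root of a mod prime r (r = 3 mod 4) that is itself a
    # quadratic residue mod r; exactly one of the two roots is.
    s = pow(a, (r + 1) // 4, r)
    if pow(s, (r - 1) // 2, r) != 1:
        s = r - s
    return s

def backward(x: int) -> int:
    # Secret-trapdoor step: knowing p and q, invert the squaring by taking
    # the residue-class square roots and recombining them via the CRT.
    sp, sq = _qr_sqrt(x % p, p), _qr_sqrt(x % q, q)
    inv_p = pow(p, -1, q)
    return (sp + p * ((sq - sp) * inv_p % q)) % N

seed = pow(123456, 2, N)   # start from a quadratic residue mod N
state = seed
for _ in range(5):
    state = forward(state)

back = state
for _ in range(5):
    back = backward(back)
assert back == seed        # only the trapdoor holder can rewind the sequence
```

ATPRG's distinguishing feature, per the abstract, is the *asymmetric* pair of trapdoors (a public one for recovering the public sequence and a secret one for deriving secret pseudorandom numbers), which this single-trapdoor sketch does not capture.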
- …