Preserving Data Privacy for ML-driven Applications in Open Radio Access Networks
Deep learning offers a promising solution to improve spectrum access
techniques by utilizing data-driven approaches to manage and share limited
spectrum resources for emerging applications. For several of these
applications, the sensitive wireless data (such as spectrograms) are stored in
a shared database or multistakeholder cloud environment and are therefore prone
to privacy leaks. This paper addresses such privacy concerns through a
representative case study: a shared database within the near-real-time
(near-RT) RAN Intelligent Controller of a 5G Open Radio Access Network
(O-RAN). We focus on securing the
data that can be used by machine learning (ML) models for spectrum sharing and
interference mitigation applications without compromising the model and network
performances. The underlying idea is to (i) encrypt the data with a
shuffling-based learnable encryption technique and then (ii) employ a custom
vision transformer (ViT) as the trained ML model, which can perform accurate
inferences on such encrypted data. The paper
offers a thorough analysis and comparisons with analogous convolutional neural
networks (CNN) as well as deeper architectures (such as ResNet-50) as
baselines. Our experiments show that the proposed approach significantly
outperforms the baseline CNN on encrypted data, with improvements of 24.5% in
accuracy and 23.9% in F1-score. Although the deeper ResNet-50 architecture is
slightly more accurate (by 4.4%), the proposed approach uses 99.32% fewer
parameters and thus reduces prediction time by nearly 60%.
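The shuffling-based learnable encryption described above can be sketched as a
keyed permutation of equally sized image patches. The following NumPy sketch is
illustrative only; the patch size, key, and helper names are assumptions, not
the paper's actual parameters or implementation:

```python
import numpy as np

def to_patches(image, patch):
    """Split an (H, W) array into a sequence of (patch, patch) blocks."""
    h, w = image.shape
    gh, gw = h // patch, w // patch
    return (image.reshape(gh, patch, gw, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(gh * gw, patch, patch))

def from_patches(patches, h, w, patch):
    """Reassemble a sequence of blocks back into an (H, W) array."""
    gh, gw = h // patch, w // patch
    return (patches.reshape(gh, gw, patch, patch)
                   .transpose(0, 2, 1, 3)
                   .reshape(h, w))

def shuffle_encrypt(image, patch, key):
    """Permute patches with a permutation derived from a secret integer key."""
    perm = np.random.default_rng(key).permutation(image.size // patch**2)
    return from_patches(to_patches(image, patch)[perm], *image.shape, patch)

def shuffle_decrypt(image, patch, key):
    """Invert the keyed permutation to recover the original image."""
    perm = np.random.default_rng(key).permutation(image.size // patch**2)
    inverse = np.argsort(perm)
    return from_patches(to_patches(image, patch)[inverse], *image.shape, patch)
```

A holder of the key can invert the permutation exactly, while the ML model is
trained to make inferences on the shuffled data directly.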
Human-imperceptible, Machine-recognizable Images
Massive amounts of human-related data are collected to train neural networks
for computer vision tasks. This exposes a conflict for software engineers
between building better AI systems and keeping their distance from sensitive
training data. To reconcile this conflict, this paper proposes an efficient
privacy-preserving learning paradigm, where images are first encrypted to
become ``human-imperceptible, machine-recognizable'' via one of two
encryption strategies: (1) randomly shuffling a set of equally sized patches
or (2) mixing up sub-patches of the images. Then, minimal adaptations are made
to the vision transformer to enable it to learn on the encrypted images for vision
tasks, including image classification and object detection. Extensive
experiments on ImageNet and COCO show that the proposed paradigm achieves
accuracy comparable to competing methods. Decrypting the encrypted images
requires solving an NP-hard jigsaw puzzle or an ill-posed inverse problem,
which is empirically shown to be intractable for various attackers, including
a powerful vision-transformer-based attacker. We thus show that the proposed
paradigm renders the encrypted images human-imperceptible while preserving
machine-recognizable information. The code is available at
\url{https://github.com/FushengHao/PrivacyPreservingML}.
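One intuition for why a ViT needs only minimal adaptation to learn on
patch-shuffled images is that, once fixed positional embeddings are dropped, a
pooled token representation is invariant to the order of the patch tokens. The
toy NumPy sketch below illustrates this with a single linear patch embedding
and mean pooling; it is a simplified stand-in, not the paper's architecture:

```python
import numpy as np

def patch_tokens(image, patch):
    """Flatten an (H, W) image into a sequence of patch tokens."""
    h, w = image.shape
    gh, gw = h // patch, w // patch
    return (image.reshape(gh, patch, gw, patch)
                 .transpose(0, 2, 1, 3)
                 .reshape(gh * gw, patch * patch))

def pooled_embedding(tokens, weight):
    """Linear patch embedding followed by mean pooling over the token axis.

    With no positional embedding, the pooled feature is the same for any
    ordering of the tokens, so patch shuffling cannot change it.
    """
    return (tokens @ weight).mean(axis=0)
```

Shuffling the token sequence before embedding leaves the pooled feature
unchanged (up to floating-point summation order), which is why the encrypted
and plain images can carry the same machine-recognizable information.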
- …