67 research outputs found
Membership Inference Attacks and Defenses in Neural Network Pruning
Neural network pruning has been an essential technique to reduce the
computation and memory requirements of deploying deep neural networks on
resource-constrained devices. Most existing research focuses primarily on
balancing the sparsity and accuracy of a pruned neural network by strategically
removing insignificant parameters and retraining the pruned model. However, such
reuse of training samples poses serious privacy risks due to increased
memorization, which has not yet been investigated.
In this paper, we conduct the first analysis of privacy risks in neural
network pruning. Specifically, we investigate the impacts of neural network
pruning on training data privacy, i.e., membership inference attacks. We first
explore the impact of neural network pruning on prediction divergence, observing
that the pruning process disproportionately affects the pruned model's behavior
for members versus non-members, and that this divergence varies across classes
in a fine-grained manner. Motivated by this divergence, we propose a
self-attention membership inference attack against pruned neural networks.
Extensive experiments rigorously evaluate the privacy impacts of different
pruning approaches, sparsity levels, and levels of adversary knowledge. The
proposed attack achieves higher attack performance on the pruned models than
eight existing membership inference attacks.
In addition, we propose a new defense mechanism that protects the pruning process
by mitigating the prediction divergence based on KL-divergence distance;
experiments demonstrate that it effectively mitigates the privacy risks while
maintaining the sparsity and accuracy of the pruned models.
Comment: This paper has been accepted to USENIX Security Symposium 2022. This is an extended version with more experimental results.
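As a rough illustration of the kind of KL-based regularization described above, the sketch below adds a KL term to the pruned model's fine-tuning loss, pulling its predictive distribution toward a frozen reference model. The choice of reference model and the weight `alpha` are illustrative assumptions, not the paper's exact defense.

```python
# Sketch: KL-regularized fine-tuning of a pruned model (hypothetical setup).
# The pruned model's predictive distribution is pulled toward a frozen
# reference distribution to reduce member/non-member prediction divergence.
import torch
import torch.nn.functional as F

def fine_tune_step(pruned_model, reference_model, x, y, optimizer, alpha=0.5):
    pruned_model.train()
    logits = pruned_model(x)
    task_loss = F.cross_entropy(logits, y)

    with torch.no_grad():                      # reference model stays frozen
        ref_probs = F.softmax(reference_model(x), dim=1)

    # KL(pruned || reference), averaged over the batch
    kl = F.kl_div(F.log_softmax(logits, dim=1), ref_probs, reduction="batchmean")

    loss = task_loss + alpha * kl              # alpha trades accuracy vs. divergence
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```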
PATROL: Privacy-Oriented Pruning for Collaborative Inference Against Model Inversion Attacks
Collaborative inference has been a promising solution to enable
resource-constrained edge devices to perform inference using state-of-the-art
deep neural networks (DNNs). In collaborative inference, the edge device first
feeds the input to a partial DNN locally and then uploads the intermediate
result to the cloud to complete the inference. However, recent research
indicates model inversion attacks (MIAs) can reconstruct input data from
intermediate results, posing serious privacy concerns for collaborative
inference. Existing perturbation and cryptography techniques are inefficient
and unreliable in defending against MIAs while performing accurate inference.
This paper provides a viable solution, named PATROL, which develops
privacy-oriented pruning to balance privacy, efficiency, and utility of
collaborative inference. PATROL takes advantage of the fact that later layers
in a DNN can extract more task-specific features. Given limited local resources
for collaborative inference, PATROL relies on pruning techniques to deploy more
layers at the edge, strengthening task-specific features for inference while
reducing task-irrelevant but sensitive features for privacy preservation. To
achieve privacy-oriented pruning, PATROL introduces two key components,
Lipschitz regularization and adversarial reconstruction training, which
respectively increase reconstruction errors by reducing the stability of MIAs
and strengthen the target inference model through adversarial training.
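To make the two components concrete, here is a minimal sketch assuming a split edge/cloud model and a proxy inversion decoder. The loss weights and the gradient-penalty surrogate used for the Lipschitz term are illustrative assumptions, not PATROL's exact method.

```python
# Sketch of adversarial reconstruction training for a split (edge/cloud) model.
import torch
import torch.nn.functional as F

def train_step(edge, cloud, decoder, x, y, opt_model, opt_decoder,
               lam_adv=0.1, lam_lip=0.01):
    # 1) Train the proxy inversion decoder to reconstruct x from edge features.
    z = edge(x).detach()
    opt_decoder.zero_grad()
    rec_loss = F.mse_loss(decoder(z), x)
    rec_loss.backward()
    opt_decoder.step()

    # 2) Train edge+cloud: keep task accuracy, hurt reconstruction, and keep
    #    the edge mapping smooth (gradient-norm surrogate for a Lipschitz bound).
    x = x.clone().requires_grad_(True)
    z = edge(x)
    task_loss = F.cross_entropy(cloud(z), y)
    adv_loss = -F.mse_loss(decoder(z), x)          # maximize attacker error
    grad = torch.autograd.grad(z.sum(), x, create_graph=True)[0]
    lip_penalty = grad.pow(2).mean()

    opt_model.zero_grad()
    (task_loss + lam_adv * adv_loss + lam_lip * lip_penalty).backward()
    opt_model.step()
```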
Fed-CPrompt: Contrastive Prompt for Rehearsal-Free Federated Continual Learning
Federated continual learning (FCL) learns incremental tasks over time from
confidential datasets distributed across clients. This paper focuses on
rehearsal-free FCL, which has severe forgetting issues when learning new tasks
due to the lack of access to historical task data. To address this issue, we
propose Fed-CPrompt based on prompt learning techniques to obtain task-specific
prompts in a communication-efficient way. Fed-CPrompt introduces two key
components, asynchronous prompt learning and contrastive continual loss, to
handle asynchronous task arrival and heterogeneous data distributions in FCL,
respectively. Extensive experiments demonstrate the effectiveness of
Fed-CPrompt in achieving state-of-the-art rehearsal-free FCL performance.
Comment: Accepted by FL-ICML 202
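As one possible reading of a contrastive continual loss over task prompts (not Fed-CPrompt's exact formulation), the sketch below treats the current task's prompt as a positive anchor and frozen prompts of earlier tasks as negatives; the temperature and the use of prompt embeddings as anchors are assumptions.

```python
# Sketch of a contrastive continual loss over task prompts (illustrative only).
import torch
import torch.nn.functional as F

def contrastive_continual_loss(feat, cur_prompt, old_prompts, tau=0.1):
    # feat: (B, D) features of current-task samples; prompts are (D,) vectors.
    feat = F.normalize(feat, dim=1)
    anchors = F.normalize(torch.stack([cur_prompt] + list(old_prompts)), dim=1)
    logits = feat @ anchors.t() / tau            # (B, 1 + num_old_tasks)
    labels = torch.zeros(feat.size(0), dtype=torch.long, device=feat.device)
    return F.cross_entropy(logits, labels)       # index 0 = current task's prompt
```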
Distributed Pruning Towards Tiny Neural Networks in Federated Learning
Neural network pruning is an essential technique for reducing the size and
complexity of deep neural networks, enabling large-scale models on devices with
limited resources. However, existing pruning approaches heavily rely on
training data for guiding the pruning strategies, making them ineffective for
federated learning over distributed and confidential datasets. Additionally,
the memory- and computation-intensive pruning process becomes infeasible for
resource-constrained devices in federated learning. To address these
challenges, we propose FedTiny, a distributed pruning framework for federated
learning that generates specialized tiny models for memory- and
computing-constrained devices. We introduce two key modules in FedTiny to
adaptively search coarse- and finer-pruned specialized models to fit deployment
scenarios with sparse and cheap local computation. First, an adaptive batch
normalization selection module is designed to mitigate biases in pruning caused
by the heterogeneity of local data. Second, a lightweight progressive pruning
module prunes the models at a finer granularity under strict memory and
computational budgets, allowing the pruning policy for each layer to be
determined gradually rather than by evaluating the overall model structure. The experimental results
demonstrate the effectiveness of FedTiny, which outperforms state-of-the-art
approaches, particularly when compressing deep models to extremely sparse tiny
models. FedTiny achieves an accuracy improvement of 2.61% while significantly
reducing the computational cost by 95.91% and the memory footprint by 94.01%
compared to state-of-the-art methods.
Comment: This paper has been accepted to ICDCS 202
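The following is a minimal sketch of the progressive, layer-by-layer pruning idea under a parameter budget; the magnitude criterion, step size, and budget accounting are placeholder assumptions rather than FedTiny's actual module.

```python
# Sketch: layer-by-layer progressive magnitude pruning under a parameter budget.
import torch

def progressive_prune(model, budget_params, step=0.1):
    layers = [m for m in model.modules() if isinstance(m, torch.nn.Linear)]

    def live_params():
        return sum(int(m.weight.count_nonzero()) for m in layers)

    while live_params() > budget_params:
        for m in layers:                          # decide one layer at a time
            if live_params() <= budget_params:
                break
            w = m.weight.data
            nz = w[w != 0].abs()                  # surviving weight magnitudes
            if nz.numel() == 0:
                continue
            k = max(1, int(step * nz.numel()))
            thresh = nz.kthvalue(k).values        # k-th smallest surviving magnitude
            w[w.abs() <= thresh] = 0.0            # drop the weakest weights here
    return model
```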
Improving channel resilience for task-oriented semantic communications: A unified information bottleneck approach
Task-oriented semantic communications (TSC) enhance radio resource efficiency by transmitting task-relevant semantic information. However, current research often overlooks the inherent semantic distinctions among encoded features. Due to unavoidable channel variations from time- and frequency-selective fading, semantically sensitive feature units could be more susceptible to erroneous inference if corrupted by dynamic channels. Therefore, this letter introduces a unified channel-resilient TSC framework via information bottleneck. This framework complements existing TSC approaches by controlling information flow to capture fine-grained feature-level semantic robustness. Experiments on a case study for real-time subchannel allocation validate the framework’s effectiveness.
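For context, such frameworks typically build on the standard information-bottleneck objective; the letter's channel-resilient variant may add channel-dependent terms, so this is only the generic form. The encoder distribution $p(z \mid x)$ is chosen to solve

\[
\min_{p(z \mid x)} \; I(X;Z) \;-\; \beta\, I(Z;Y),
\]

where $X$ is the input, $Z$ the transmitted feature, $Y$ the task variable, and $\beta$ controls how much task-relevant information is preserved relative to compression.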
Graphene/silicon heterojunction for reconfigurable phase-relevant activation function in coherent optical neural networks
Optical neural networks (ONNs) herald a new era in information and
communication technologies and have implemented various intelligent
applications. In an ONN, the activation function (AF) is a crucial component
determining network performance, and on-chip AF devices are still in
development. Here, we demonstrate, for the first time, on-chip reconfigurable AF
devices with phase activation realized by dual-functional graphene/silicon (Gra/Si)
heterojunctions. With optical modulation and detection in one device, time
delays are shorter, energy consumption is lower, reconfigurability is higher
and the device footprint is smaller than those of other on-chip AF strategies. The
experimental modulation voltage (power) of our Gra/Si heterojunction is as low
as 1 V (0.5 mW), superior to many pure silicon counterparts. For photodetection,
a high responsivity of over 200 mA/W is realized.
The generated nonlinear functions are fed into a complex-valued ONN for
handwritten-letter and image recognition tasks, showing improved accuracy and
the potential of highly efficient, fully integrated on-chip ONNs. Our results
offer new insights for on-chip ONN devices and pave the way to high-performance
integrated optoelectronic computing circuits.
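As a hedged illustration of how a measured phase-relevant response could serve as the activation of a complex-valued ONN layer (the device's actual transfer characteristics are not reproduced here), the sketch below interpolates hypothetical measured gain and phase curves.

```python
# Sketch: using a measured (gain, phase) nonlinear response as a complex-valued
# activation. The interpolated transfer curves are placeholders, not the
# Gra/Si device model.
import numpy as np

def phase_relevant_activation(z, measured_power, measured_gain, measured_phase):
    """Apply a measured electro-optic response to complex field amplitudes z."""
    p = np.abs(z) ** 2                                   # optical power at the AF
    gain = np.interp(p, measured_power, measured_gain)   # amplitude response
    dphi = np.interp(p, measured_power, measured_phase)  # phase response
    return gain * z * np.exp(1j * dphi)
```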
Repetitive transcranial magnetic stimulation of the dorsolateral prefrontal cortex modulates electroencephalographic functional connectivity in Alzheimer’s disease
Background: Increasing evidence demonstrates that repetitive transcranial magnetic stimulation (rTMS) treatment of the dorsolateral prefrontal cortex is beneficial for improving cognitive function in patients with Alzheimer’s disease (AD); however, the underlying mechanism of its therapeutic effect remains unclear.
Objectives/Hypothesis: The aim of this study was to investigate the impact of rTMS to the dorsolateral prefrontal cortex on functional connectivity, along with treatment response, in AD patients with different severity of cognitive impairment.
Methods: We conducted a 2-week treatment course of 10-Hz rTMS over the left dorsolateral prefrontal cortex in 23 patients with AD, who were split into mild and moderate cognitive impairment subgroups. Resting-state electroencephalography and general cognition were assessed before and after rTMS. Power envelope connectivity was used to calculate functional connectivity at the source level. The functional connectivity of AD patients and 11 cognitively normal individuals was compared.
Results: Power envelope connectivity was higher in the delta and theta bands but lower in the beta band in the moderate cognitive impairment group, compared to the cognitively normal controls, at baseline (p < 0.05). The mild cognitive impairment group had no significant abnormalities. Montreal Cognitive Assessment scores improved after rTMS in the moderate and mild cognitive impairment groups. Power envelope connectivity in the beta band post-rTMS was increased in the moderate group (p < 0.05) but not in the mild group. No significant changes in the delta and theta bands were found after rTMS in either the moderate or the mild group.
Conclusion: High-frequency rTMS to the dorsolateral prefrontal cortex modulates electroencephalographic functional connectivity while improving cognitive function in patients with AD. Increased beta connectivity may have an important mechanistic role in rTMS therapeutic effects.
Yi Guo, Ge Dang, Brenton Hordacre, Xiaolin Su, Nan Yan, Siyan Chen, Huixia Ren, Xue Shi, Min Cai, Sirui Zhang and Xiaoyong La
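For readers unfamiliar with the connectivity metric, the sketch below shows a simplified power-envelope connectivity computation for two source time series (band-pass filtering, Hilbert envelope, correlation of log power). The band edges are placeholders, and the source reconstruction and envelope orthogonalization used in practice are omitted.

```python
# Simplified sketch of power-envelope connectivity between two source signals.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def power_envelope_connectivity(x, y, fs, band=(13.0, 30.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env_x = np.log(np.abs(hilbert(filtfilt(b, a, x))) ** 2 + 1e-12)
    env_y = np.log(np.abs(hilbert(filtfilt(b, a, y))) ** 2 + 1e-12)
    return np.corrcoef(env_x, env_y)[0, 1]      # Pearson r of log power envelopes
```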
Functional connectivity changes are correlated with sleep improvement in chronic insomnia patients after rTMS treatment
Background: Repetitive transcranial magnetic stimulation (rTMS) has been increasingly used as a treatment modality for chronic insomnia disorder (CID). However, our understanding of the mechanisms underlying the efficacy of rTMS is limited.
Objective: This study aimed to investigate rTMS-induced alterations in resting-state functional connectivity and to find potential connectivity biomarkers for predicting and tracking clinical outcomes after rTMS.
Methods: Thirty-seven patients with CID received a 10-session low-frequency rTMS treatment applied to the right dorsolateral prefrontal cortex. Before and after treatment, the patients underwent resting-state electroencephalography (EEG) recordings and a sleep quality assessment using the Pittsburgh Sleep Quality Index (PSQI).
Results: After treatment, rTMS significantly increased the connectivity of 34 connectomes in the lower alpha frequency band (8–10 Hz). Additionally, alterations in functional connectivity between the left insula and the left inferior eye junction, as well as between the left insula and the medial prefrontal cortex, were associated with a decrease in PSQI score. Further, the correlation between functional connectivity and PSQI persisted 1 month after the completion of rTMS, as evidenced by subsequent EEG recordings and the PSQI assessment.
Conclusion: Based on these results, we established a link between alterations in functional connectivity and clinical outcomes of rTMS, suggesting that EEG-derived functional connectivity changes were associated with the clinical improvement of rTMS in treating CID. These findings provide preliminary evidence that rTMS may improve insomnia symptoms by modifying functional connectivity, which can inform prospective clinical trials and potentially treatment optimization.
Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180 000 PubMed-listed articles with regard to their respective seed (input) article/s. The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title and title/abstract-based search engines for relevant articles in biomedical research.
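As an illustration of the query-by-document baselines mentioned above, the sketch below ranks candidate abstracts against a seed abstract using TF-IDF and cosine similarity with scikit-learn; BM25 and PubMed Related Articles follow the same pattern with different scoring functions.

```python
# Sketch: TF-IDF cosine-similarity ranking of candidate abstracts against a seed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_by_tfidf(seed_abstract, candidate_abstracts):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform([seed_abstract] + candidate_abstracts)
    sims = cosine_similarity(X[0], X[1:]).ravel()      # seed vs. each candidate
    order = sims.argsort()[::-1]                       # most similar first
    return [(candidate_abstracts[i], float(sims[i])) for i in order]
```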