5 research outputs found
DPD-fVAE: Synthetic Data Generation Using Federated Variational Autoencoders With Differentially-Private Decoder
Federated learning (FL) is attracting increasing attention for processing
sensitive, distributed datasets common to domains such as healthcare. Instead
of directly training classification models on these datasets, recent works have
considered training data generators capable of synthesising a new dataset that
is not subject to any privacy restrictions. The synthetic data can thus be
made available to anyone, enabling further evaluation of machine learning
architectures and research questions off-site. As an additional layer of
privacy preservation, differential privacy can be introduced into the training
process. We propose DPD-fVAE, a federated Variational Autoencoder with
Differentially-Private Decoder, to synthesise a new, labelled dataset for
subsequent machine learning tasks. By synchronising only the decoder component
via FL, we reduce the privacy cost per epoch and thus enable better data
generators. In our evaluation on MNIST, Fashion-MNIST and CelebA, we show the
benefits of DPD-fVAE and report performance competitive with related work in
terms of Fréchet Inception Distance and the accuracy of classifiers trained on
the synthesised dataset.
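The decoder-only synchronisation described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function names, the flat vector standing in for decoder weights, and the clipping/noise constants are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_and_noise(update, clip_norm=1.0, noise_mult=1.1):
    # Clip the client's decoder update to bound its sensitivity,
    # then add Gaussian noise (the standard DP-SGD recipe).
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_mult * clip_norm, size=update.shape)

def federated_round(global_decoder, client_updates):
    # Average the privatised decoder updates. Encoder weights never
    # leave the clients, so only the decoder pays a privacy cost.
    noisy = [clip_and_noise(u) for u in client_updates]
    return global_decoder + np.mean(noisy, axis=0)

# Toy round: 3 clients, a flat vector standing in for decoder weights.
decoder = np.zeros(8)
client_updates = [rng.normal(size=8) for _ in range(3)]
decoder = federated_round(decoder, client_updates)
print(decoder.shape)  # (8,)
```

Because only the decoder parameters are released each round, the per-epoch privacy spend is smaller than for a fully shared VAE; the actual per-round accounting (e.g. via a moments or Rényi-DP accountant) is omitted here.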
Computational approaches to alleviate alarm fatigue in intensive care medicine: A systematic literature review
For decades, patient monitoring technology has been used in the intensive care unit (ICU) to guide therapy and to alert staff when a vital sign leaves a predefined range. However, large numbers of technically false or clinically irrelevant alarms provoke alarm fatigue in staff, leading to desensitisation towards critical alarms. With this systematic review, we follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist to summarise scientific efforts aimed at developing IT systems to reduce alarm fatigue in ICUs. 69 peer-reviewed publications were included. The majority of publications targeted the avoidance of technically false alarms, while the remainder focused on prediction of patient deterioration or alarm presentation. The investigated alarm types were mostly associated with heart rate or arrhythmia, followed by arterial blood pressure, oxygen saturation, and respiratory rate. Most publications focused on the development of software solutions, some on wearables, smartphones, or head-mounted displays for delivering alarms to staff. The most commonly used statistical models were tree-based. In conclusion, we found strong evidence that alarm fatigue can be alleviated by IT-based solutions. However, future efforts should focus more on the avoidance of clinically non-actionable alarms, which could be accelerated by improving data availability.
Defending against Reconstruction Attacks through Differentially Private Federated Learning for Classification of Heterogeneous Chest X-ray Data
Privacy regulations and the physical distribution of heterogeneous data are often primary concerns for the development of deep learning models in a medical context. This paper evaluates the feasibility of differentially private federated learning for chest X-ray classification as a defense against data privacy attacks. To the best of our knowledge, we are the first to directly compare the impact of differentially private training on two different neural network architectures, DenseNet121 and ResNet50. Extending the federated learning environments previously analyzed in terms of privacy, we simulated a heterogeneous and imbalanced federated setting by distributing images from the public CheXpert and Mendeley chest X-ray datasets unevenly among 36 clients. Both non-private baseline models achieved an area under the receiver operating characteristic curve (AUC) of 0.94 on the binary classification task of detecting the presence of a medical finding. We demonstrate that both model architectures are vulnerable to privacy violation by applying image reconstruction attacks to local model updates from individual clients. The attack was particularly successful during later training stages. To mitigate the risk of a privacy breach, we integrated Rényi differential privacy with a Gaussian noise mechanism into local model training. We evaluate model performance and attack vulnerability for privacy budgets ε∈{1,3,6,10}. The DenseNet121 achieved the best utility-privacy trade-off with an AUC of 0.94 for ε=6. Model performance deteriorated slightly for individual clients compared to the non-private baseline. The ResNet50 only reached an AUC of 0.76 in the same privacy setting. Its performance was inferior to that of the DenseNet121 for all considered privacy constraints, suggesting that the DenseNet121 architecture is more robust to differentially private training.
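The mitigation described above, clipping each record's gradient and adding Gaussian noise before a client shares its update, can be sketched as follows. This is a hedged NumPy sketch assuming per-example gradients are available as arrays; the function name `privatise_update` and the constants are illustrative, and the ε accounting via Rényi DP would be handled by an external accountant (e.g. a library such as Opacus), which is not shown.

```python
import numpy as np

def privatise_update(grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Clip each per-example gradient so no single record can dominate
    # the update a reconstruction attacker gets to see, then add
    # Gaussian noise calibrated to the clipping bound.
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in grads]
    summed = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(grads)

# One outlier record no longer dominates the shared update.
grads = [np.full(4, 100.0), np.full(4, -0.5)]
update = privatise_update(grads, rng=np.random.default_rng(1))
print(update.shape)  # (4,)
```

The clipping bound caps the influence of any single chest X-ray on the released update, which is what makes the image reconstruction attack on local updates lose its signal as the noise multiplier grows.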
Differentially Private Federated Learning for Anomaly Detection in eHealth Networks
<p>An increasing number of ubiquitous devices are being used in the medical field to collect patient information. These connected sensors can potentially be exploited by third parties seeking to misuse personal information and compromise security, which could ultimately even result in patient death. This paper addresses the security concerns in eHealth networks and suggests a new approach to dealing with anomalies. In particular, we propose a concept for safe in-hospital learning from Internet of Health Things (IoHT) device data while securing the network traffic with a collaboratively trained anomaly detection system using federated learning. In this way, real-time traffic anomaly detection is achieved while maintaining collaboration between hospitals and keeping local data secure and private. Since not only the network metadata but also the actual medical data is relevant to anomaly detection, we propose to use differential privacy (DP) to provide formal guarantees on the privacy budget spent during federated learning.</p>
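The collaborative scheme outlined above can be sketched with a toy detector: each hospital shares only noised sufficient statistics of a traffic feature, and a shared z-score detector is built from the aggregate. All names, thresholds, and the choice of a z-score model are illustrative assumptions; the abstract does not specify the actual detector.

```python
import numpy as np

def local_stats(traffic, rng, sensitivity=1.0, noise_scale=0.5):
    # A hospital releases only noised sums of its traffic feature,
    # never the raw packets (Gaussian-mechanism style).
    n = len(traffic)
    s = traffic.sum() + rng.normal(0.0, noise_scale * sensitivity)
    sq = (traffic ** 2).sum() + rng.normal(0.0, noise_scale * sensitivity)
    return n, s, sq

def aggregate(stats):
    # The server combines the noised statistics into a global
    # mean/std without ever touching client data.
    n = sum(c for c, _, _ in stats)
    mean = sum(s for _, s, _ in stats) / n
    var = sum(q for _, _, q in stats) / n - mean ** 2
    return mean, max(var, 1e-12) ** 0.5

def is_anomalous(x, mean, std, z=3.0):
    # Flag traffic measurements far from the collaborative baseline.
    return bool(abs(x - mean) > z * std)

rng = np.random.default_rng(0)
hospitals = [rng.normal(10.0, 1.0, 500) for _ in range(3)]
mean, std = aggregate([local_stats(h, rng) for h in hospitals])
print(is_anomalous(25.0, mean, std))  # an extreme rate is flagged
```

Because only noised aggregates cross hospital boundaries, the privacy loss of each round can be tracked formally, which is the role DP plays in the proposed system.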