On the reduction of oxygen from dispersed medium
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2007. Includes bibliographical references (p. 105-116).
The reduction of oxygen from an organic phase dispersed in a concentrated electrolyte is investigated. Dispersed organic phases are used to enhance oxygen transport in fermenters and artificial blood substitutes. This work evaluates the feasibility of using a dispersed organic phase to transport oxygen in a fuel cell. An emulsion of perfluorohexane in a 20 wt% potassium hydroxide solution was formed with a lecithin surfactant. Oxygen was reduced from the emulsion on a rotating disk electrode. The dispersed phase did not contribute to oxygen transport to the surface of the electrode; an explanation is given based on the hydrodynamics of an emulsion under a rotating disk electrode. To eliminate the effect of hydrodynamics, the results of a hydrostatic transient diffusion experiment (Cottrell experiment) are reported. Again, no significant enhancement of the oxygen transport rate was observed. The dispersed phase is shown by NMR spectroscopy to contain oxygen. It is argued that the expectation of an enhancement from the use of a dispersed phase may rest on inapplicable transport models; the presence of the lecithin surfactant may also impede transport. An oscillating electrode is used to reduce oxygen from a continuous perfluorohexane phase. In this case, the rate of oxygen reduction is limited by diffusion across an aqueous layer trapped at the surface of the electrode by the electrode's relative affinity for aqueous solution over perfluorohexane. The implications for the use of a dispersed organic phase in fuel cells are discussed, along with the use of a rotating disk electrode in heterogeneous media and the need for a mass transport model in liquid-liquid dispersions.
by Omar H. Roushdy, Ph.D.
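The Cottrell experiment referred to above has a closed-form description: the diffusion-limited current after a potential step is i(t) = n F A c sqrt(D / (pi t)). A minimal sketch of this relation follows; the electrode area, oxygen concentration, and diffusivity below are illustrative assumptions, not values taken from the thesis.

```python
import math

def cottrell_current(n, F, A, c, D, t):
    """Diffusion-limited current i(t) = n * F * A * c * sqrt(D / (pi * t))."""
    return n * F * A * c * math.sqrt(D / (math.pi * t))

# Illustrative parameter values (assumptions, not from the thesis):
n = 4            # electrons per O2 molecule in alkaline reduction
F = 96485.0      # Faraday constant, C/mol
A = 0.196e-4     # electrode area, m^2 (a 0.5 cm diameter disk)
c = 0.8          # dissolved O2 concentration, mol/m^3
D = 1.9e-9       # O2 diffusivity in aqueous KOH, m^2/s

for t in (0.1, 1.0, 10.0):
    print(f"t = {t:5.1f} s  i = {cottrell_current(n, F, A, c, D, t) * 1e6:8.2f} uA")
```

The characteristic 1/sqrt(t) decay is what makes the hydrostatic transient useful: any transport enhancement from the dispersed phase would appear as a current above this baseline.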
How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?
Federated learning (FL) has attracted growing interest for enabling
privacy-preserving machine learning on data stored at multiple users while
avoiding moving the data off-device. However, while data never leaves users'
devices, privacy still cannot be guaranteed since significant computations on
users' training data are shared in the form of trained local models. These
local models have recently been shown to pose a substantial privacy threat
through different privacy attacks such as model inversion attacks. As a remedy,
Secure Aggregation (SA) has been developed as a framework to preserve privacy
in FL, by guaranteeing the server can only learn the global aggregated model
update but not the individual model updates. While SA ensures no additional
information is leaked about the individual model update beyond the aggregated
model update, there are no formal guarantees on how much privacy FL with SA can
actually offer, as information about the individual dataset can still
potentially leak through the aggregated model computed at the server. In this
work, we perform a first analysis of the formal privacy guarantees for FL with
SA. Specifically, we use Mutual Information (MI) as a quantification metric and
derive upper bounds on how much information about each user's dataset can leak
through the aggregated model update. When using the FedSGD aggregation
algorithm, our theoretical bounds show that the amount of privacy leakage
reduces linearly with the number of users participating in FL with SA. To
validate our theoretical bounds, we use an MI Neural Estimator to empirically
evaluate the privacy leakage under different FL setups on both the MNIST and
CIFAR10 datasets. Our experiments verify our theoretical bounds for FedSGD,
which show a reduction in privacy leakage as the number of users and local
batch size grow, and an increase in privacy leakage with the number of training
rounds.
Comment: Accepted to appear in Proceedings on Privacy Enhancing Technologies (PoPETs) 202
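The guarantee SA provides, that the server learns only the aggregate, can be illustrated with a toy pairwise-masking scheme: every pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum. This is a sketch of the general idea, not the exact protocol analyzed in the paper.

```python
import random

def secure_aggregate(updates, seed=0):
    """Pairwise-mask each client's update; the masks cancel in the sum,
    so the server recovers only the aggregate (FedSGD-style)."""
    rng = random.Random(seed)
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = [rng.uniform(-1, 1) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] += mask[k]   # client i adds the shared mask
                masked[j][k] -= mask[k]   # client j subtracts the same mask
    # The server only ever sees `masked`; each entry looks random,
    # but the coordinate-wise sum equals the sum of the raw updates.
    return [sum(m[k] for m in masked) for k in range(dim)]

updates = [[1.0, 2.0], [3.0, -1.0], [0.5, 0.5]]
print(secure_aggregate(updates))  # ~[4.5, 1.5], the unmasked sum
```

The paper's question is precisely what this aggregate still reveals about each individual update, which it bounds via mutual information.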
LOKI: Large-scale Data Reconstruction Attack against Federated Learning through Model Manipulation
Federated learning was introduced to enable machine learning over large
decentralized datasets while promising privacy by eliminating the need for data
sharing. Despite this, prior work has shown that shared gradients often contain
private information and attackers can gain knowledge either through malicious
modification of the architecture and parameters or by using optimization to
approximate user data from the shared gradients. However, prior data
reconstruction attacks have been limited in setting and scale, as most works
target FedSGD and limit the attack to single-client gradients. Many of these
attacks fail in the more practical setting of FedAVG or if updates are
aggregated together using secure aggregation. Data reconstruction becomes
significantly more difficult, resulting in limited attack scale and/or
decreased reconstruction quality. When both FedAVG and secure aggregation are
used, no existing method can attack multiple clients
concurrently in a federated learning setting. In this work, we introduce LOKI,
an attack that overcomes previous limitations and also breaks the anonymity of
aggregation, as the leaked data is identifiable and directly tied back to the
clients it came from. Our design sends clients customized convolutional
parameters, and the weight gradients of data points between clients remain
separate even through aggregation. With FedAVG and aggregation across 100
clients, prior work can leak less than 1% of images on MNIST, CIFAR-100, and
Tiny ImageNet. Using only a single training round, LOKI is able to leak 76-86%
of all data samples.
Comment: To appear in the IEEE Symposium on Security & Privacy (S&P) 202
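The kind of gradient leakage such attacks build on is visible in a few lines: for a linear layer y = Wx + b, the gradient of row i of W is dy_i * x and the gradient of b_i is dy_i, so dividing one by the other recovers the input exactly. A minimal sketch with illustrative values:

```python
def fc_gradients(x, dy):
    """Backward pass of a linear layer y = W x + b for upstream gradient dy:
    dW[i][j] = dy[i] * x[j]  and  db[i] = dy[i]."""
    dW = [[dy_i * x_j for x_j in x] for dy_i in dy]
    db = list(dy)
    return dW, db

def reconstruct_input(dW, db, eps=1e-12):
    """Any row i with db[i] != 0 reveals the input: x[j] = dW[i][j] / db[i]."""
    for i, db_i in enumerate(db):
        if abs(db_i) > eps:
            return [w_ij / db_i for w_ij in dW[i]]
    return None

x = [0.2, -1.3, 4.0]                  # a single "private" training example
dW, db = fc_gradients(x, [0.7, -0.1])
print(reconstruct_input(dW, db))      # recovers x up to float rounding
```

With a batch, rows of dW become sums over examples and the division no longer isolates one input; LOKI's contribution is keeping per-client gradients separable even through FedAVG and secure aggregation.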
The Resource Problem of Using Linear Layer Leakage Attack in Federated Learning
Secure aggregation promises a heightened level of privacy in federated
learning, maintaining that a server only has access to a decrypted aggregate
update. Within this setting, linear layer leakage methods are the only data
reconstruction attacks able to scale and achieve a high leakage rate regardless
of the number of clients or batch size. This is done through increasing the
size of an injected fully-connected (FC) layer. However, this results in a
resource overhead which grows larger with an increasing number of clients. We
show that this resource overhead is caused by an incorrect perspective in all
prior work that treats an attack on an aggregate update in the same way as an
individual update with a larger batch size. Instead, by attacking the update
from the perspective that aggregation is combining multiple individual updates,
this allows the application of sparsity to alleviate resource overhead. We show
that the use of sparsity can decrease the model size overhead by over
327× and the computation time by 3.34× compared to SOTA, while
maintaining an equivalent total leakage rate (77%) even with many clients in
aggregation.
Comment: Accepted to CVPR 202
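Toy arithmetic behind the sparsity savings, not the paper's exact construction: if the injected layer only needs one dense block per client rather than a fully dense weight matrix over the whole aggregate, the nonzero parameter count grows linearly instead of quadratically with the number of clients.

```python
def dense_params(c, m, d):
    """Weights of a fully dense (c*m) x (c*d) layer sized for the aggregate."""
    return (c * m) * (c * d)

def block_diagonal_params(c, m, d):
    """One dense m x d block per client; off-block weights are zero
    and need not be stored or transmitted."""
    return c * (m * d)

for c in (10, 100):
    ratio = dense_params(c, 64, 256) // block_diagonal_params(c, 64, 256)
    print(f"{c} clients -> {ratio}x fewer stored parameters")
```

The overhead ratio equals the client count, which is why treating the aggregate as many individual updates, rather than one update with a huge batch, pays off at scale.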
Edge detection based on morphological amoebas
Detecting the edges of objects within images is critical for quality image
processing. We present an edge-detecting technique that uses morphological
amoebas that adjust their shape based on variation in image contours. We
evaluate the method both quantitatively and qualitatively for edge detection of
images, and compare it to classic morphological methods. Our amoeba-based
edge-detection system performed better than the classic edge detectors.
Comment: To appear in The Imaging Science Journal
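A morphological amoeba is a structuring element whose shape adapts to image contours: each step between neighboring pixels is charged a unit cost plus a contrast penalty, so the amoeba spreads on flat regions and stops at edges. A minimal sketch of the standard amoeba distance (step cost 1 + lam * |intensity difference|); the lam and radius values are illustrative.

```python
import heapq

def amoeba_neighborhood(img, seed, lam, radius):
    """Pixels whose amoeba distance from `seed` is <= radius, computed
    with Dijkstra over 4-connected neighbors."""
    h, w = len(img), len(img[0])
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    out = set()
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist.get((y, x), float("inf")):
            continue  # stale heap entry
        out.add((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 1 + lam * abs(img[ny][nx] - img[y][x])
                if nd <= radius and nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return out

# A vertical edge: flat left half (intensity 0), flat right half (200).
img = [[0] * 3 + [200] * 3 for _ in range(5)]
amoeba = amoeba_neighborhood(img, seed=(2, 1), lam=0.1, radius=2.5)
# Crossing the edge would cost 1 + 0.1 * 200 = 21 > 2.5,
# so the amoeba stays entirely on the flat left side.
print(sorted(amoeba))
```

Erosion or dilation with such neighborhoods smooths flat regions while preserving edges, which is what the paper exploits for edge detection.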
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
Transfer learning via fine-tuning pre-trained transformer models has gained
significant success in delivering state-of-the-art results across various NLP
tasks. In the absence of centralized data, Federated Learning (FL) can benefit
from distributed and private data of the FL edge clients for fine-tuning.
However, due to the limited communication, computation, and storage
capabilities of edge devices and the huge sizes of popular transformer models,
efficient fine-tuning is crucial to make federated training feasible. This work
explores the opportunities and challenges associated with applying parameter
efficient fine-tuning (PEFT) methods in different FL settings for language
tasks. Specifically, our investigation reveals that as the data across users
becomes more diverse, the gap between fully fine-tuning the model and employing
PEFT methods widens. To bridge this performance gap, we propose a method called
SLoRA, which overcomes the key limitations of LoRA in highly heterogeneous data
scenarios through a novel data-driven initialization technique. Our
experimental results demonstrate that SLoRA achieves performance comparable to
full fine-tuning, with highly sparse updates at low parameter density, while
substantially reducing training time.
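SLoRA builds on LoRA, which freezes the base weight W and learns a low-rank update B·A, shrinking the trainable parameter count from d_out * d_in to r * (d_in + d_out). A sketch of the generic LoRA structure and its parameter savings; SLoRA's data-driven initialization is not reproduced here, and the dimensions are illustrative.

```python
def matmul(X, Y):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_params(d_out, d_in, r):
    """Trainable parameters: full fine-tuning vs. a rank-r LoRA update."""
    return d_out * d_in, r * (d_in + d_out)

# Effective weight after adaptation: W_eff = W + B @ A
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 base weight
B = [[0.5], [-0.5]]            # d_out x r, with r = 1
A = [[2.0, 4.0]]               # r x d_in
BA = matmul(B, A)
W_eff = [[w + d for w, d in zip(wr, dr)] for wr, dr in zip(W, BA)]
print(W_eff)                   # [[2.0, 2.0], [-1.0, -1.0]]

full, low_rank = lora_params(768, 768, 8)
print(full, low_rank, full // low_rank)  # 589824 12288 48
```

Only B and A are communicated in federated fine-tuning, which is what makes PEFT attractive for bandwidth-limited edge clients.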
A Novel Classification of Lung Cancer into Molecular Subtypes
The remarkably heterogeneous nature of lung cancer has become more apparent over the last decade. In general, advanced lung cancer is an aggressive malignancy with a poor prognosis. The discovery of multiple molecular mechanisms underlying the development, progression, and prognosis of lung cancer, however, has created new opportunities for targeted therapy and improved outcomes. In this paper, we define “molecular subtypes” of lung cancer based on specific actionable genetic aberrations. Each subtype is associated with molecular tests that define the subtype and drugs that may potentially treat it. We hope this paper will be a useful guide to clinicians and researchers alike by assisting in therapy decision making and acting as a platform for further study. In this new era of cancer treatment, the ‘one-size-fits-all’ paradigm is being forcibly pushed aside, allowing more effective, personalized oncologic care to emerge.
Global impact of COVID-19 on stroke care and IV thrombolysis
Objective: To measure the global impact of the COVID-19 pandemic on volumes of IV thrombolysis (IVT), IVT transfers, and stroke hospitalizations over 4 months at the height of the pandemic (March 1 to June 30, 2020) compared with two control 4-month periods.
Methods: We conducted a cross-sectional, observational, retrospective study across 6 continents, 70 countries, and 457 stroke centers. Diagnoses were identified by their ICD-10 codes or classifications in stroke databases.
Results: There were 91,373 stroke admissions in the 4 months immediately before the pandemic compared to 80,894 admissions during the pandemic months, an 11.5% decline (95% confidence interval [CI] -11.7 to -11.3, p < 0.0001). There were 13,334 IVT therapies in the 4 months preceding compared to 11,570 during the pandemic, a 13.2% drop (95% CI -13.8 to -12.7, p < 0.0001). Interfacility IVT transfers decreased from 1,337 to 1,178, an 11.9% decrease (95% CI -13.7 to -10.3, p = 0.001). Recovery of stroke hospitalization volume (9.5%, 95% CI 9.2-9.8, p < 0.0001) was noted over the two later (May, June) versus the two earlier (March, April) pandemic months. There was a 1.48% stroke rate across 119,967 COVID-19 hospitalizations. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection was noted in 3.3% (1,722/52,026) of all stroke admissions.
Conclusions: The COVID-19 pandemic was associated with a global decline in the volume of stroke hospitalizations, IVT, and interfacility IVT transfers. Primary stroke centers and centers with higher COVID-19 inpatient volumes experienced steeper declines. Recovery of stroke hospitalization was noted in the later pandemic months.
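The reported percentage declines follow directly from the admission counts in the abstract; a quick check of the arithmetic:

```python
def pct_decline(before, after):
    """Percent decline from `before` to `after`."""
    return 100.0 * (before - after) / before

# Counts taken from the abstract above:
print(round(pct_decline(91373, 80894), 1))  # stroke admissions: 11.5
print(round(pct_decline(13334, 11570), 1))  # IVT therapies:     13.2
print(round(pct_decline(1337, 1178), 1))    # IVT transfers:     11.9
```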
Nurses' perceptions of aids and obstacles to the provision of optimal end of life care in ICU