Virtual Machine Images Preconfigured with Security Scripts for Data Protection and Alerting
Developers use interactive development environments (IDEs) to create and share documents that contain live code, equations, visualizations, narrative text, etc., as part of the artificial intelligence/machine learning (AI/ML) development process. Virtual machines (VMs) that run IDEs may have access to private and/or sensitive data used for model training or during model use. For data security and compliance, it is necessary to highlight and track the VMs that have been in contact with sensitive information. This disclosure describes techniques to automatically identify and label the presence of sensitive data in virtual machines and disks as part of machine learning workflows. Custom VM images are provided that include data-scanning scripts that can identify the presence of sensitive data during or after usage, e.g., by a developer using an IDE. The scripts can automatically log the presence of data and generate alerts. Users of such virtual machines are provided additional controls to perform the training process in a secure and confidential manner, in compliance with applicable data regulations.
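As an illustration, a data-scanning script of the kind such a VM image might bundle could look like the following minimal sketch. The regex patterns, category names, and function names here are hypothetical illustrations, not taken from the disclosure; a real deployment would rely on a vetted data-loss-prevention library and locale-specific rules.

```python
import re

# Hypothetical detection patterns for common sensitive-data categories.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_text(text):
    """Return the set of sensitive-data categories detected in `text`."""
    return {name for name, pattern in PATTERNS.items() if pattern.search(text)}

def scan_file(path):
    """Scan a single file, ignoring unreadable files and decode errors."""
    try:
        with open(path, "r", errors="ignore") as fh:
            return scan_text(fh.read())
    except OSError:
        return set()

# Example: a notebook cell output containing an email address
found = scan_text("reach me at jane.doe@example.com")
```

A script like this could run periodically on the VM, log any non-empty result, and raise an alert so the VM can be labeled as having touched sensitive data.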
Privacy-Preserving Medical Image Classification through Deep Learning and Matrix Decomposition
Deep learning (DL)-based solutions have been extensively researched in the
medical domain in recent years, enhancing the efficacy of diagnosis, planning,
and treatment. Since the usage of health-related data is strictly regulated,
processing medical records outside the hospital environment for developing and
using DL models demands robust data protection measures. At the same time, it
can be challenging to guarantee that a DL solution delivers a minimum level of
performance when being trained on secured data, without being specifically
designed for the given task. Our approach uses singular value decomposition
(SVD) and principal component analysis (PCA) to obfuscate the medical images
before employing them in the DL analysis. The capability of DL algorithms to
extract relevant information from secured data is assessed on a task of
angiographic view classification based on obfuscated frames. The security level
is probed by simulated artificial intelligence (AI)-based reconstruction
attacks, considering two threat actors with different prior knowledge of the
targeted data. The degree of privacy is quantitatively measured using
similarity indices. Although a trade-off between privacy and accuracy should be
considered, the proposed technique allows for training the angiographic view
classifier exclusively on secured data with satisfactory performance and with
no computational overhead, model adaptation, or hyperparameter tuning. While
the obfuscated medical image content is well protected against human
perception, the simulated reconstruction attacks showed that it is also
difficult to recover the complete information of the original frames.

Comment: 6 pages, 9 figures. Published in: 2023 31st Mediterranean Conference
on Control and Automation (MED)
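The SVD-based obfuscation described above can be sketched as a rank truncation of each grayscale frame. This is a minimal illustration under assumptions: the `keep` parameter and the exact transform are illustrative, and the paper may combine SVD and PCA differently than shown here.

```python
import numpy as np

def svd_obfuscate(image, keep=8):
    """Obfuscate a grayscale frame by rank truncation: retain only the
    top-`keep` singular components and zero out the rest. Smaller `keep`
    means stronger obfuscation but less content for the classifier."""
    U, s, Vt = np.linalg.svd(image.astype(float), full_matrices=False)
    s_trunc = np.zeros_like(s)
    s_trunc[:keep] = s[:keep]          # keep only the leading components
    return (U * s_trunc) @ Vt          # low-rank reconstruction

# Toy usage on a random stand-in for an angiographic frame
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
obfuscated = svd_obfuscate(frame, keep=4)
```

The downstream classifier is then trained directly on such low-rank frames, which is what makes the approach free of model adaptation: only the input representation changes.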
Dataset Obfuscation: Its Applications to and Impacts on Edge Machine Learning
Obfuscating a dataset by adding random noise to protect the privacy of
sensitive samples in the training dataset is crucial to prevent data leakage to
untrusted parties in edge applications. We conduct comprehensive experiments
to investigate how the dataset obfuscation can affect the resultant model
weights - in terms of the model accuracy, Frobenius-norm (F-norm)-based model
distance, and level of data privacy - and discuss the potential applications
with the proposed Privacy, Utility, and Distinguishability (PUD)-triangle
diagram to visualize the requirement preferences. Our experiments are based on
the popular MNIST and CIFAR-10 datasets under both independent and identically
distributed (IID) and non-IID settings. Significant results include a trade-off
between model accuracy and privacy level, and a trade-off between model
difference and privacy level. The results indicate broad application prospects
for training outsourcing in edge computing and guarding against attacks in
Federated Learning among edge devices.

Comment: 6 pages
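A minimal sketch of the two quantities above, assuming zero-mean Gaussian noise for the obfuscation (the paper's exact noise distribution and scale are not specified here, and the names below are illustrative):

```python
import numpy as np

def obfuscate_dataset(X, sigma=0.1, seed=0):
    """Add zero-mean Gaussian noise of scale `sigma` to every sample.
    Larger sigma gives stronger obfuscation (privacy) but typically
    lower downstream model accuracy (utility)."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(0.0, sigma, size=X.shape)

def model_distance(w_a, w_b):
    """Frobenius-norm distance between two weight matrices, e.g. a model
    trained on clean data vs. one trained on obfuscated data."""
    return np.linalg.norm(w_a - w_b)

# Example: obfuscate a batch of 100 flattened 32-dimensional samples
X_obf = obfuscate_dataset(np.zeros((100, 32)), sigma=0.1)
```

Sweeping `sigma` and plotting accuracy, model distance, and a privacy measure against each other is the kind of analysis the PUD-triangle diagram summarizes.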
Brain–Machine Interfaces: The Role of the Neurosurgeon
The neurotechnology field is set to expand rapidly in the coming years as technological innovations in hardware and software are translated to the clinical setting. Given our unique access to patients with neurological disorders, expertise with which to guide appropriate treatments, and technical skills to implant brain-machine interfaces (BMIs), neurosurgeons have a key role to play in the progress of this field.
We outline the current state of and key challenges in this rapidly advancing field, including implant technology, implant recipients, implantation methodology, implant function, and ethical, regulatory, and economic considerations. Our key message is to encourage the neurosurgical community to proactively engage in collaborating with other healthcare professionals, engineers, scientists, ethicists, and regulators in tackling these issues. By doing so, we will equip ourselves with the skills and expertise to drive the field forward and avoid being mere technicians in an industry driven by those around us.
Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
Today's success of state-of-the-art methods for semantic segmentation is
driven by large datasets. Data is considered an important asset that needs to
be protected, as the collection and annotation of such datasets comes at
significant effort and cost. In addition, visual data might contain private or
sensitive information, which makes it equally unsuited for public release.
Unfortunately, recent work on membership inference in the
public release. Unfortunately, recent work on membership inference in the
broader area of adversarial machine learning and inference attacks on machine
learning models has shown that even black box classifiers leak information on
the dataset that they were trained on. We show that such membership inference
attacks can be successfully carried out on complex, state of the art models for
semantic segmentation. In order to mitigate the associated risks, we also study
a series of defenses against such membership inference attacks and find
effective counter measures against the existing risks with little effect on the
utility of the segmentation method. Finally, we extensively evaluate our
attacks and defenses on a range of relevant real-world datasets: Cityscapes,
BDD100K, and Mapillary Vistas.Comment: Accepted to ECCV 2020. Code at:
https://github.com/SSAW14/segmentation_membership_inferenc
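To make the general idea of membership inference concrete, here is a toy sketch on a plain linear classifier rather than a segmentation model (attacking structured per-pixel outputs, as in the paper, is considerably more involved). All names and the attack signal below are illustrative assumptions, not the paper's method: an overparameterized model fits its training set exactly, so how closely its output matches a hard label acts as a membership signal.

```python
import numpy as np

# Toy membership inference against an overfit linear model. With more
# parameters (200) than training samples (50), the least-squares fit
# interpolates the training labels exactly.
rng = np.random.default_rng(0)
X_member = rng.normal(size=(50, 200))       # samples seen during training
y_member = rng.choice([-1.0, 1.0], size=50)
X_nonmember = rng.normal(size=(50, 200))    # samples never seen

# Minimum-norm least-squares solution: exact on the training set.
w, *_ = np.linalg.lstsq(X_member, y_member, rcond=None)

# Membership score: distance of the model output from a hard +/-1 label.
# Members score ~0; non-members typically do not.
score_member = np.abs(np.abs(X_member @ w) - 1.0)
score_nonmember = np.abs(np.abs(X_nonmember @ w) - 1.0)
```

Thresholding such a score separates members from non-members; defenses of the kind the paper studies aim to shrink the gap between the two score distributions without degrading the model's utility.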