34,485 research outputs found
Algorithms that Remember: Model Inversion Attacks and Data Protection Law
Many individuals are concerned about the governance of machine learning
systems and the prevention of algorithmic harms. The EU's recent General Data
Protection Regulation (GDPR) has been seen as a core tool for achieving better
governance of this area. While the GDPR does apply to the use of models in some
limited situations, most of its provisions relate to the governance of personal
data, while models have traditionally been seen as intellectual property. We
present recent work from the information security literature around `model
inversion' and `membership inference' attacks, which indicate that the process
of turning training data into machine learned systems is not one-way, and
demonstrate how this could lead some models to be legally classified as
personal data. Taking this as a probing experiment, we explore the different
rights and obligations this would trigger and their utility, and posit future
directions for algorithmic governance and regulation.
Comment: 15 pages, 1 figure
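As a concrete illustration of why the training-data-to-model process is not one-way, here is a minimal sketch of a confidence-thresholding membership inference test (the random-forest classifier, the toy random data, and the 0.9 threshold are illustrative assumptions, not details from the paper): because models tend to be more confident on examples they were trained on, an attacker who can query the model can guess whether a given record was part of the training set.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch: guess "member" when the predicted-class confidence is high.
# The model, data, and threshold are illustrative, not taken from the paper.
def membership_guess(model, x, threshold=0.9):
    return model.predict_proba(x.reshape(1, -1)).max() >= threshold

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 10)), rng.integers(0, 2, size=200)
X_non_members = rng.normal(size=(200, 10))

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
member_rate = np.mean([membership_guess(model, x) for x in X_train])
non_member_rate = np.mean([membership_guess(model, x) for x in X_non_members])
print(f"flagged as members: {member_rate:.2f} (training data) vs {non_member_rate:.2f} (fresh data)")

A large gap between the two rates is exactly the kind of leakage that motivates asking whether a model itself should be classified as personal data.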
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Convolutional neural networks have been widely deployed in various
application scenarios. To extend these applications to accuracy-critical
domains, researchers have investigated deeper or wider network structures to
boost accuracy, which brings an exponential increase in computational and
storage cost and lengthens response time. In this paper, we propose a general
training framework named self distillation, which notably enhances the
performance (accuracy) of convolutional neural networks by shrinking the size
of the network rather than enlarging it. Unlike traditional knowledge
distillation, a knowledge transfer method between networks that forces student
neural networks to approximate the softmax-layer outputs of pre-trained teacher
neural networks, the proposed self distillation framework distills knowledge
within the network itself: the network is first divided into several sections,
and the knowledge in the deeper sections is then squeezed into the shallower
ones. Experiments further demonstrate the generality of the proposed self
distillation framework: the average accuracy improvement is 2.65%, ranging from
a minimum of 0.61% on ResNeXt to a maximum of 4.07% on VGG19. In addition, it
provides the flexibility of depth-wise scalable inference on resource-limited
edge devices. Our code will be released on GitHub soon.
Comment: 10 pages
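A minimal PyTorch sketch of the mechanism described above, under illustrative assumptions (a toy two-section network, a temperature of 3, and a distillation weight of 0.5; the paper's actual architectures and losses may differ): an auxiliary classifier on the shallow section is trained against both the labels and the softened outputs of the deepest classifier, so knowledge from the deeper section is squeezed into the shallower one.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoSectionNet(nn.Module):
    # Hypothetical two-section CNN with an auxiliary head on the shallow section.
    def __init__(self, num_classes=10):
        super().__init__()
        self.section1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.section2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head_shallow = nn.Linear(16, num_classes)   # auxiliary classifier
        self.head_deep = nn.Linear(32, num_classes)      # final classifier

    def forward(self, x):
        h1 = self.section1(x)
        h2 = self.section2(h1)
        logits_shallow = self.head_shallow(self.pool(h1).flatten(1))
        logits_deep = self.head_deep(self.pool(h2).flatten(1))
        return logits_shallow, logits_deep

def self_distillation_loss(logits_shallow, logits_deep, labels, T=3.0, alpha=0.5):
    # Both heads learn from the labels; the shallow head additionally matches
    # the deepest head's softened distribution (knowledge squeezed from the
    # deeper section into the shallower one).
    ce = F.cross_entropy(logits_deep, labels) + F.cross_entropy(logits_shallow, labels)
    kd = F.kl_div(F.log_softmax(logits_shallow / T, dim=1),
                  F.softmax(logits_deep.detach() / T, dim=1),
                  reduction="batchmean") * (T * T)
    return ce + alpha * kd

# Toy usage on random data.
net = TwoSectionNet()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = self_distillation_loss(*net(x), y)
loss.backward()

At inference time, the auxiliary head alone can serve predictions, which is one way to realize the depth-wise scalable inference on edge devices mentioned above.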
DeepSecure: Scalable Provably-Secure Deep Learning
This paper proposes DeepSecure, a novel framework that enables scalable
execution of the state-of-the-art Deep Learning (DL) models in a
privacy-preserving setting. DeepSecure targets scenarios in which neither of
the involved parties, the cloud server that holds the DL model parameters nor
the delegating clients who own the data, is willing to reveal their
information. Our framework is the first to enable accurate and scalable DL
analysis of data generated by distributed clients without sacrificing security
for efficiency. The secure DL computation in DeepSecure is
performed using Yao's Garbled Circuit (GC) protocol. We devise GC-optimized
realizations of various components used in DL. Our optimized implementation
achieves more than 58-fold higher throughput per sample compared with the
best-known prior solution. In addition to our optimized GC realization, we
introduce a set of novel low-overhead pre-processing techniques which further
reduce the GC overall runtime in the context of deep learning. Extensive
evaluations of various DL applications demonstrate up to two orders of
magnitude of additional runtime improvement achieved as a result of our
pre-processing methodology. The paper also provides mechanisms for securely
delegating GC computations to a third party in constrained embedded settings.
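A minimal sketch of the garbled-circuit primitive DeepSecure builds on, reduced to a single AND gate (the hash-based encryption and the try-every-row evaluation are simplifying assumptions for illustration; DeepSecure's GC-optimized DL components are far more elaborate): the garbler picks two random labels per wire and encrypts each output-wire label under the corresponding pair of input-wire labels, so an evaluator holding one label per input learns only the label of the true output.

import hashlib, secrets

LABEL = 16           # label length in bytes
TAG = bytes(LABEL)   # zero tag so the evaluator can recognise a valid decryption

def enc(ka, kb, msg):
    pad = hashlib.sha256(ka + kb).digest()
    return bytes(m ^ p for m, p in zip(msg + TAG, pad))

def dec(ka, kb, ct):
    pad = hashlib.sha256(ka + kb).digest()
    pt = bytes(c ^ p for c, p in zip(ct, pad))
    return pt[:LABEL] if pt[LABEL:] == TAG else None

# Garbler: two random labels per wire, one encoding 0 and one encoding 1.
wire_a = [secrets.token_bytes(LABEL) for _ in range(2)]
wire_b = [secrets.token_bytes(LABEL) for _ in range(2)]
wire_out = [secrets.token_bytes(LABEL) for _ in range(2)]
table = [enc(wire_a[a], wire_b[b], wire_out[a & b])
         for a in range(2) for b in range(2)]
secrets.SystemRandom().shuffle(table)   # hide which row corresponds to which inputs

# Evaluator: holds exactly one label per input wire (here the labels for 1 AND 1),
# tries each row, and learns only the output label for the true result.
label_a, label_b = wire_a[1], wire_b[1]
result = None
for ct in table:
    pt = dec(label_a, label_b, ct)
    if pt is not None:
        result = pt
assert result == wire_out[1]

Scaling this idea from one gate to entire DL models is where the GC-optimized components and pre-processing techniques described in the abstract come in.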
CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code
We present an instrumenting compiler for enforcing data confidentiality in
low-level applications (e.g. those written in C) in the presence of an active
adversary. In our approach, the programmer marks secret data by writing
lightweight annotations on top-level definitions in the source code. The
compiler then uses a static flow analysis coupled with efficient runtime
instrumentation, a custom memory layout, and custom control-flow integrity
checks to prevent data leaks even in the presence of low-level attacks. We have
implemented our scheme as part of the LLVM compiler. We evaluate it on the SPEC
micro-benchmarks for performance, and on larger, real-world applications
(including OpenLDAP, which is around 300 KLoC) for the programmer overhead
required to restructure the application to protect sensitive data such as
passwords. We find that the performance overheads introduced by our
instrumentation are moderate (12% on average on SPEC), and the programmer effort to port OpenLDAP
is only about 160 LoC.
Comment: Technical report for CONFLLVM: A Compiler for Enforcing Data Confidentiality in Low-Level Code, appearing at EuroSys 2019
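A minimal sketch, in Python over a toy three-address program rather than LLVM IR, of the kind of static flow check described above (the instruction format, the password annotation, and the sink names are illustrative assumptions, not CONFLLVM's actual representation): secrecy is propagated through assignments, and any flow of a secret-derived value into a public sink is rejected.

# Hypothetical sketch of a static confidentiality check: variables marked as
# secret are propagated through assignments, and passing a tainted value to a
# public sink is flagged. The IR and annotations are illustrative only.
SECRET_ANNOTATIONS = {"password"}            # programmer-marked secrets
PUBLIC_SINKS = {"log", "send_unencrypted"}   # calls visible to the adversary

# Each instruction is (op, target, operands): "assign" derives a new value,
# "call" invokes a sink with its operands.
program = [
    ("assign", "hash", ["password", "salt"]),
    ("call", "send_unencrypted", ["hash"]),
]

def check(program, secrets):
    tainted = set(secrets)
    for op, target, operands in program:
        if op == "assign":
            if tainted & set(operands):      # secrecy propagates to the target
                tainted.add(target)
        elif op == "call" and target in PUBLIC_SINKS:
            leaked = tainted & set(operands)
            if leaked:
                raise ValueError(f"secret-derived data {leaked} flows into {target}")

try:
    check(program, SECRET_ANNOTATIONS)
except ValueError as err:
    print("rejected:", err)   # hash is derived from password and would leak

CONFLLVM backs this kind of static analysis with runtime instrumentation, a custom memory layout, and control-flow integrity checks so that low-level attacks cannot bypass the compile-time guarantees.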