Differential Privacy of Aggregated DC Optimal Power Flow Data
We consider the problem of privately releasing aggregated network statistics
obtained from solving a DC optimal power flow (OPF) problem. It is shown that
the noise distribution parameters of the release mechanism are linked to the
topology of the power system and the monotonicity of the network. We derive a
measure of "almost" monotonicity and show how it can be used in conjunction
with a linear program in order to release aggregated OPF data under the
differential privacy framework.

Comment: Accepted by 2019 American Control Conference (ACC)
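The standard building block behind such releases is the Laplace mechanism: noise scaled to the query's sensitivity over the privacy budget is added to the aggregate before release. The sketch below is a generic illustration of that mechanism, not the paper's topology-aware construction, and the sensitivity value is a made-up example:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a noisy statistic satisfying epsilon-differential privacy.

    The noise scale is sensitivity / epsilon: the more a single participant
    can influence the aggregate (sensitivity), or the stricter the privacy
    budget (smaller epsilon), the more noise is added.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: an aggregate zonal flow of 120 MW, where one customer's load can
# change the aggregate by at most 5 MW (hypothetical sensitivity).
rng = np.random.default_rng(0)
noisy = laplace_mechanism(true_value=120.0, sensitivity=5.0, epsilon=1.0, rng=rng)
```

Smaller epsilon buys stronger privacy at the cost of a noisier released statistic; calibrating that trade-off to the grid topology is the paper's contribution.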
Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
Machine Learning (ML) algorithms are used to train computers to perform a
variety of complex tasks and improve with experience. Computers learn how to
recognize patterns, make unintended decisions, or react to a dynamic
environment. Certain trained machines may be more effective than others because
they are based on more suitable ML algorithms or because they were trained
through superior training sets. Although ML algorithms are known and publicly
released, training sets may not be reasonably ascertainable and, indeed, may be
guarded as trade secrets. While much research has been performed about the
privacy of the elements of training sets, in this paper we focus our attention
on ML classifiers and on the statistical information that can be unconsciously
or maliciously revealed from them. We show that it is possible to infer
unexpected but useful information from ML classifiers. In particular, we build
a novel meta-classifier and train it to hack other classifiers, obtaining
meaningful information about their training sets. This kind of information
leakage can be exploited, for example, by a vendor to build more effective
classifiers or to simply acquire trade secrets from a competitor's apparatus,
potentially violating its intellectual property rights.
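The attack idea can be illustrated with a toy property-inference sketch: train many "shadow" models on datasets with and without a hidden property, treat each model's fitted parameters as a feature vector, and train a meta-classifier on those vectors. Everything below (synthetic Gaussian data, a nearest-centroid "model", the property being a mean shift) is an assumption for illustration, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_shadow(has_property):
    """Train a toy nearest-centroid classifier on synthetic data.

    When the (hypothetical) property is present, class-1 samples are drawn
    with a larger mean shift; the fitted centroids are the model's
    "released" parameters."""
    shift = 1.5 if has_property else 0.5
    X0 = rng.normal(0.0, 1.0, (200, 4))
    X1 = rng.normal(shift, 1.0, (200, 4))
    # The parameter vector an attacker can inspect: both class centroids.
    return np.concatenate([X0.mean(axis=0), X1.mean(axis=0)])

def train_meta(n_shadows=60):
    """Meta-classifier: learn, from shadow-model parameters, whether the
    property was present in that shadow's training set."""
    labels = rng.integers(0, 2, n_shadows)
    params = np.array([train_shadow(bool(l)) for l in labels])
    c0 = params[labels == 0].mean(axis=0)  # "property absent" centroid
    c1 = params[labels == 1].mean(axis=0)  # "property present" centroid
    return c0, c1

def meta_predict(c0, c1, victim_params):
    d0 = np.linalg.norm(victim_params - c0)
    d1 = np.linalg.norm(victim_params - c1)
    return int(d1 < d0)

c0, c1 = train_meta()
# Attack a "victim" model: infer the hidden training-set property
victim = train_shadow(has_property=True)
guess = meta_predict(c0, c1, victim)  # 1 means "property inferred present"
```

The point the abstract makes survives even in this toy form: the trained parameters alone carry statistical information about the training set, with no access to the data itself.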
Secure and Privacy-Preserving Data Aggregation Protocols for Wireless Sensor Networks
This chapter discusses the need of security and privacy protection mechanisms
in aggregation protocols used in wireless sensor networks (WSN). It presents a
comprehensive state of the art discussion on the various privacy protection
mechanisms used in WSNs and particularly focuses on the CPDA protocols proposed
by He et al. (INFOCOM 2007). It identifies a security vulnerability in the CPDA
protocol and proposes a mechanism to plug that vulnerability. To demonstrate
the need of security in aggregation process, the chapter further presents
various threats in WSN aggregation mechanisms. A large number of existing
protocols for secure aggregation in WSN are discussed briefly and a protocol is
proposed for secure aggregation which can detect false data injected by
malicious nodes in a WSN. The performance of the protocol is also presented.
The chapter concludes while highlighting some future directions of research in
secure data aggregation in WSNs.

Comment: 32 pages, 7 figures, 3 tables
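The privacy idea behind CPDA-style aggregation can be sketched with additive secret sharing: each node splits its reading into random shares, so no single reported value reveals a reading, yet the shares sum to the true aggregate. This is a simplified illustration with hypothetical readings; the actual CPDA protocol of He et al. is cluster-based and considerably more involved:

```python
import random

def make_shares(value, n_shares, rng):
    """Split a private reading into n additive shares: no single share by
    itself reveals the reading, but the full set sums back to it."""
    shares = [rng.uniform(-100.0, 100.0) for _ in range(n_shares - 1)]
    shares.append(value - sum(shares))
    return shares

rng = random.Random(42)
readings = [21.5, 19.0, 23.2]  # private sensor values (hypothetical)
n = len(readings)

# Node i splits its reading and sends share j to node j.
all_shares = [make_shares(v, n, rng) for v in readings]
# Node j reports only the sum of the shares it received ...
reported = [sum(all_shares[i][j] for i in range(n)) for j in range(n)]
# ... and the aggregator recovers the total without seeing any one reading.
aggregate = sum(reported)  # approximately 21.5 + 19.0 + 23.2 = 63.7
```

Note this sketch addresses only privacy; the false-data-injection threat the chapter discusses additionally requires integrity checks on what each node reports.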
Differentially Private Optimal Power Flow for Distribution Grids
Although distribution grid customers are obliged to share their consumption
data with distribution system operators (DSOs), a possible leakage of this data
is often disregarded in operational routines of DSOs. This paper introduces a
privacy-preserving optimal power flow (OPF) mechanism for distribution grids
that secures customer privacy from unauthorised access to OPF solutions, e.g.,
current and voltage measurements. The mechanism is based on the framework of
differential privacy, which makes it possible to control the participation
risks of individuals in a dataset by adding carefully calibrated noise to the
output of a computation. Unlike existing private mechanisms, this mechanism
does not apply the noise to the optimization parameters or to its result.
Instead, it
the correlation between the grid loads and OPF variables. To ensure feasibility
of the randomized OPF solution, the mechanism makes use of chance constraints
enforced on the grid limits. The mechanism is further extended to control the
optimality loss induced by the random noise, as well as the variance of OPF
variables. The paper shows that the differentially private OPF solution does
not leak customer loads beyond the specified privacy parameters.
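The interplay of affine noise response and chance constraints can be sketched numerically: an OPF variable responds linearly to the privacy noise, g(xi) = g0 + alpha*xi, and a quantile margin is reserved so the limit holds with high probability. All parameter values below are hypothetical, and the paper embeds this logic inside a full OPF optimization rather than a closed-form margin:

```python
import numpy as np

# Hypothetical parameter values for illustration:
epsilon, sensitivity = 1.0, 5.0   # DP budget and per-customer sensitivity
b = sensitivity / epsilon         # scale of the Laplace privacy noise xi
eta = 0.05                        # allowed constraint-violation probability
g_max, alpha = 100.0, 1.0         # generator limit, affine response slope

# For xi ~ Laplace(0, b): P(|xi| > t) = exp(-t / b). Reserving the margin
# t = b * ln(1/eta) keeps g(xi) = g0 + alpha * xi within g_max with
# probability at least 1 - eta (a chance constraint on the grid limit).
margin = abs(alpha) * b * np.log(1.0 / eta)
g0 = g_max - margin               # largest nominal setpoint still feasible

# Empirical check of the chance constraint:
rng = np.random.default_rng(0)
xi = rng.laplace(0.0, b, 100_000)
violation_rate = np.mean(g0 + alpha * xi > g_max)  # about 0.5 * eta here
```

In the paper's setting, the coefficients of the affine policy are themselves decision variables, so the optimizer trades the reserved margin (optimality loss) against the variance of the randomized OPF solution.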