Hiding in the Crowd: A Massively Distributed Algorithm for Private Averaging with Malicious Adversaries
The amount of personal data collected in our everyday interactions with
connected devices offers great opportunities for innovative services fueled by
machine learning, but also raises serious concerns for the privacy of
individuals. In this paper, we propose a massively distributed protocol for a
large set of users to privately compute averages over their joint data, which
can then be used to learn predictive models. Our protocol can find a solution
of arbitrary accuracy, does not rely on a third party and preserves the privacy
of users throughout the execution in both the honest-but-curious and malicious
adversary models. Specifically, we prove that the information observed by the
adversary (the set of malicious users) does not significantly reduce the
uncertainty in its prediction of private values compared to its prior belief.
The level of privacy protection depends on a quantity related to the Laplacian
matrix of the network graph and generally improves with the size of the graph.
Furthermore, we design a verification procedure which offers protection against
malicious users joining the service with the goal of manipulating the outcome
of the algorithm.
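The sum-preserving masking idea behind such private averaging can be illustrated with a minimal sketch (a simplified illustration, not the paper's exact protocol): each pair of neighboring users exchanges a random offset that one adds and the other subtracts, so individual values are hidden while the average is untouched.

```python
import random

def mask_values(values, edges, scale=10.0):
    """Add pairwise-cancelling noise along graph edges.

    For each edge (i, j), a random offset is added to user i's value
    and subtracted from user j's, so the sum (and hence the average)
    of the masked values is unchanged, while each individual masked
    value reveals little about the original. Simplified sketch only;
    parameter names are illustrative.
    """
    masked = list(values)
    for i, j in edges:
        delta = random.gauss(0.0, scale)
        masked[i] += delta
        masked[j] -= delta
    return masked

random.seed(1)
values = [4.0, 8.0, 6.0, 2.0]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # ring graph over four users
masked = mask_values(values, edges)
true_avg = sum(values) / len(values)
masked_avg = sum(masked) / len(masked)
```

With more users and more edges per user, each masked value is "hidden in the crowd" of offsets, matching the intuition that privacy improves with the size of the graph.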
Distributed Differentially Private Averaging with Improved Utility and Robustness to Malicious Parties
Learning from data owned by several parties, as in federated learning, raises
challenges regarding the privacy guarantees provided to participants and the
correctness of the computation in the presence of malicious parties. We tackle
these challenges in the context of distributed averaging, an essential building
block of distributed and federated learning. Our first contribution is a novel
distributed differentially private protocol which naturally scales with the
number of parties. The key idea underlying our protocol is to exchange
correlated Gaussian noise along the edges of a network graph, complemented by
independent noise added by each party. We analyze the differential privacy
guarantees of our protocol and the impact of the graph topology, showing that
we can match the accuracy of the trusted curator model even when each party
communicates with only a logarithmic number of other parties chosen at random.
This is in contrast with protocols in the local model of privacy (with lower
accuracy) or based on secure aggregation (where all pairs of users need to
exchange messages). Our second contribution is to enable users to prove the
correctness of their computations without compromising the efficiency and
privacy guarantees of the protocol. Our construction relies on standard
cryptographic primitives like commitment schemes and zero-knowledge proofs.
A competing risks approach for nonparametric estimation of transition probabilities in a non-Markov illness-death model
Competing risks model time to first event and type of first event. An example
from hospital epidemiology is the incidence of hospital-acquired infection,
which has to account for hospital discharge of non-infected patients as a
competing risk. An illness-death model would allow further study of hospital
outcomes of infected patients. Such a model typically relies on a Markov
assumption. However, it is conceivable that the future course of an infected
patient depends not only on the time since hospital admission and current
infection status but also on the time since infection. We demonstrate how a
modified competing risks model can be used for nonparametric estimation of
transition probabilities when the Markov assumption is violated.
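The nonparametric core of such competing risks estimation can be sketched, for the uncensored case, as an Aalen-Johansen-style cumulative incidence computation (a simplified illustration, not the authors' modified estimator):

```python
def cumulative_incidence(times, causes, cause):
    """Nonparametric cumulative incidence for one competing risk.

    times:  event times (uncensored, one event per subject)
    causes: event type for each subject (e.g. 1 = infection,
            2 = discharge without infection)
    Accumulates F_k(t) = sum over event times s <= t of
    S(s-) * dN_k(s) / Y(s); without censoring this reduces to
    the fraction of subjects with an event of type `cause` by
    time t. Illustrative sketch only.
    """
    data = sorted(zip(times, causes))
    at_risk = len(data)
    surv, F, cif = 1.0, 0.0, []
    for t, c in data:
        hazard = 1.0 / at_risk       # one event per time point
        if c == cause:
            F += surv * hazard       # contribution to this cause's CIF
        surv *= 1.0 - hazard         # overall event-free survival
        at_risk -= 1
        cif.append((t, F))
    return cif

# Four patients: events of type 1 at times 1, 3, 4 and type 2 at time 2.
cif = cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 1], cause=1)
```

Handling censoring and the time-since-infection dependence that breaks the Markov assumption requires the modified model of the paper; this sketch only shows the competing risks bookkeeping.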
Analysis of anisotropy crossover due to oxygen in Pt/Co/MOx trilayer
Extraordinary Hall effect and X-ray spectroscopy measurements have been
performed on a series of Pt/Co/MOx trilayers (M=Al, Mg, Ta...) in order to
investigate the role of oxidation in the onset of perpendicular magnetic
anisotropy at the Co/MOx interface. It is observed that varying the oxidation
time modifies the magnetic properties of the Co layer, inducing a magnetic
anisotropy crossover from in-plane to out-of-plane. We focused on the influence
of plasma oxidation on Pt/Co/AlOx perpendicular magnetic anisotropy. The
interfacial electronic structure is analyzed via X-ray photoelectron
spectroscopy measurements. It is shown that the maximum of out-of-plane
magnetic anisotropy corresponds to the appearance of a significant density of
Co-O bonds at the Co/AlOx interface.
An Accurate, Scalable and Verifiable Protocol for Federated Differentially Private Averaging
Learning from data owned by several parties, as in federated learning, raises challenges regarding the privacy guarantees provided to participants and the correctness of the computation in the presence of malicious parties. We tackle these challenges in the context of distributed averaging, an essential building block of federated learning algorithms. Our first contribution is a scalable protocol in which participants exchange correlated Gaussian noise along the edges of a graph, complemented by independent noise added by each party. We analyze the differential privacy guarantees of our protocol and the impact of the graph topology under colluding malicious parties, showing that we can nearly match the utility of the trusted curator model even when each honest party communicates with only a logarithmic number of other parties chosen at random. This is in contrast with protocols in the local model of privacy (with lower utility) or based on secure aggregation (where all pairs of users need to exchange messages). Our second contribution enables users to prove the correctness of their computations without compromising the efficiency and privacy guarantees of the protocol. Our construction relies on standard cryptographic primitives like commitment schemes and zero-knowledge proofs.
Ranging Sensor Fusion in LISA Data Processing: Treatment of Ambiguities, Noise, and On-Board Delays in LISA Ranging Observables
Interspacecraft ranging is crucial for the suppression of laser frequency
noise via time-delay interferometry (TDI). So far, the effect of on-board
delays and ambiguities in the LISA ranging observables was neglected in LISA
modelling and data processing investigations. In reality, on-board delays cause
offsets and timestamping delays in the LISA measurements, and pseudo-random
noise (PRN) ranging is ambiguous, as it only determines the range up to an
integer multiple of the PRN code length. In this article, we identify the four
LISA ranging observables: PRN ranging, the sideband beatnotes at the
interspacecraft interferometer, TDI ranging, and ground-based observations. We
derive their observation equations in the presence of on-board delays, noise,
and ambiguities. We then propose a three-stage ranging sensor fusion to combine
these observables in order to obtain optimal ranging estimates. We propose to
calibrate the on-board delays on the ground and to compensate for the
associated offsets and timestamping delays in an initial data treatment
(stage 1). We
identify the ranging-related routines, which need to run continuously during
operation (stage 2), and implement them numerically. Essentially, this involves
the reduction of ranging noise, for which we develop a Kalman filter combining
the PRN ranging and the sideband beatnotes. We further implement cross-checks
for the PRN ranging ambiguities and offsets (stage 3). We show that both
ground-based observations and TDI ranging can be used to resolve the PRN
ranging ambiguities. Moreover, we apply TDI ranging to estimate the PRN ranging
offsets.
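Two ingredients of such a ranging sensor fusion can be sketched in simplified scalar form (illustrative names and units, not the LISA pipeline): resolving the integer PRN ambiguity against a coarse estimate, and a variance-weighted combination of two range estimates, which is the scalar analogue of a Kalman measurement update.

```python
def resolve_ambiguity(prn_range, coarse_range, code_length):
    """Resolve the integer PRN ambiguity with a coarse estimate.

    PRN ranging only determines the range modulo the code length;
    a coarse estimate (e.g. from ground-based observations or TDI
    ranging) selects the integer n minimizing
    |prn_range + n * code_length - coarse_range|.
    Illustrative sketch; units are assumed consistent.
    """
    n = round((coarse_range - prn_range) / code_length)
    return prn_range + n * code_length

def fuse(x1, var1, x2, var2):
    """Variance-weighted fusion of two independent range estimates
    (the scalar form of a Kalman measurement update): the less
    noisy estimate gets the larger weight."""
    w = var2 / (var1 + var2)
    return w * x1 + (1.0 - w) * x2

# A PRN reading of 123.0 with code length 400.0, disambiguated by
# a coarse estimate near 2510.0:
resolved = resolve_ambiguity(123.0, 2510.0, 400.0)
# Fusing two estimates of equal variance averages them:
fused = fuse(10.0, 1.0, 12.0, 1.0)
```

The full three-stage pipeline additionally tracks the time-varying ranges with a Kalman filter over the PRN and sideband measurements; this sketch only shows the static building blocks.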