User Label Leakage from Gradients in Federated Learning
Federated learning enables multiple users to build a joint model by sharing
their model updates (gradients), while their raw data remains local on their
devices. In contrast to the common belief that this provides privacy benefits,
we here add to the very recent results on privacy risks when sharing gradients.
Specifically, we propose Label Leakage from Gradients (LLG), a novel attack to
extract the labels of the users' training data from their shared gradients. The
attack exploits the direction and magnitude of gradients to determine the
presence or absence of any label. LLG is simple yet effective, capable of
leaking potentially sensitive information represented by labels, and scales well
to arbitrary batch sizes and multiple classes. We empirically and
mathematically demonstrate the validity of our attack under different settings.
Moreover, empirical results show that LLG successfully extracts labels with
high accuracy at the early stages of model training. We also discuss different
defense mechanisms against such leakage. Our findings suggest that gradient
compression is a practical technique to prevent our attack.
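
The core signal behind such attacks can be made concrete. The sketch below (ours, not the authors' code) assumes a softmax classifier trained with cross-entropy, where the gradient of the loss with respect to the last-layer bias is the batch sum of (prediction minus one-hot label): entries pulled negative indicate labels present in the victim's batch (direction), and their size hints at label counts (magnitude). All model and variable names are illustrative.

```python
# Minimal sketch of the LLG intuition: for softmax cross-entropy, the
# gradient w.r.t. the last-layer bias is sum_i (p_i - y_i), so entries
# pulled negative reveal which labels occur in the victim's batch.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

num_classes, batch_size = 10, 8
model = torch.nn.Linear(32, num_classes)          # stand-in victim model
x = torch.randn(batch_size, 32)                   # victim's private inputs
y = torch.randint(0, num_classes, (batch_size,))  # private labels to leak

loss = F.cross_entropy(model(x), y)
bias_grad = torch.autograd.grad(loss, model.bias)[0]

# Direction: negative bias-gradient entries flag labels present in the batch.
leaked = (bias_grad < 0).nonzero(as_tuple=True)[0]
print("true labels:  ", sorted(set(y.tolist())))
print("leaked labels:", sorted(leaked.tolist()))
# Magnitude: more negative entries correspond to more frequent labels,
# letting the attacker estimate per-label counts for larger batches.
```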
Predictive Whittle Networks for Time Series
Dataset for paper "Predictive Whittle Networks for Time Series"
Use with code at:
https://github.com/ml-research/PW
DAFNe: A One-Stage Anchor-Free Approach for Oriented Object Detection
We present DAFNe, a Dense one-stage Anchor-Free deep Network for oriented
object detection. As a one-stage model, it performs bounding box predictions on
a dense grid over the input image, being architecturally simpler in design, as
well as easier to optimize than its two-stage counterparts. Furthermore, as an
anchor-free model, it reduces the prediction complexity by refraining from
employing bounding box anchors. With DAFNe we introduce an orientation-aware
generalization of the center-ness function for arbitrarily oriented bounding
boxes to down-weight low-quality predictions and a center-to-corner bounding
box prediction strategy that improves object localization performance. Our
experiments show that DAFNe outperforms all previous one-stage anchor-free
models on DOTA 1.0, DOTA 1.5, and UCAS-AOD and is on par with the best models
on HRSC2016.
Comment: Main paper: 8 pages, References: 2 pages, Appendix: 7 pages; Main paper: 6 figures, Appendix: 6 figures
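
As an illustration of the orientation-aware center-ness idea, here is a hedged sketch that evaluates the FCOS-style center-ness in the box's rotated coordinate frame. This is one plausible reading of the abstract, not necessarily DAFNe's exact formulation, and all names are hypothetical.

```python
# Illustrative orientation-aware center-ness: rotate the query point into
# the box-aligned frame, then apply the FCOS center-ness on the resulting
# edge distances. Scores are 1 at the box center and decay toward edges.
import numpy as np

def oriented_centerness(point, box_center, box_size, angle_rad):
    """Down-weight locations far from the center of an oriented box."""
    # Rotate the query point into the box-aligned coordinate frame.
    c, s = np.cos(-angle_rad), np.sin(-angle_rad)
    dx, dy = np.subtract(point, box_center)
    u, v = c * dx - s * dy, s * dx + c * dy

    # Distances to the left/right and top/bottom edges of the aligned box.
    w, h = box_size
    left, right = w / 2 + u, w / 2 - u
    top, bottom = h / 2 + v, h / 2 - v
    if min(left, right, top, bottom) <= 0:
        return 0.0  # point lies outside the oriented box

    return np.sqrt(
        (min(left, right) / max(left, right))
        * (min(top, bottom) / max(top, bottom))
    )

# A point at the box center scores 1.0; near an edge it approaches 0.
print(oriented_centerness((5.0, 5.0), (5.0, 5.0), (4.0, 2.0), np.pi / 6))
```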
End-to-end learning of deep spatio-temporal representations for satellite image time series classification
In this paper we describe our first-place solution to the discovery challenge on time series land cover classification (TiSeLaC), organized in conjunction with ECML/PKDD 2017. The challenge consists of predicting the land cover class of a set of pixels given their image time series data acquired by satellites. We propose an end-to-end learning approach employing both temporal and spatial information and requiring very little data preprocessing and feature engineering. In this report we detail the architecture that ranked first out of 21 teams, comprising modules that use dense multi-layer perceptrons and one-dimensional convolutional neural networks. We discuss the properties of this architecture in detail, as well as several possible enhancements.
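
To make the described design concrete, below is a minimal PyTorch sketch of a per-pixel classifier combining a dense multi-layer perceptron branch with a one-dimensional convolutional branch over the time axis. The input shape (23 time steps, 10 bands) and 9 classes match the TiSeLaC setting as we understand it, but the layer sizes are illustrative, not the winning architecture.

```python
# Hedged sketch: MLP branch over the flattened series plus a 1D-CNN branch
# convolving over time (bands as channels), fused before the classifier.
import torch
import torch.nn as nn

class PixelTimeSeriesNet(nn.Module):
    def __init__(self, n_steps=23, n_bands=10, n_classes=9):
        super().__init__()
        # Dense branch over the flattened time series.
        self.mlp = nn.Sequential(
            nn.Linear(n_steps * n_bands, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        # Temporal convolution branch, spectral bands as input channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_bands, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(128 + 64, n_classes)

    def forward(self, x):               # x: (batch, n_steps, n_bands)
        flat = self.mlp(x.flatten(1))
        conv = self.cnn(x.transpose(1, 2)).squeeze(-1)
        return self.head(torch.cat([flat, conv], dim=1))

logits = PixelTimeSeriesNet()(torch.randn(4, 23, 10))
print(logits.shape)  # torch.Size([4, 9])
```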
User-Level Label Leakage from Gradients in Federated Learning
Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potentially sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.
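
Since both abstracts point to gradient compression as a mitigation, here is a hedged sketch of one widely used instance, top-k sparsification, which zeroes all but the largest-magnitude gradient entries before sharing. The function name and the compression ratio are illustrative assumptions; the paper's exact scheme may differ.

```python
# Hedged sketch of gradient compression via top-k sparsification: keeping
# only a small fraction of gradient entries removes much of the per-class
# signal that label-leakage attacks such as LLG exploit.
import torch

def compress_topk(grad: torch.Tensor, ratio: float = 0.1) -> torch.Tensor:
    """Keep only the largest-magnitude fraction `ratio` of gradient entries."""
    flat = grad.flatten()
    k = max(1, int(ratio * flat.numel()))
    _, idx = flat.abs().topk(k)
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(grad)

g = torch.randn(4, 10)                  # stand-in shared gradient
g_shared = compress_topk(g, ratio=0.2)  # client uploads the sparse version
print((g_shared != 0).float().mean())   # ~0.2 of entries survive
```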