Measuring Membership Privacy on Aggregate Location Time-Series
While location data is extremely valuable for various applications,
disclosing it poses serious threats to individuals' privacy. To mitigate such
concerns, organizations often provide analysts with aggregate time-series that
indicate, e.g., how many people are in a location at a time interval, rather
than raw individual traces. In this paper, we perform a measurement study to
understand Membership Inference Attacks (MIAs) on aggregate location
time-series, where an adversary tries to infer whether a specific user
contributed to the aggregates.
We find that the volume of contributed data, as well as the regularity and
particularity of users' mobility patterns, play a crucial role in the attack's
success. We experiment with a wide range of defenses based on generalization,
hiding, and perturbation, and evaluate their ability to thwart the attack
vis-a-vis the utility loss they introduce for various mobility analytics tasks.
Our results show that some defenses fail across the board, while others work
for specific tasks on aggregate location time-series. For instance, suppressing
small counts can be used for ranking hotspots, data generalization for
forecasting traffic, hotspot discovery, and map inference, while sampling is
effective for location labeling and anomaly detection when the dataset is
sparse. Differentially private techniques provide reasonable accuracy only in
very specific settings, e.g., discovering hotspots and forecasting their
traffic, and more so when using weaker privacy notions like crowd-blending
privacy. Overall, our measurements show that there does not exist a unique
generic defense that can preserve the utility of the analytics for arbitrary
applications, and provide useful insights regarding the disclosure of sanitized
aggregate location time-series.
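To make the setting concrete, the sketch below is an illustrative toy example only (synthetic data; names such as n_users and the scoring rule are assumptions, not the paper's setup). It aggregates per-user traces into location time-series, computes a simple membership inference score with and without a target user, and applies a Laplace-perturbation defense in the spirit of differential privacy.

```python
# Toy membership inference on aggregate location time-series (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_locations, n_intervals = 200, 10, 50

# Synthetic per-user visits: traces[u, l, t] = 1 if user u is at location l
# during time interval t.
traces = (rng.random((n_users, n_locations, n_intervals)) < 0.05).astype(int)

def aggregate(user_ids):
    """Aggregate time-series: how many of the given users are at each
    location during each time interval."""
    return traces[user_ids].sum(axis=0)

def laplace_defense(agg, epsilon=1.0, sensitivity=1.0):
    """Perturbation defense: add Laplace noise calibrated to epsilon."""
    return agg + rng.laplace(scale=sensitivity / epsilon, size=agg.shape)

def mia_score(agg, target_user):
    """Toy attack: correlate the released aggregate with the target's trace.
    A higher score leads the adversary to guess the target contributed."""
    return float((agg * traces[target_user]).sum())

# Distinguishing game: aggregates computed with vs. without the target user.
target, others = 0, list(range(1, 101))
agg_in = aggregate([target] + others)
agg_out = aggregate(others)

print("score if target included :", mia_score(agg_in, target))
print("score if target excluded :", mia_score(agg_out, target))
print("score under Laplace noise:", mia_score(laplace_defense(agg_in), target))
```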
Security Evaluation of Support Vector Machines in Adversarial Environments
Support Vector Machines (SVMs) are among the most popular classification
techniques adopted in security applications like malware detection, intrusion
detection, and spam filtering. However, if SVMs are to be incorporated in
real-world security systems, they must be able to cope with attack patterns
that can either mislead the learning algorithm (poisoning), evade detection
(evasion), or gain information about their internal parameters (privacy
breaches). The main contributions of this chapter are twofold. First, we
introduce a formal general framework for the empirical evaluation of the
security of machine-learning systems. Second, according to our framework, we
demonstrate the feasibility of evasion, poisoning and privacy attacks against
SVMs in real-world security problems. For each attack technique, we evaluate
its impact and discuss whether (and how) it can be countered through an
adversary-aware design of SVMs. Our experiments are easily reproducible thanks
to open-source code that we have made available, together with all the employed
datasets, on a public repository.
Comment: 47 pages, 9 figures; chapter accepted into book 'Support Vector
Machine Applications'.
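As an illustration of the evasion setting only (a minimal sketch on synthetic data, not the chapter's experimental setup), the following trains a linear SVM and pushes a detected "malicious" point along the direction that most quickly decreases the decision function, i.e. the negative weight vector, until it crosses the boundary.

```python
# Toy gradient-based evasion attack against a linear SVM (illustrative only).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(+1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)  # label 1 = "malicious"

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_[0]  # decision function f(x) = w.x + b

x = np.array([2.0, 2.0])  # malicious point, initially detected (f(x) > 0)
step = 0.1
for _ in range(100):
    if clf.decision_function([x])[0] < 0:  # crossed into the benign side
        break
    x = x - step * w / np.linalg.norm(w)   # move against the gradient of f

print("evading sample:", x, "score:", clf.decision_function([x])[0])
```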
AI alignment and generalization in deep learning
This thesis covers a number of works in deep learning aimed at understanding and improving the generalization abilities of deep neural networks (DNNs).
DNNs achieve unrivaled performance in a growing range of tasks and domains, yet their behavior during learning and deployment remains poorly understood.
They can also be surprisingly brittle: in-distribution generalization can be a poor predictor of behavior or performance under distributional shifts, which typically cannot be avoided in practice.
While these limitations are not unique to DNNs -- and indeed are likely to be challenges facing any AI systems of sufficient complexity -- the prevalence and power of DNNs makes them particularly worthy of study.
I frame these challenges within the broader context of "AI Alignment": a nascent field focused on ensuring that AI systems behave in accordance with their users' intentions.
While making AI systems more intelligent or capable can help make them more aligned, it is neither necessary nor sufficient for alignment. However, being able to align state-of-the-art AI systems (e.g. DNNs) is of great social importance in order to avoid undesirable and unsafe behavior from advanced AI systems.
Without progress in AI Alignment, advanced AI systems might pursue objectives at odds with human survival, posing an existential risk (``x-risk'') to humanity.
A core tenet of this thesis is that achieving high performance on machine learning benchmarks is often a good indicator of AI systems' capabilities, but not of their alignment. This is because AI systems often achieve high performance in unexpected ways that reveal the limitations of our performance metrics and, more generally, of our techniques for specifying our intentions. Learning about human intentions using DNNs shows some promise, but DNNs are still prone to solving tasks using concepts or "features" very different from those that are salient to humans. Indeed, this is a major source of their poor generalization on out-of-distribution data.
By better understanding the successes and failures of DNN generalization and current methods of specifying our intentions, we aim to make progress towards deep-learning-based AI systems that are able to understand users' intentions and act accordingly.
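As a toy illustration of the shortcut-learning failure described above (synthetic data and a linear classifier standing in for a DNN; all names and numbers are illustrative assumptions), the sketch below trains on data where a spurious feature tracks the label, then evaluates under a shift that reverses the shortcut: in-distribution accuracy stays high while out-of-distribution accuracy collapses.

```python
# Toy demonstration that in-distribution accuracy can badly overestimate
# behavior under distribution shift (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    y = rng.integers(0, 2, n)
    core = y + rng.normal(0, 1.0, n)                 # weakly informative "real" feature
    agree = rng.random(n) < spurious_corr
    spurious = np.where(agree, y, 1 - y) + rng.normal(0, 0.1, n)  # easy shortcut
    return np.column_stack([core, spurious]), y

X_train, y_train = make_data(5000, spurious_corr=0.95)  # shortcut holds in training
X_iid, y_iid = make_data(5000, spurious_corr=0.95)      # same distribution
X_ood, y_ood = make_data(5000, spurious_corr=0.05)      # shortcut reversed at test time

clf = LogisticRegression().fit(X_train, y_train)
print("in-distribution accuracy    :", clf.score(X_iid, y_iid))
print("out-of-distribution accuracy:", clf.score(X_ood, y_ood))
```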
Inverting Adversarially Robust Networks for Image Synthesis
Recent research in adversarially robust classifiers suggests their
representations tend to be aligned with human perception, which makes them
attractive for image synthesis and restoration applications. Despite favorable
empirical results on a few downstream tasks, their advantages are limited to
slow and sensitive optimization-based techniques. Moreover, their use on
generative models remains unexplored. This work proposes the use of robust
representations as a perceptual primitive for feature inversion models, and
shows their benefits with respect to standard, non-robust image features. We
empirically show that adopting robust representations as an image prior
significantly improves the reconstruction accuracy of CNN-based feature
inversion models. Furthermore, it allows reconstructing images at multiple
scales out-of-the-box. Following these findings, we propose an
encoding-decoding network based on robust representations and show its
advantages for applications such as anomaly detection, style transfer and image
denoising.
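As a rough sketch of a feature-inversion setup (PyTorch; an untrained torchvision resnet18 stands in for the adversarially robust encoder, and the decoder architecture and pixel-space loss are assumptions, not the paper's method), the following freezes the encoder and trains a small decoder to reconstruct images from its features.

```python
# Minimal encoder-freezing / decoder-training loop for feature inversion
# (illustrative only; a robust, pretrained encoder would be used in practice).
import torch
import torch.nn as nn
from torchvision.models import resnet18

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen encoder: resnet18 without avgpool/fc, outputs 512x7x7 on 224x224 inputs.
encoder = nn.Sequential(*list(resnet18(weights=None).children())[:-2]).to(device).eval()
for p in encoder.parameters():
    p.requires_grad_(False)

# Decoder: upsample 512x7x7 features back to a 3x224x224 image.
decoder = nn.Sequential(
    nn.ConvTranspose2d(512, 256, 4, stride=2, padding=1), nn.ReLU(),  # 14x14
    nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 28x28
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 56x56
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 112x112
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 224x224
).to(device)

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

# One training step on a dummy batch; real training would iterate over a dataset.
images = torch.rand(8, 3, 224, 224, device=device)
features = encoder(images)
recon = decoder(features)
loss = nn.functional.mse_loss(recon, images)  # pixel loss; a feature-space term could be added
opt.zero_grad()
loss.backward()
opt.step()
print("reconstruction loss:", loss.item())
```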