6,002 research outputs found
Multi-scale, class-generic, privacy-preserving video
In recent years, high-performance video recording devices have become ubiquitous, posing an unprecedented challenge to preserving personal privacy. As a result, privacy-preserving video systems have been receiving increased attention. In this paper, we present a novel privacy-preserving video algorithm that uses semantic segmentation to identify regions of interest, which are then anonymized with an adaptive blurring algorithm. This algorithm addresses two of the most important shortcomings of existing solutions: it is multi-scale, meaning it can identify and uniformly anonymize objects of different scales in the same image, and it is class-generic, so it can be used to anonymize any class of objects of interest. We show experimentally that our algorithm achieves excellent anonymity while preserving the meaning of the processed visual data.
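A minimal sketch in Python of the segment-then-blur pipeline described above; the mask source and the scale-adaptive kernel rule are illustrative assumptions, not the authors' exact method.

    import cv2
    import numpy as np

    def anonymize(frame: np.ndarray, masks: list) -> np.ndarray:
        """Blur every masked region; the kernel grows with the region's extent,
        so large (near) and small (far) objects end up similarly unrecognizable."""
        out = frame.copy()
        for mask in masks:  # one boolean HxW mask per detected object of a target class
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                continue
            extent = int(max(xs.max() - xs.min(), ys.max() - ys.min()))
            k = max(3, (extent // 4) | 1)           # odd kernel size, scaled to object size
            blurred = cv2.GaussianBlur(out, (k, k), 0)
            out[mask] = blurred[mask]
        return out

The masks could come from any class-generic semantic segmentation or instance segmentation model restricted to the classes one wants to hide, which is what keeps the approach independent of any particular object class.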
Visual Content Privacy Protection: A Survey
Vision is the most important sense for humans and one of the main channels of
cognition. As a result, people tend to use visual content to
capture and share their life experiences, which greatly facilitates the
transfer of information. Meanwhile, it also increases the risk of privacy
violations, e.g., an image or video can reveal different kinds of
privacy-sensitive information. Researchers have been working continuously to
develop targeted privacy protection solutions, and there are several surveys to
summarize them from certain perspectives. However, these surveys are either
problem-driven, scenario-specific, or technology-specific, which makes it
difficult for them to summarize existing solutions from a macroscopic
viewpoint. In this survey, we propose a framework that encompasses the various
concerns and solutions for visual privacy and enables a comprehensive,
high-level understanding of privacy concerns. The framework builds on the
observation that every privacy concern has a corresponding adversary, and it
divides privacy protection into three categories: protection against computer
vision (CV) adversaries, against human vision (HV) adversaries, and against
combined CV & HV adversaries. For each category, we analyze the
characteristics of the main approaches to privacy protection, and then
systematically review representative solutions. Open challenges and future
directions for visual privacy protection are also discussed. Comment: 24 pages, 13 figures.
GAN-Based Differential Private Image Privacy Protection Framework for the Internet of Multimedia Things.
With the development of the Internet of Multimedia Things (IoMT), an increasing amount of image data is collected by various multimedia devices, such as smartphones, cameras, and drones. These images are used widely across every field of IoMT, which presents substantial challenges for privacy preservation. In this paper, we propose a new image privacy protection framework to protect the sensitive personal information contained in images collected by IoMT devices. We use deep neural network techniques to identify privacy-sensitive content in images, and then protect it with synthetic content generated by generative adversarial networks (GANs) under differential privacy (DP). Our experimental results show that the proposed framework can effectively protect users' privacy while maintaining image utility.
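A hedged sketch of the detect-then-replace idea in Python; the detector and generator interfaces below are hypothetical placeholders, and the differential-privacy guarantee in such a framework would come from how the GAN itself is trained (commonly via DP-SGD), which is not shown here.

    import numpy as np

    def protect(image: np.ndarray, detector, generator, rng=None) -> np.ndarray:
        """Replace each privacy-sensitive region with GAN-synthesized content."""
        rng = rng or np.random.default_rng()
        out = image.copy()
        for (x0, y0, x1, y1) in detector(image):               # boxes around sensitive content
            z = rng.standard_normal(generator.latent_dim)       # fresh latent code per region
            patch = generator.sample(z, size=(y1 - y0, x1 - x0))  # synthetic replacement patch
            out[y0:y1, x0:x1] = patch                           # original pixels are never released
        return out

Because the released pixels are synthesized rather than obfuscated versions of the originals, image utility (scene layout, semantics) can be retained while the sensitive content itself is not disclosed.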
Federated Learning for Connected and Automated Vehicles: A Survey of Existing Approaches and Challenges
Machine learning (ML) is widely used for key tasks in Connected and Automated
Vehicles (CAV), including perception, planning, and control. However, its
reliance on vehicular data for model training presents significant challenges
related to in-vehicle user privacy and communication overhead generated by
massive data volumes. Federated learning (FL) is a decentralized ML approach
that enables multiple vehicles to collaboratively train models, broadening
learning across diverse driving environments, enhancing overall performance, and
simultaneously preserving the privacy and security of local vehicle data. This survey
paper presents a review of the advancements made in the application of FL for
CAV (FL4CAV). First, centralized and decentralized frameworks of FL are
analyzed, highlighting their key characteristics and methodologies. Second,
diverse data sources, models, and data security techniques relevant to FL in
CAVs are reviewed, emphasizing their significance in ensuring privacy and
confidentiality. Third, specific and important applications of FL are explored,
providing insight into the base models and datasets employed for each
application. Finally, existing challenges for FL4CAV are listed and potential
directions for future work are discussed to further enhance the effectiveness
and efficiency of FL in the context of CAV.
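A minimal FedAvg-style sketch in Python of the collaboration pattern surveyed above: each vehicle trains on its own data and only model weights are exchanged, never raw sensor data. The client interface and the plain weighted average are illustrative assumptions, not a specific FL4CAV system.

    import numpy as np

    def federated_round(global_weights: np.ndarray, clients) -> np.ndarray:
        """One communication round: local training on each vehicle, then weighted averaging."""
        updates, sizes = [], []
        for client in clients:
            w = client.train(global_weights)   # local SGD on the vehicle's private data
            updates.append(w)
            sizes.append(client.num_samples)
        total = sum(sizes)
        # FedAvg: weight each vehicle's contribution by its share of the total data
        return sum((n / total) * w for n, w in zip(sizes, updates))

The same loop covers both the centralized setting (a server runs the aggregation) and decentralized variants (a chosen peer or gossip scheme runs it), which is the distinction the survey draws between the two FL frameworks.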
Federated Learning in Intelligent Transportation Systems: Recent Applications and Open Problems
Intelligent transportation systems (ITSs) have been fueled by the rapid
development of communication technologies, sensor technologies, and the
Internet of Things (IoT). Nonetheless, due to the dynamic characteristics of
the vehicle networks, it is rather challenging to make timely and accurate
decisions about vehicle behavior. Moreover, in the presence of mobile wireless
communications, the privacy and security of vehicle information are at constant
risk. In this context, a new paradigm is urgently needed for various
applications in dynamic vehicle environments. As a distributed machine learning
technology, federated learning (FL) has received extensive attention due to its
outstanding privacy protection properties and easy scalability. We conduct a
comprehensive survey of the latest developments in FL for ITS. Specifically, we
first examine the prevalent challenges in ITS and elucidate the
motivations for applying FL from various perspectives. Subsequently, we review
existing deployments of FL in ITS across various scenarios, and discuss
specific potential issues in object recognition, traffic management, and
service-provision scenarios. Furthermore, we analyze the
new challenges introduced by FL deployment and the inherent limitations that FL
alone cannot fully address, including uneven data distribution, limited storage
and computing power, and potential privacy and security concerns. We then
examine the existing collaborative technologies that can help mitigate these
challenges. Lastly, we discuss the open challenges that remain to be addressed
in applying FL in ITS and propose several future research directions.
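One of the collaborative hardening techniques discussed in this space is secure aggregation. Below is a toy Python sketch in which pairwise masks cancel in the sum, so the aggregator learns only the aggregate update and never any individual vehicle's update; the shared random generator stands in for the pairwise key agreement a real protocol would use.

    import numpy as np

    def masked_updates(updates, seed=0):
        """Add a pairwise mask r_ij to client i (+) and client j (-); the masks cancel in the sum."""
        rng = np.random.default_rng(seed)   # stand-in for per-pair agreed secrets
        masked = [np.asarray(u, dtype=float).copy() for u in updates]
        for i in range(len(updates)):
            for j in range(i + 1, len(updates)):
                r = rng.standard_normal(masked[i].shape)
                masked[i] += r
                masked[j] -= r
        return masked

    # Aggregator side: sum(masked_updates(updates)) equals sum(updates),
    # yet each masked update on its own looks like random noise.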
Understanding and controlling leakage in machine learning
Machine learning models are being increasingly adopted in a variety of real-world scenarios. However, the privacy and confidentiality implications introduced in these scenarios are not well understood. Towards better understanding such implications, we focus on scenarios involving interactions between numerous parties prior to, during, and after the training of relevant models. Central to these interactions is the sharing of information for a purpose, e.g., contributing data samples to a dataset or returning predictions via an API. This thesis takes a step toward understanding and controlling the leakage of private information during such interactions.
In the first part of the thesis we investigate leakage of private information in visual data, specifically photos representative of content shared on social networks. There is a long line of work tackling leakage of personally identifiable information in social photos, especially using face- and body-level visual cues. However, we argue this presents only a narrow perspective, as images reveal a wide spectrum of multimodal private information (e.g., disabilities, name-tags). Consequently, we work towards a Visual Privacy Advisor that aims to holistically identify and mitigate privacy risks when sharing social photos. In the second part, we address leakage during training of ML models. We observe that learning algorithms are increasingly used to train models on rich decentralized datasets, e.g., personal data on numerous mobile devices. In such cases, information in the form of high-dimensional model parameter updates is anonymously aggregated from participating individuals. However, we find that the updates encode enough identifiable information to be linked back to participating individuals. We additionally propose methods to mitigate this leakage while maintaining the high utility of the updates. In the third part, we discuss leakage of confidential information at inference time from black-box models. In particular, we find that models lend themselves to model functionality stealing attacks: an adversary can interact with the black-box model to create a replica 'knock-off' model that exhibits similar test-set performance. As such attacks pose a severe threat to the intellectual property of the model owner, we also work towards effective defenses. Our defense strategy, which introduces bounded and controlled perturbations into predictions, can significantly amplify the error rates of model-stealing attackers. In summary, this thesis advances the understanding of privacy leakage when information is shared in raw visual form, during the training of models, and at inference time when models are deployed as black boxes. In each case, we further propose techniques to mitigate the leakage of information and thereby enable widespread adoption in real-world scenarios.
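As a concrete illustration of "bounded and controlled perturbations to predictions", here is a simple hedged sketch in Python; the thesis' actual defense optimizes the perturbation to maximize the attacker's error, which this toy version does not attempt.

    import numpy as np

    def perturb_predictions(probs: np.ndarray, eps: float = 0.1, rng=None) -> np.ndarray:
        """Return a probability vector within roughly L1 distance eps of the original,
        keeping the top-1 label so benign users still receive a useful answer."""
        rng = rng or np.random.default_rng()
        noise = rng.standard_normal(probs.shape)
        noise -= noise.mean()                          # keep the perturbation sum-free
        noise *= eps / (np.abs(noise).sum() + 1e-12)   # bound the perturbation magnitude
        perturbed = np.clip(probs + noise, 1e-6, None)
        perturbed /= perturbed.sum()                   # project back onto the probability simplex
        return perturbed if perturbed.argmax() == probs.argmax() else probs

An adversary distilling a knock-off model from these noisy posteriors fits slightly wrong targets on every query, which degrades the stolen replica while leaving the defender's own top-1 accuracy unchanged.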