78 research outputs found
CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks
The unprecedented increase in the usage of computer vision technology in
society goes hand in hand with an increased concern in data privacy. In many
real-world scenarios like people tracking or action recognition, it is
important to be able to process the data while taking care to protect
people's identities. We propose and develop CIAGAN, a model for image
and video anonymization based on conditional generative adversarial networks.
Our model is able to remove the identifying characteristics of faces and bodies
while producing high-quality images and videos that can be used for any
computer vision task, such as detection or tracking. Unlike previous methods,
we have full control over the de-identification (anonymization) procedure,
ensuring both anonymization and diversity. We compare our method to
several baselines and achieve state-of-the-art results.
Comment: CVPR 2020
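The conditioning idea behind such a model can be sketched in a few lines. This is a hypothetical illustration of mine, not the CIAGAN architecture: a generator input pairs a face representation (e.g. landmarks) with a one-hot identity code, so the same masked region can be re-rendered as any chosen surrogate identity.

```python
import numpy as np

def condition_input(face_features, identity_id, num_identities):
    """Append a one-hot identity code to the generator input so the
    masked face region can be re-rendered as a chosen surrogate identity."""
    one_hot = np.zeros(num_identities)
    one_hot[identity_id] = 1.0
    return np.concatenate([np.ravel(face_features), one_hot])

# The same face representation with two different identity codes would
# steer a conditional generator toward two different anonymized faces.
landmarks = np.zeros((2, 3))          # placeholder face representation
vec_a = condition_input(landmarks, 0, num_identities=4)
vec_b = condition_input(landmarks, 2, num_identities=4)
```

Varying the identity code per person, while keeping it fixed across frames, is what gives control over both anonymization and diversity.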
DIPPAS: A Deep Image Prior PRNU Anonymization Scheme
Source device identification is an important topic in image forensics since
it allows the origin of an image to be traced. Its forensic counterpart is
source device anonymization, that is, masking any trace on the image that can
be useful for identifying the source device. A typical trace exploited for
source device identification is the Photo Response Non-Uniformity (PRNU), a
noise pattern left by the device on the acquired images. In this paper, we
devise a methodology for suppressing such a trace from natural images without
significant impact on image quality. Specifically, we turn PRNU anonymization
into an optimization problem in a Deep Image Prior (DIP) framework. In a
nutshell, a Convolutional Neural Network (CNN) acts as a generator and returns an
image that is anonymized with respect to the source PRNU, still maintaining
high visual quality. In contrast to widely adopted deep learning paradigms,
our proposed CNN is not trained on a set of input-target pairs of images.
Instead, it is optimized to reconstruct the PRNU-free image from the original
image under analysis itself. This makes the approach particularly suitable in
scenarios where large heterogeneous databases are analyzed and prevents any
problem due to lack of generalization. Through numerical examples on publicly
available datasets, we show our methodology to be effective compared with
state-of-the-art techniques.
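The underlying trade-off can be illustrated with a toy example. In this simplification of mine (not the paper's setup), direct pixel optimization replaces the CNN generator and a random unit-norm pattern stands in for the PRNU: gradient descent balances fidelity to the input image against decorrelation from the fingerprint.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 16
prnu = rng.standard_normal((H, W))
prnu /= np.linalg.norm(prnu)             # hypothetical device fingerprint
clean = rng.random((H, W))
y = clean + 0.05 * prnu                  # observed image carrying a PRNU trace

# Minimize ||x - y||^2 + lam * <x, prnu>^2 by gradient descent:
# stay close to the image while suppressing correlation with the fingerprint.
x = y.copy()
lam, lr = 5.0, 0.1
for _ in range(500):
    fidelity_grad = 2.0 * (x - y)
    corr = float(np.sum(x * prnu))       # correlation with the fingerprint
    x -= lr * (fidelity_grad + lam * 2.0 * corr * prnu)
```

After optimization the image's correlation with the fingerprint drops sharply while the image itself barely changes; the paper achieves the same effect with a CNN prior rather than raw pixels.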
Smooth and Consistent Video Anonymization Using Triangular Inpainting and Optical Flow
Surveillance cameras have been deployed extensively in big cities such as London and Shanghai. To protect people's privacy and avoid full exposure, it is necessary to remove sensitive
facial information from the surveillance footage. In this thesis, we study the anonymization of
CCTV footage with face inpainting. In more detail, we employ deep neural networks to generate faces and replace the original faces in the video. In particular, a masking method called
triangular inpainting is employed to produce videos in which the original faces are removed.
Furthermore, we adopt optical flow, a motion-estimation method, to ensure the smooth
movement and transition of the computer-generated face when it is masked onto the original face.
The thesis also tries to keep the age and gender of the generated faces as close as possible
to those of the original subjects. To ensure that each person visible in a video is masked with a unique
face throughout the whole video, we index the original face and the inpainted face in a
one-to-one mapping. The designed system has been tested via extensive experiments. The
results show that the human subjects are anonymized effectively. The inpainted faces also
maintain uniqueness and smoothness in the video, with age and gender preserved.
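The one-to-one indexing described above can be sketched in a few lines (names and data are hypothetical; the thesis's tracker and face generator are not shown): each tracked identity is assigned one replacement face the first time it appears and keeps it for the rest of the video.

```python
def build_face_mapping(track_ids, generated_faces):
    """One-to-one map from tracked identities to replacement faces:
    a person keeps the same generated face for the whole video."""
    if len(set(track_ids)) > len(generated_faces):
        raise ValueError("not enough unique replacement faces")
    pool = iter(generated_faces)
    mapping = {}
    for tid in track_ids:
        if tid not in mapping:        # first appearance of this person
            mapping[tid] = next(pool)
    return mapping

# Track IDs observed frame by frame; repeated IDs reuse their face.
mapping = build_face_mapping([3, 7, 3, 9, 7], ["faceA", "faceB", "faceC"])
```

Because the mapping is injective, no two people share a replacement face, which preserves the uniqueness property the thesis tests for.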
Deep Image Prior Amplitude SAR Image Anonymization
This paper presents an extensive evaluation of the Deep Image Prior (DIP) technique for image inpainting on Synthetic Aperture Radar (SAR) images. SAR images are gaining popularity in various applications, but there may be a need to conceal certain regions of them. Image inpainting provides a solution for this. However, not all inpainting techniques are designed to work on SAR images: some are intended for use on photographs, while others have to be specifically trained on a huge set of images. In this work, we evaluate the performance of the DIP technique, which is capable of addressing these challenges: it can adapt to the image under analysis, including SAR imagery, and it does not require any training. Our results demonstrate that the DIP method achieves strong performance in terms of objective and semantic metrics. This indicates that the DIP method is a promising approach for inpainting SAR images and can provide high-quality results that meet the requirements of various applications.
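DIP fills the concealed region using only the structure of the single image under analysis, with no training set. As a loose stand-in for the CNN prior (a simplification of mine, not the paper's method), a toy diffusion-based fill shows the same untrained, single-image idea: masked pixels are repeatedly replaced by the average of their neighbours until they blend with the surroundings.

```python
import numpy as np

rng = np.random.default_rng(1)
img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))  # smooth toy "amplitude" image
mask = np.zeros(img.shape, dtype=bool)
mask[6:10, 6:10] = True                            # region to conceal

x = img.copy()
x[mask] = rng.random(mask.sum())                   # destroy the sensitive region
for _ in range(500):                               # diffuse surroundings inward
    avg = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
           np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0
    x[mask] = avg[mask]                            # only masked pixels change
```

On this globally smooth toy image the fill converges to a plausible continuation of the surroundings; the CNN prior in DIP plays the analogous role for real SAR texture.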
Who clicks there!: Anonymizing the photographer in a camera saturated society
In recent years, social media has played an increasingly important role in
reporting world events. The publication of crowd-sourced photographs and videos
in near real-time is one of the reasons behind the high impact. However, the
use of a camera can draw the photographer into a situation of conflict.
Examples include the use of cameras by regulators collecting evidence of Mafia
operations; citizens collecting evidence of corruption at a public service
outlet; and political dissidents protesting at public rallies. In all these
cases, the published images contain fairly unambiguous clues about the location
of the photographer (scene viewpoint information). In the presence of
adversary-operated cameras, it can be easy to identify the photographer by combining
leaked information from the photographs themselves. We call this the camera
location detection attack. We propose and review defense techniques against
such attacks. Defenses such as image obfuscation techniques do not protect
camera-location information; current anonymous publication technologies do not
help either. However, the use of view synthesis algorithms could be a promising
step in the direction of providing probabilistic privacy guarantees.
A review on visual privacy preservation techniques for active and assisted living
This paper reviews the state of the art in visual privacy protection techniques, with particular attention paid to techniques applicable to the field of Active and Assisted Living (AAL). A novel taxonomy with which state-of-the-art visual privacy protection methods can be classified is introduced. Perceptual obfuscation methods, a category in this taxonomy, are highlighted. These are visual privacy preservation techniques that are particularly relevant to scenarios involving video-based AAL monitoring. Obfuscation against machine learning models is also explored. A high-level classification scheme of privacy by design, as defined by experts in privacy and data protection law, is connected to the proposed taxonomy of visual privacy preservation techniques. Finally, we note open questions that exist in the field and introduce the reader to some exciting avenues for future research in the area of visual privacy.
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature. This work is part of the visuAAL project on Privacy-Aware and Acceptable Video-Based Technologies and Services for Active and Assisted Living (https://www.visuaal-itn.eu/). This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 861091. The authors would also like to acknowledge the contribution of COST Action CA19121 - GoodBrother, Network on Privacy-Aware Audio- and Video-Based Applications for Active and Assisted Living (https://goodbrother.eu/), supported by COST (European Cooperation in Science and Technology) (https://www.cost.eu/).
Does Image Anonymization Impact Computer Vision Training?
Image anonymization is widely adopted in practice to comply with privacy
regulations in many regions. However, anonymization often degrades the quality
of the data, reducing its utility for computer vision development. In this
paper, we investigate the impact of image anonymization for training computer
vision models on key computer vision tasks (detection, instance segmentation,
and pose estimation). Specifically, we benchmark the recognition drop on common
detection datasets, where we evaluate both traditional and realistic
anonymization for faces and full bodies. Our comprehensive experiments show
that traditional image anonymization substantially impacts final model
performance, particularly when anonymizing the full body. Furthermore, we find
that realistic anonymization can mitigate this decrease in performance, with
only a minimal performance drop for face anonymization. Our
study demonstrates that realistic anonymization can enable privacy-preserving
computer vision development with minimal performance degradation across a range
of important computer vision benchmarks.
Comment: Accepted at CVPR Workshop on Autonomous Driving 2023
Conditional Adversarial Camera Model Anonymization
The model of camera that was used to capture a particular photographic image
(model attribution) is typically inferred from high-frequency model-specific
artifacts present within the image. Model anonymization is the process of
transforming these artifacts such that the apparent capture model is changed.
We propose a conditional adversarial approach for learning such
transformations. In contrast to previous works, we cast model anonymization as
the process of transforming both high and low spatial frequency information. We
augment the objective with the loss from a pre-trained dual-stream model
attribution classifier, which constrains the generative network to transform
the full range of artifacts. Quantitative comparisons demonstrate the efficacy
of our framework in a restrictive non-interactive black-box setting.
Comment: ECCV 2020 - Advances in Image Manipulation workshop (AIM 2020)
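The augmented objective can be sketched abstractly. Names here are hypothetical, and the actual work uses a trained dual-stream CNN rather than raw logits: the generator's adversarial loss is summed with the pre-trained attribution classifier's cross-entropy toward the target camera model, penalizing the generator until the full artifact range reads as the new model.

```python
import numpy as np

def anonymization_loss(adv_loss, attribution_logits, target_model, lam=1.0):
    """Generator objective: adversarial term plus the attribution
    classifier's cross-entropy toward the target (fake) camera model.
    The second term pushes the generator to transform all artifacts
    the classifier relies on, not only high-frequency ones."""
    logits = np.asarray(attribution_logits, dtype=float)
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    cross_entropy = -np.log(probs[target_model])
    return adv_loss + lam * cross_entropy

# If the classifier already assigns high probability to the target
# model, the extra penalty is small; otherwise it dominates.
loss = anonymization_loss(1.0, [2.0, 0.0, 0.0], target_model=0)
```

Weighting `lam` trades off visual fidelity of the generated image against how strongly the apparent capture model is changed.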
- …