7 research outputs found

    Does Image Anonymization Impact Computer Vision Training?

    Full text link
    Image anonymization is widely adopted in practice to comply with privacy regulations in many regions. However, anonymization often degrades data quality, reducing its utility for computer vision development. In this paper, we investigate the impact of image anonymization on training computer vision models for key tasks (detection, instance segmentation, and pose estimation). Specifically, we benchmark the recognition drop on common detection datasets, evaluating both traditional and realistic anonymization of faces and full bodies. Our comprehensive experiments show that traditional image anonymization substantially degrades final model performance, particularly when anonymizing the full body. Furthermore, we find that realistic anonymization can mitigate this performance loss, with our experiments showing a minimal drop for face anonymization. Our study demonstrates that realistic anonymization can enable privacy-preserving computer vision development with minimal performance degradation across a range of important computer vision benchmarks.
    Comment: Accepted at CVPR Workshop on Autonomous Driving 202
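The recognition drop benchmarked above can be summarized as a relative performance difference between a model trained on original data and one trained on anonymized data. A minimal sketch, where the helper name and the AP values are illustrative rather than taken from the paper:

```python
# Hypothetical sketch of how a "recognition drop" could be quantified;
# the AP numbers below are made up for illustration.

def relative_drop(ap_baseline: float, ap_anonymized: float) -> float:
    """Relative performance drop (%) of a model trained on anonymized data."""
    return 100.0 * (ap_baseline - ap_anonymized) / ap_baseline

# e.g. a detector trained on originals vs. face-anonymized images
print(f"{relative_drop(40.0, 39.2):.1f}% drop")  # → 2.0% drop
```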

    Deep Active Learning for Autonomous Perception

    Get PDF
    Traditional supervised learning requires significant amounts of labeled training data to achieve satisfactory results. As autonomous perception systems collect data continuously, the labeling process becomes expensive and time-consuming. Active learning is a specialized semi-supervised learning strategy that allows a machine learning model to achieve high performance using less training data, thereby minimizing the cost of manual annotation. We explore active learning for autonomous vehicles and propose a novel deep active learning framework for object detection and instance segmentation. We review prominent active learning approaches, study their performance on the aforementioned computer vision tasks, and perform several experiments using state-of-the-art R-CNN-based models on datasets in the self-driving domain. Our empirical experiments on a number of datasets show that active learning reduces the amount of training data required. We observe that early exploration with instance-rich training sets leads to good performance, and that false positives can have a negative impact if not dealt with appropriately. Furthermore, we perform a qualitative evaluation on autonomous driving data collected in Trondheim, illustrating that active learning can help select more informative images to annotate.
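The core of such a framework is the acquisition step: ranking unlabeled pool images by informativeness and sending the top candidates for annotation. A minimal sketch of pool-based uncertainty sampling, where the per-image confidence scores stand in for a detector's outputs (names and values are illustrative, not the paper's implementation):

```python
import numpy as np

def select_batch(confidences: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k least-confident pool images to annotate next."""
    uncertainty = 1.0 - confidences       # least-confident-first acquisition
    return np.argsort(-uncertainty)[:k]   # top-k most uncertain images

pool_conf = np.array([0.95, 0.40, 0.80, 0.15, 0.60])
print(select_batch(pool_conf, 2))  # the two most uncertain images
```

In practice the selected images are labeled, added to the training set, the model is retrained, and the loop repeats until the annotation budget is exhausted.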

    Image Inpainting with Learnable Feature Imputation

    Full text link
    A regular convolution layer applies its filter identically over known and unknown areas, causing visual artifacts in the inpainted image. Several studies address this issue with feature re-normalization on the output of the convolution. However, these models use a significant number of learnable parameters for feature re-normalization, or assume a binary representation of the certainty of an output. We propose (layer-wise) feature imputation of the missing input values to a convolution. In contrast to learned feature re-normalization, our method is efficient and introduces a minimal number of parameters. Furthermore, we propose a revised gradient penalty for image inpainting, and a novel GAN architecture trained exclusively on adversarial loss. Our quantitative evaluation on the FDF dataset shows that our revised gradient penalty and alternative convolution significantly improve generated image quality. We present comparisons on CelebA-HQ and Places2 against the current state of the art to validate our model.
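The idea of imputing missing inputs before the convolution, rather than re-normalizing its output, can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation: in the paper the imputation is learned layer-wise, whereas here a single scalar fill value plays that role.

```python
import numpy as np

def impute(x: np.ndarray, mask: np.ndarray, fill: float) -> np.ndarray:
    """Replace unknown entries (mask == 0) with a fill value.

    `fill` stands in for the learned imputation parameter; a regular
    convolution can then be applied to the completed feature map."""
    return mask * x + (1.0 - mask) * fill

x = np.array([[1.0, 2.0], [3.0, 4.0]])
m = np.array([[1.0, 0.0], [1.0, 1.0]])   # 0 marks a hole to inpaint
print(impute(x, m, fill=0.5))
```

Because the convolution then sees a fully defined input, no output re-normalization (and no per-layer certainty bookkeeping) is needed.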

    Autonomous Vehicle Control: End-to-end Learning in Simulated Environments

    Get PDF
    This paper examines end-to-end learning for autonomous vehicles in diverse, simulated environments containing other vehicles, traffic lights, and traffic signs, in weather conditions ranging from sunny to heavy rain. The paper proposes an architecture combining a traditional Convolutional Neural Network with a recurrent layer to facilitate the learning of both spatial and temporal relationships. Furthermore, the paper suggests a model that supports navigational input from the user, enabling the use of a global route planner for a more comprehensive system. The paper also explores some of the uncertainties regarding the implementation of end-to-end systems: specifically, how a system’s overall performance is affected by the size of the training dataset, the allowed prediction frequency, and the number of hidden states in the system’s recurrent module. The proposed system is trained on expert driving data captured in various simulated settings and evaluated by its real-time driving performance in unseen simulated environments. The results indicate that end-to-end systems can operate autonomously in simulated environments, in a range of different weather conditions. Additionally, it was found that using ten hidden states for the system’s recurrent module was optimal. The results further show that the system was sensitive to small reductions in dataset size and that a prediction frequency of 15 Hz was required for the system to perform at its full potential.
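The hyperparameters the paper reports as decisive can be collected in a small configuration sketch. The class and field names are illustrative assumptions; only the values (ten hidden states, 15 Hz) come from the abstract above.

```python
from dataclasses import dataclass

@dataclass
class EndToEndConfig:
    recurrent_hidden_states: int = 10   # reported as optimal in the paper
    prediction_hz: int = 15             # frequency needed for full performance
    navigational_input: bool = True     # high-level command from a route planner

cfg = EndToEndConfig()
print(cfg.recurrent_hidden_states, cfg.prediction_hz)  # → 10 15
```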

    DeepPrivacy: A GAN-based framework for image anonymization

    No full text
    Collecting data from self-driving cars without anonymizing personal information has been illegal since the introduction of the General Data Protection Regulation (GDPR) in 2018. To collect data for training and validating machine learning models, we must anonymize the data without significantly altering the original image. Even with the great progress in deep learning, no existing solution can automatically anonymize faces without destroying the image. We present DeepPrivacy, a two-stage model that automatically detects and anonymizes faces in images. We present a new generative model that anonymizes faces while preserving the original data distribution; that is, our generative model generates faces that fit the given situation. DeepPrivacy is based on a conditional generative adversarial network that generates images based on the position and background of the original face. Furthermore, we introduce a diverse dataset of human faces that includes unusual face rotations, occluded faces, and a large variation in backgrounds. Finally, we present experimental results reflecting DeepPrivacy's ability to anonymize faces while preserving the original data distribution. Since the anonymized images retain the original data distribution, they can be used to further train and validate machine learning models. To our knowledge, no previously presented solution guarantees anonymization of images without destroying the original data distribution.
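The two-stage structure (detect, then replace each face with generated content) can be sketched as follows. Both stages below are stand-in stubs, not DeepPrivacy's models: the real detector localizes faces, and the real generator is a conditional GAN that synthesizes a face matching the surrounding context.

```python
import numpy as np

def detect_faces(image: np.ndarray) -> list:
    """Stub detector: returns (y0, y1, x0, x1) boxes; a fixed box for the demo."""
    return [(2, 6, 2, 6)]

def generate_face(crop_shape) -> np.ndarray:
    """Stub generator: stands in for a conditional GAN conditioned on
    face position and background."""
    rng = np.random.default_rng(0)
    return rng.random(crop_shape)

def anonymize(image: np.ndarray) -> np.ndarray:
    out = image.copy()
    for y0, y1, x0, x1 in detect_faces(image):
        out[y0:y1, x0:x1] = generate_face(out[y0:y1, x0:x1].shape)
    return out

img = np.zeros((8, 8))
anon = anonymize(img)
print(np.allclose(anon[:2], img[:2]))  # → True (background untouched)
```

Only the detected region changes; everything outside the boxes is passed through, which is what preserves the rest of the image for downstream training.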

    Realistic Full-Body Anonymization with Surface-Guided GANs

    Full text link
    Recent work on image anonymization has shown that generative adversarial networks (GANs) can generate near-photorealistic faces to anonymize individuals. However, scaling these networks to the entire human body has remained a challenging, as-yet-unsolved task. We propose a new anonymization method that generates close-to-photorealistic humans for in-the-wild images. A key part of our design is to guide adversarial nets by dense pixel-to-surface correspondences between an image and a canonical 3D surface. We introduce Variational Surface-Adaptive Modulation (V-SAM), which embeds surface information throughout the generator. Combining this with our novel discriminator surface supervision loss, the generator can synthesize high-quality humans with diverse appearance in complex and varying scenes. We show that surface guidance significantly improves image quality and diversity of samples, yielding a highly practical generator. Finally, we demonstrate that surface-guided anonymization preserves the usability of data for future computer vision development.
    Comment: 8 pages, 7 figures, 6 tables. Source code and appendix available at: https://www.github.com/hukkelas/full_body_anonymizatio
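The modulation idea — conditioning generator features on a per-pixel surface-correspondence embedding — can be sketched as a learned per-pixel scale and shift. Shapes, names, and the linear projection below are assumptions for illustration, not the paper's V-SAM implementation.

```python
import numpy as np

def surface_modulate(features, surface_embedding, w_scale, w_shift):
    """Modulate generator features with surface information.

    features: (C, H, W); surface_embedding: (E, H, W); weights: (C, E).
    Each pixel gets a scale and shift derived from its surface embedding."""
    scale = np.einsum('ce,ehw->chw', w_scale, surface_embedding)
    shift = np.einsum('ce,ehw->chw', w_shift, surface_embedding)
    return features * (1.0 + scale) + shift

C, E, H, W = 4, 3, 2, 2
f = np.ones((C, H, W))
s = np.zeros((E, H, W))          # a zero embedding leaves features unchanged
out = surface_modulate(f, s, np.ones((C, E)), np.ones((C, E)))
print(np.allclose(out, f))  # → True
```

Because the modulation varies per pixel with the surface correspondence, the generator receives body-surface structure at every layer rather than only at the input.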