
    Visual Privacy Protection Methods: A Survey

    Recent advances in computer vision have made possible the development of intelligent monitoring systems for video surveillance and ambient-assisted living. Using this technology, such systems can automatically interpret visual data from the environment and perform tasks that would have been unthinkable only years ago. These achievements represent a radical improvement, but they also pose a new threat to individuals' privacy: such systems can collect and index a huge amount of private information about each individual. Next-generation systems must solve this issue in order to gain users' acceptance, so mechanisms and tools to protect and preserve people's privacy are needed. This paper seeks to clarify how privacy can be protected in imagery data; as its main contribution, it provides a comprehensive classification of visual privacy protection methods together with an up-to-date review of them. A survey of existing privacy-aware intelligent monitoring systems and a discussion of important aspects of visual privacy are also provided.
This work has been partially supported by the Spanish Ministry of Science and Innovation under project "Sistema de visión para la monitorización de la actividad de la vida diaria en el hogar" (TIN2010-20510-C04-02) and by the European Commission under project "caring4U - A study on people activity in private spaces: towards a multisensor network that meets privacy requirements" (PIEF-GA-2010-274649). José Ramón Padilla López and Alexandros Andre Chaaraoui acknowledge financial support from the Conselleria d'Educació, Formació i Ocupació of the Generalitat Valenciana (fellowships ACIF/2012/064 and ACIF/2011/160, respectively).

    Privacy Protection for Life-log System

    Tremendous advances in wearable computing and storage technologies enable us to record not just snapshots of an event but the whole human experience over a long period of time. Such a "life-log" system captures important events as they happen, rather than as an afterthought, and has applications in many areas such as law enforcement, personal archives, police questioning, and medicine. Much of the existing effort focuses on the pattern recognition and information retrieval aspects of such systems; the privacy issues raised by so intrusive a system, on the other hand, have not received much attention from the research community. The objectives of this research project are two-fold: first, to construct a wearable life-log video system, and second, to provide a solution for protecting the identity of the subjects in the video while keeping the video useful. In this thesis work, we designed a portable wearable life-log system that implements audio distortion and face blocking in real time to protect the privacy of the subjects being recorded in life-log video. For audio, our system automatically isolates the subject's speech and distorts it using a pitch-shifting algorithm to conceal the identity. For video, our system uses a real-time face detection, tracking, and blocking algorithm to obfuscate the faces of the subjects. Extensive experiments on interview videos demonstrate the ability of our system to protect the identity of the subject while maintaining the usability of the life-log video.
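The face-blocking step described in the abstract above can be sketched as follows. This is a minimal illustration under assumptions not stated in the source: faces are assumed to arrive as `(x, y, w, h)` bounding boxes from some detector (the detector, tracker, and pitch-shifting audio path are out of scope), and blocking simply overwrites each region with a solid value:

```python
import numpy as np

def block_faces(frame, boxes, value=0):
    """Obfuscate detected face regions by overwriting them with a solid value.

    frame: H x W x 3 uint8 array; boxes: iterable of (x, y, w, h) in pixels.
    Returns a copy so the original frame is left untouched.
    """
    out = frame.copy()
    h_img, w_img = out.shape[:2]
    for (x, y, w, h) in boxes:
        # Clamp the box to the frame so partially visible faces are handled.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        out[y0:y1, x0:x1] = value
    return out
```

In a real-time pipeline the boxes would typically come from a per-frame detector smoothed by a tracker, so that a single missed detection does not briefly expose a face.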

    Crowdsourcing Approach for Evaluation of Privacy Filters in Video Surveillance

    Extensive adoption of video surveillance, affecting many aspects of daily life, alarms the concerned public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in images and video. However, little is understood about the effectiveness of such tools, and especially about their impact on the underlying surveillance tasks. In this paper, we propose a subjective evaluation using crowdsourcing to analyze the tradeoff between the privacy preservation offered by these tools and the intelligibility of activities under video surveillance. As an example, the proposed method is used to compare several commonly employed privacy protection techniques, such as blurring, pixelization, and masking, applied to indoor surveillance video. A Facebook-based crowdsourcing application was developed specifically to gather the subjective evaluation data. Based on more than one hundred participants, the evaluation results demonstrate that the pixelization filter provides the best balance between privacy protection and intelligibility. The results obtained with the crowdsourcing application were also compared with results of previous work using more conventional subjective tests, and the two were found to be highly correlated.
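Of the filters compared above, pixelization is the easiest to make concrete. The sketch below shows its usual form (replacing each tile with its mean colour) as an assumption, not the authors' exact implementation; the effect is that identity cues are destroyed while coarse activity remains visible:

```python
import numpy as np

def pixelize(region, block=8):
    """Pixelization filter: replace each block x block tile with its mean colour.

    region: H x W x C array covering the area to protect (e.g. a face crop).
    Larger `block` values give stronger privacy but lower intelligibility.
    """
    out = region.astype(np.float32).copy()
    h, w = out.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.mean(axis=(0, 1))  # per-channel mean over the tile
    return out.astype(region.dtype)
```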

    Privacy-Protecting Techniques for Behavioral Data: A Survey

    Our behavior (the way we talk, walk, or think) is unique and can be used as a biometric trait. It also correlates with sensitive attributes such as emotions. Hence, techniques to protect individuals' privacy against unwanted inferences are required. To consolidate knowledge in this area, we systematically reviewed applicable anonymization techniques. We taxonomize and compare existing solutions with regard to privacy goals, conceptual operation, advantages, and limitations. Our analysis shows that some behavioral traits (e.g., voice) have received much attention, while others (e.g., eye gaze, brainwaves) are mostly neglected. We also find that the evaluation methodology of behavioral anonymization techniques can be further improved.

    MediaEval 2015 Drone Protect Task: Privacy Protection in Surveillance Systems Using False Coloring

    In this paper, we share our results for privacy protection using false coloring in surveillance systems for the Drone Protect Task. The aim is to obscure privacy-sensitive regions without sacrificing intelligibility and pleasantness. The idea behind false coloring is to transform the colors of an image, through a color palette, into a different set of colors in which private information is harder to recognize. The method can be applied globally to the whole frame or to a given region of interest (ROI). The privacy-protected output has a pleasant look and, if desired, can be reversed to obtain a close approximation of the original. Benchmarking evaluations on the Mini-drone dataset show promising results, especially for the intelligibility and pleasantness criteria.
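The palette-based transformation described above can be illustrated with a simple lookup table. The palette generation here (a keyed pseudo-random table, with the PRNG seed acting as the secret needed to undo the mapping) is a hypothetical choice for the sketch, not the palette used in the paper:

```python
import numpy as np

def false_color(gray, palette):
    """Map intensities through a colour palette (a 256-entry LUT).

    gray: H x W uint8 intensity image; palette: 256 x 3 uint8 lookup table.
    Reversing the mapping requires knowing the palette, so a secret palette
    acts as the key that makes the protection (approximately) invertible.
    """
    return palette[gray]  # NumPy fancy indexing applies the LUT per pixel

# Hypothetical palette: a keyed pseudo-random table; the seed is the "key".
rng = np.random.default_rng(seed=42)
palette = rng.integers(0, 256, size=(256, 3), dtype=np.uint8)
```

Applying a perceptually smooth palette instead of a random one is what gives the "pleasant look" reported in the paper; a random table maximizes obscurity at the cost of aesthetics.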

    A framework for objective evaluation of privacy filters in video surveillance

    Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in images and video. However, little is understood about the effectiveness of such tools, and especially about their impact on the underlying surveillance tasks, which involves a tradeoff between the privacy preservation offered by these tools and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case in which people's privacy is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them with varying strengths to people's faces in several public datasets of video surveillance footage. The accuracy of a face detection algorithm was used as a measure of intelligibility (a face should be detected to perform a surveillance task), and the accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after the application of an ideal privacy protection tool, an obfuscated face would still be detected as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, increasing the strength of the privacy filters under consideration leads to an increase in privacy (i.e., a reduction in recognition accuracy) and a decrease in intelligibility (i.e., a reduction in detection accuracy). Masking also proved to be the most favorable filter across all tested datasets.
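The two objective measures used by the framework can be written down directly. The function below is a schematic reading of the evaluation protocol (the name and input format are ours, not the paper's): per protected face, intelligibility is the fraction still detected, and privacy is the fraction no longer correctly recognized:

```python
def privacy_intelligibility(detected, recognized):
    """Objective scores in the spirit of the framework above.

    detected:   list of booleans, one per obfuscated face -- was it still
                found by the face detector? (intelligibility proxy)
    recognized: list of booleans, one per obfuscated face -- was it still
                correctly identified by the recognizer? (privacy leak proxy)
    Returns (intelligibility, privacy), both in [0, 1]; an ideal filter
    scores close to 1.0 on both.
    """
    n = len(detected)
    intelligibility = sum(detected) / n
    privacy = 1.0 - sum(recognized) / n
    return intelligibility, privacy
```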

    De-identification for privacy protection in multimedia content: A survey

    This document is the Accepted Manuscript version of the following article: Slobodan Ribaric, Aladdin Ariyaeeinia, and Nikola Pavesic, 'De-identification for privacy protection in multimedia content: A survey', Signal Processing: Image Communication, Vol. 47, pp. 131-151, September 2016, doi: https://doi.org/10.1016/j.image.2016.05.020. This manuscript version is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited and is not altered, transformed, or built upon in any way.
Privacy is one of the most important social and political issues in our information society, characterized by a growing range of enabling and supporting technologies and services: communications, multimedia, biometrics, big data, cloud computing, data mining, the internet, social networks, and audio-video surveillance. Each of these can potentially provide the means for privacy intrusion. De-identification is one of the main approaches to privacy protection in multimedia content (text, still images, audio, video sequences, and their combinations). It is a process for concealing or removing personal identifiers, or replacing them with surrogate personal identifiers, in order to prevent the disclosure and use of data for purposes unrelated to the purpose for which the information was originally obtained. Based on the proposed taxonomy, inspired by the Safe Harbour approach, personal identifiers (i.e., personally identifiable information) are classified as non-biometric, physiological and behavioural biometric, and soft-biometric identifiers. To protect the privacy of an individual, all of these identifiers have to be de-identified in multimedia content.
This paper presents a review of the concepts of privacy and of the linkage among privacy, privacy protection, and the methods and technologies designed specifically for privacy protection in multimedia content. The study provides an overview of de-identification approaches for non-biometric identifiers (text, hairstyle, dressing style, license plates), as well as for physiological (face, fingerprint, iris, ear), behavioural (voice, gait, gesture), and soft-biometric (body silhouette, gender, age, race, tattoo) identifiers in multimedia documents. Peer reviewed.

    MediaEval 2012 Visual Privacy Task: Applying Transform-domain Scrambling to Automatically Detected Faces

    In this paper, we describe our approach and discuss evaluation results for the MediaEval 2012 Visual Privacy task. The goal of the task is to obscure the faces of people visible in the provided surveillance clips in order to preserve their personal privacy. We additionally assume, although it is not explicitly stated in the task description, that the privacy protection should be performed automatically and that the applied privacy tool should be reversible and resistant to attacks. We use a combination of a face detection algorithm and a transform-domain scrambling technique, which pseudo-randomly scrambles bits during encoding, applied to the detected face regions. Evaluations of the resulting automated privacy protection tool showed that inaccuracies of the face detection algorithm affected both the objective and the subjective results. An interesting finding, however, is that the scrambling, while not distracting to the evaluating subjects, was rated as significantly irritating, with a score of 0.8, but only for the 'evening' subset of the dataset.
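The scrambling idea can be illustrated on a block of transform coefficients. The paper scrambles bits during encoding inside the codec; the sketch below uses a simpler, commonly cited variant (keyed pseudo-random sign flipping of coefficients) as an assumption, purely to show why such scrambling is reversible for anyone holding the key:

```python
import numpy as np

def scramble(coeffs, key):
    """Pseudo-randomly flip the signs of transform coefficients of a face region.

    coeffs: array of transform-domain coefficients (e.g. from a DCT block).
    key:    integer seed; the same key regenerates the same sign pattern,
            so applying scramble() twice restores the original coefficients --
            the operation is its own inverse for the key holder.
    """
    rng = np.random.default_rng(key)
    signs = rng.choice([-1, 1], size=np.shape(coeffs))
    return coeffs * signs
```

Because only signs change, the coefficient magnitudes (and hence the bitrate) are roughly preserved, while the reconstructed face region becomes unrecognizable noise to anyone without the key.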

    Interdisciplinary perspectives on privacy awareness in lifelogging technology development

    Population aging resulting from demographic change requires challenging decisions and necessary steps by different stakeholders to manage the current and future demand for assistance and support. The consequences of population aging can be mitigated to some extent by assistive technologies that support the autonomous living of older individuals and persons in need of care in their private environments for as long as possible. A variety of technical solutions are already available on the market, but privacy protection is a serious, often neglected, issue when using such (assistive) technology; privacy therefore needs to be taken thoroughly into consideration in this context. In the three-year project PAAL ('Privacy-Aware and Acceptable Lifelogging Services for Older and Frail People'), researchers from different disciplines, such as law, rehabilitation, human-computer interaction, and computer science, investigated the phenomenon of privacy when using assistive lifelogging technologies. In concrete terms, the concept of Privacy by Design was realized in two exemplary lifelogging applications in private and professional environments. A user-centered empirical approach was applied to the lifelogging technologies, investigating the perceptions and attitudes of (older) users with different health-related and biographical profiles. The knowledge gained through this interdisciplinary collaboration can improve the implementation and optimization of assistive applications. In this paper, partners of the PAAL project present insights gained from their cross-national, interdisciplinary work on privacy-aware and acceptable lifelogging technologies.
Open Access funding enabled and organized by Projekt DEAL. This work is part of the PAAL project ('Privacy-Aware and Acceptable Lifelogging services for older and frail people'). The support of the Joint Programme Initiative 'More Years, Better Lives' (award number: PAAL_JTC2017), the German Federal Ministry of Education and Research (grant no. 16SV7955), the Swedish Research Council for Health, Working Life, and Welfare (grant no. 2017-02302), the Spanish Agencia Estatal de Investigación (PCIN-2017-114), the Italian Ministero dell'Istruzione, dell'Università e della Ricerca (CUP: I36G17000380001), and the Canadian Institutes of Health Research is gratefully acknowledged.