
    Crowdsourcing Approach for Evaluation of Privacy Filters in Video Surveillance

    Extensive adoption of video surveillance, affecting many aspects of daily life, alarms the concerned public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in images and video. However, little is understood regarding the effectiveness of such tools, and especially their impact on the underlying surveillance tasks. In this paper, we propose conducting a subjective evaluation using crowdsourcing to analyze the tradeoff between the privacy preservation offered by these tools and the intelligibility of activities under video surveillance. As an example, the proposed method is used to compare several commonly employed privacy protection techniques, such as blurring, pixelization, and masking, applied to indoor surveillance video. A Facebook-based crowdsourcing application was developed specifically to gather the subjective evaluation data. Based on responses from more than one hundred participants, the evaluation results demonstrate that the pixelization filter provides the best balance between privacy protection and intelligibility. The results obtained with the crowdsourcing application were compared with results of previous work that used more conventional subjective tests, showing that the two are highly correlated.
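
    As a hedged illustration of the three filter families the paper compares, the sketch below applies blurring, pixelization, and masking to a face region using OpenCV in Python. The kernel size, block count, and region of interest are assumptions for demonstration, not the parameters used in the study.

```python
# Illustrative sketch of the three privacy filters compared in the paper
# (blurring, pixelization, masking), applied to a rectangular face region.
# Parameter values and the ROI are assumptions for demonstration only.
import cv2

def blur_region(frame, x, y, w, h, ksize=15):
    """Gaussian blur over the region of interest (ksize must be odd)."""
    roi = frame[y:y+h, x:x+w]
    frame[y:y+h, x:x+w] = cv2.GaussianBlur(roi, (ksize, ksize), 0)
    return frame

def pixelize_region(frame, x, y, w, h, blocks=8):
    """Downscale, then upscale with nearest-neighbour to create coarse blocks."""
    roi = frame[y:y+h, x:x+w]
    small = cv2.resize(roi, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    frame[y:y+h, x:x+w] = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    return frame

def mask_region(frame, x, y, w, h):
    """Opaque black mask over the region of interest."""
    frame[y:y+h, x:x+w] = 0
    return frame
```

    Filter "strength" in such studies typically maps to the blur kernel size or the pixelization block count, with masking acting as the strongest (all-or-nothing) case.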

    PEViD: privacy evaluation video dataset

    Visual privacy protection, i.e., the obfuscation of personal visual information in video surveillance, is an important and increasingly popular research topic. However, while many datasets are available for testing the performance of various video analytics, little to nothing exists for the evaluation of visual privacy tools. Since surveillance and privacy protection have contradictory objectives, the design principles of the corresponding evaluation datasets should differ too. In this paper, we outline the principles that need to be considered when building a dataset for privacy evaluation. Following these principles, we present the first, to our knowledge, Privacy Evaluation Video Dataset (PEViD). With the dataset, we provide XML-based annotations of various privacy regions, including face, accessories, skin regions, hair, body silhouette, and other personal information, together with their descriptions. Via preliminary subjective tests, we demonstrate the flexibility and suitability of the dataset for privacy evaluations. The evaluation results also show the importance of secondary privacy regions, which contain non-facial personal information, for the privacy-intelligibility tradeoff. We believe the PEViD dataset is equally suitable for evaluations of privacy protection tools using objective metrics and subjective assessments.
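
    The abstract specifies XML-based annotations of privacy regions but not their exact schema. A hypothetical parsing sketch under an assumed layout (region elements carrying a type attribute and bounding-box coordinates) might look like this; the actual PEViD schema should be checked against the released dataset.

```python
# Hypothetical sketch of loading PEViD-style privacy-region annotations.
# The XML element and attribute names below are assumptions for
# illustration; they are not taken from the published dataset.
import xml.etree.ElementTree as ET

def load_privacy_regions(xml_path):
    """Return a list of (frame, region_type, bounding_box) tuples."""
    regions = []
    root = ET.parse(xml_path).getroot()
    for region in root.iter("region"):          # assumed element name
        frame = int(region.get("frame", 0))
        rtype = region.get("type", "unknown")   # e.g. face, skin, silhouette
        box = tuple(int(region.get(k, 0)) for k in ("x", "y", "w", "h"))
        regions.append((frame, rtype, box))
    return regions
```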

    CrowdOut: un service de crowdsourcing pour la sécurité routière dans les villes numériques

    Nowadays, cities invest more in their services, particularly digital ones, to improve their residents' quality of life and attract more people. As a result, new crowdsourcing services are appearing, based on contributions from mobile users equipped with smartphones. For example, respect of the traffic code is essential to ensure citizens' security and welfare in their city. With this in mind, we propose CrowdOut, a new mobile crowdsourcing service for road safety in cities. CrowdOut allows users to report traffic offences they witness in real time and to map them on a city plan. The CrowdOut service has been implemented, and preliminary experiments and demonstrations have been performed in the urban environment of Grand Nancy. The service allows users to appropriate their urban environment through active participation in the community. It also represents a decision-support tool for administrators, helping them improve their urbanization plans or check the impact of their policy decisions on the city environment.
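
    The abstract describes the service's core loop (a mobile user reports an offence, which is geolocated and mapped on a city plan). A minimal sketch of what such a report record might look like follows; every field name is an assumption for illustration, not CrowdOut's actual data model.

```python
# Minimal sketch of a CrowdOut-style report record; all field names here
# are assumptions for illustration, not the service's actual data model.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class OffenceReport:
    reporter_id: str     # anonymised identifier of the reporting user
    offence_type: str    # e.g. "illegal_parking", "red_light"
    latitude: float      # GPS position where the offence was observed
    longitude: float
    timestamp: datetime  # real-time reporting implies a capture time

# Example report, with coordinates near central Nancy for illustration.
report = OffenceReport("user-42", "illegal_parking",
                       48.6921, 6.1844,
                       datetime.now(timezone.utc))
```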

    Impact of Tone-mapping Algorithms on Subjective and Objective Face Recognition in HDR Images

    Crowdsourcing is a popular tool for conducting subjective evaluations in uncontrolled environments and at low cost. In this paper, a crowdsourcing study is conducted to investigate the impact of High Dynamic Range (HDR) imaging on subjective face recognition accuracy. For that purpose, a dataset of HDR images of people depicted under high-contrast lighting conditions was created, and their faces were manually cropped to construct a probe set. Crowdsourcing-based face recognition was conducted for five differently tone-mapped versions of the HDR faces and compared to face recognition on a typical Low Dynamic Range alternative. A similar experiment was also conducted using three automatic face recognition algorithms. The comparative analysis of face recognition by human subjects through crowdsourcing and by machine vision shows that HDR imaging affects the recognition results of human and computer vision approaches differently.
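
    The abstract does not name the five tone-mapping operators used. As a hedged illustration, OpenCV ships several classic operators that could produce differently tone-mapped LDR versions of an HDR image; the operator choice and parameters below are assumptions.

```python
# Illustrative sketch: producing differently tone-mapped LDR versions of
# an HDR image with OpenCV's built-in operators. The operators and
# parameters are assumptions; the paper's five algorithms are not named.
import cv2
import numpy as np

hdr = cv2.imread("face.hdr", cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)

operators = {
    "drago":    cv2.createTonemapDrago(gamma=2.2),
    "reinhard": cv2.createTonemapReinhard(gamma=2.2),
    "mantiuk":  cv2.createTonemapMantiuk(gamma=2.2),
}
for name, op in operators.items():
    ldr = op.process(hdr.astype(np.float32))   # output roughly in [0, 1]
    cv2.imwrite(f"face_{name}.png",
                np.clip(ldr * 255, 0, 255).astype(np.uint8))
```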

    The impact of privacy protection filters on gender recognition

    Deep learning-based algorithms have become increasingly efficient in recognition and detection tasks, especially when they are trained on large-scale datasets. This recent success has led to speculation that deep learning methods are comparable to, or even outperform, the human visual system in its ability to detect and recognize objects and their features. In this paper, we focus on the specific task of gender recognition in images that have been processed by privacy protection filters (e.g., blurring, masking, and pixelization) applied at different strengths. Assuming a privacy protection scenario, we compare the performance of state-of-the-art deep learning algorithms with a subjective evaluation obtained via crowdsourcing to understand how privacy protection filters affect both machine and human vision.
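
    A sketch of the machine-vision side of the protocol the abstract implies: sweep each privacy filter over increasing strengths and record a gender classifier's accuracy. The classifier and dataset interfaces here are hypothetical placeholders, not the paper's actual models.

```python
# Sketch of the comparison protocol implied by the abstract: sweep a
# privacy filter over increasing strengths and record gender-recognition
# accuracy. `classifier` and `dataset` are hypothetical stand-ins.
import cv2

def accuracy_under_filter(classifier, dataset, apply_filter, strengths):
    """Map filter strength -> fraction of correctly predicted genders."""
    results = {}
    for s in strengths:
        correct = 0
        for image, true_gender in dataset:
            filtered = apply_filter(image.copy(), s)
            correct += (classifier.predict(filtered) == true_gender)
        results[s] = correct / len(dataset)
    return results

# Example: blurring, with (odd) kernel size as the strength parameter.
blur = lambda img, k: cv2.GaussianBlur(img, (k, k), 0)
# curve = accuracy_under_filter(model, labelled_faces, blur, [3, 9, 15, 25])
```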

    Crowdsourcing-based Evaluation of Privacy in HDR Images

    The ability of High Dynamic Range imaging (HDRi) to capture details in high-contrast environments, making both dark and bright regions clearly visible, has strong implications for privacy. However, the extent to which HDRi affects privacy when it is used instead of typical Standard Dynamic Range imaging (SDRi) is not yet clear. In this paper, we investigate the effect of HDRi on privacy via a crowdsourcing evaluation using the Microworkers platform. Due to the lack of a standard HDR privacy evaluation dataset, we created one containing people of varying gender, race, and age, shot indoors and outdoors under a large range of lighting conditions. We evaluate tone-mapped versions of these images, obtained with several representative tone-mapping algorithms, using a subjective privacy evaluation methodology. The evaluation was performed with a crowdsourcing-based framework, a popular and effective alternative to traditional lab-based assessment. The results of the experiments demonstrate a significant loss of privacy when even tone-mapped versions of HDR images are used compared to typical SDR images shot with standard exposure.
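
    The subjective methodology is not detailed in the abstract. A common way to summarize crowdsourced ratings of this kind is a mean opinion score (MOS) with a confidence interval; the sketch below assumes a 1-5 rating scale, which is an assumption rather than the paper's stated protocol.

```python
# Sketch of aggregating crowdsourced privacy ratings into a mean opinion
# score (MOS) with an approximate 95% confidence interval. The 1-5 scale
# is an assumption; the paper's exact methodology is not stated.
import statistics

def mos(ratings):
    """Return (mean opinion score, half-width of ~95% CI) for one image."""
    mean = statistics.mean(ratings)
    if len(ratings) < 2:
        return mean, 0.0
    sem = statistics.stdev(ratings) / len(ratings) ** 0.5
    return mean, 1.96 * sem

score, ci = mos([4, 5, 3, 4, 4, 2, 5])   # ratings from different workers
print(f"MOS = {score:.2f} +/- {ci:.2f}")
```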

    A framework for objective evaluation of privacy filters in video surveillance

    Extensive adoption of video surveillance, affecting many aspects of our daily lives, alarms the public about the increasing invasion of personal privacy. To address these concerns, many tools have been proposed for the protection of personal privacy in images and video. However, little is understood regarding the effectiveness of such tools, and especially their impact on the underlying surveillance tasks, which leads to a tradeoff between the privacy preservation these tools offer and the intelligibility of activities under video surveillance. In this paper, we investigate this privacy-intelligibility tradeoff by proposing an objective framework for the evaluation of privacy filters. We apply the proposed framework to a use case where people's privacy is protected by obscuring faces, assuming an automated video surveillance system. We used several popular privacy protection filters, such as blurring, pixelization, and masking, and applied them at varying strengths to people's faces in several public video surveillance datasets. The accuracy of a face detection algorithm was used as a measure of intelligibility (a face must be detected to perform a surveillance task), and the accuracy of a face recognition algorithm as a measure of privacy (a specific person should not be identified). Under these conditions, after the application of an ideal privacy protection tool, an obfuscated face would still be detected as a face but would not be correctly identified by the recognition algorithm. The experiments demonstrate that, in general, increasing the strength of the privacy filters under consideration increases privacy (i.e., reduces recognition accuracy) and decreases intelligibility (i.e., reduces detection accuracy). Masking also proves to be the most favorable filter across all tested datasets.
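
    A sketch of the framework's core measurement as the abstract describes it: detection accuracy stands for intelligibility and recognition accuracy for privacy loss, both swept over filter strength. The detector, recognizer, and dataset interfaces are hypothetical placeholders.

```python
# Sketch of the objective privacy-intelligibility measurement described
# in the abstract: per filter strength, detection accuracy approximates
# intelligibility and recognition accuracy approximates privacy loss.
# `detector`, `recognizer`, and the dataset interface are placeholders.

def tradeoff_curve(detector, recognizer, faces, apply_filter, strengths):
    """Return {strength: (intelligibility, privacy)} over the face set."""
    curve = {}
    for s in strengths:
        detected = identified = 0
        for image, identity in faces:
            filtered = apply_filter(image.copy(), s)
            if detector.detect(filtered):                  # still a face?
                detected += 1
            if recognizer.identify(filtered) == identity:  # still this person?
                identified += 1
        intelligibility = detected / len(faces)            # higher is better
        privacy = 1.0 - identified / len(faces)            # higher is better
        curve[s] = (intelligibility, privacy)
    return curve
```

    An ideal filter in this framing would keep intelligibility near 1.0 (faces still detected) while also pushing privacy toward 1.0 (identities no longer recognized).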

    Averting Robot Eyes

    Home robots will cause privacy harms. At the same time, they can provide beneficial services, as long as consumers trust them. This Essay evaluates potential technological solutions that could help home robots keep their promises, avert their eyes, and otherwise mitigate privacy harms. Our goals are to inform regulators of robot-related privacy harms and the available technological tools for mitigating them, and to spur technologists to employ existing tools and develop new ones by articulating principles for avoiding privacy harms. We posit that home robots will raise privacy problems of three basic types: (1) data privacy problems; (2) boundary management problems; and (3) social/relational problems. Technological design can ward off, if not fully prevent, a number of these harms. We propose five principles for home robots and privacy design: data minimization, purpose specifications, use limitations, honest anthropomorphism, and dynamic feedback and participation. We review current research into privacy-sensitive robotics, evaluating which technological solutions are feasible and where the harder problems lie. We close by contemplating legal frameworks that might encourage the implementation of such design, while also recognizing the potential costs of regulation at these early stages of the technology.