
    Voice activity detection in eco-acoustic data enables privacy protection and is a proxy for human disturbance

    1. Eco-acoustic monitoring is increasingly being used to map biodiversity across large scales, yet little thought is given to the privacy concerns and potential scientific value of inadvertently recorded human speech. Automated speech detection is possible using voice activity detection (VAD) models, but it is not clear how well these perform in diverse natural soundscapes. In this study we present the first evaluation of VAD models for anonymization of eco-acoustic data and demonstrate how speech detection frequency can be used as one potential measure of human disturbance.
    2. We first generated multiple synthetic datasets using different data preprocessing techniques to train and validate deep neural network models. We evaluated the performance of our custom models against existing state-of-the-art VAD models using playback experiments with speech samples from a man, woman and child. Finally, we collected long-term data from a Norwegian forest heavily used for hiking to evaluate the ability of the models to detect human speech and quantify a proxy for human disturbance in a real monitoring scenario.
    3. In playback experiments, all models could detect human speech with high accuracy at distances where the speech was intelligible (up to 10 m). We showed that training models using location-specific soundscapes in the data preprocessing step resulted in a slight improvement in model performance. Additionally, we found that the number of speech detections correlated with peak traffic hours (using bus timings), demonstrating how VAD can be used to derive a proxy for human disturbance with fine temporal resolution.
    4. Anonymizing audio data effectively using VAD models will allow eco-acoustic monitoring to continue to deliver invaluable ecological insight at scale, while minimizing the risk of data misuse. Furthermore, using speech detections as a proxy for human disturbance opens new opportunities for eco-acoustic monitoring to shed light on nuanced human-wildlife interactions.
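    As a rough illustration of the anonymization step this abstract describes, the sketch below mutes frames flagged as speech using the off-the-shelf WebRTC VAD; it assumes 16-bit mono WAV input at a supported sample rate and is not the custom deep-network models evaluated in the paper.

```python
# Minimal sketch: silence 30 ms frames flagged as speech by the off-the-shelf
# WebRTC VAD before a recording is archived. Assumes 16-bit mono PCM at
# 8/16/32/48 kHz; this is an illustration, not the paper's custom models.
import wave

import webrtcvad


def redact_speech(in_path: str, out_path: str, aggressiveness: int = 3) -> None:
    vad = webrtcvad.Vad(aggressiveness)
    with wave.open(in_path, "rb") as wf:
        assert wf.getnchannels() == 1 and wf.getsampwidth() == 2
        sr = wf.getframerate()
        pcm = wf.readframes(wf.getnframes())

    frame_len = int(sr * 0.03) * 2           # 30 ms of 16-bit samples, in bytes
    out = bytearray()
    for start in range(0, len(pcm) - frame_len + 1, frame_len):
        frame = pcm[start:start + frame_len]
        if vad.is_speech(frame, sr):
            out.extend(b"\x00" * frame_len)   # mute frames containing speech
        else:
            out.extend(frame)
    out.extend(pcm[len(out):])                # keep the short unchecked tail

    with wave.open(out_path, "wb") as wf:
        wf.setnchannels(1)
        wf.setsampwidth(2)
        wf.setframerate(sr)
        wf.writeframes(bytes(out))
```

    Counting the muted frames per hour would yield the kind of speech-detection frequency the study uses as a disturbance proxy.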

    Privacy-Protecting Techniques for Behavioral Data: A Survey

    Our behavior (the way we talk, walk, or think) is unique and can be used as a biometric trait. It also correlates with sensitive attributes like emotions. Hence, techniques to protect individuals' privacy against unwanted inferences are required. To consolidate knowledge in this area, we systematically reviewed applicable anonymization techniques. We taxonomize and compare existing solutions regarding privacy goals, conceptual operation, advantages, and limitations. Our analysis shows that some behavioral traits (e.g., voice) have received much attention, while others (e.g., eye-gaze, brainwaves) are mostly neglected. We also find that the evaluation methodology of behavioral anonymization techniques can be further improved.

    Introducing the VoicePrivacy initiative

    The VoicePrivacy initiative aims to promote the development of privacy preservation tools for speech technology by gathering a new community to define the tasks of interest and the evaluation methodology, and benchmarking solutions through a series of challenges. In this paper, we formulate the voice anonymization task selected for the VoicePrivacy 2020 Challenge and describe the datasets used for system development and evaluation. We also present the attack models and the associated objective and subjective evaluation metrics. We introduce two anonymization baselines and report objective evaluation results.
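    One objective privacy metric commonly reported in this setting is the equal error rate (EER) of a speaker-verification attacker: the higher the EER on anonymized speech, the better the protection. The sketch below computes EER from attacker similarity scores and same-speaker labels; the score values are synthetic placeholders, and this is not the challenge's official scoring code.

```python
# Hedged sketch: equal error rate (EER) of a hypothetical speaker-verification
# attacker, computed from similarity scores and binary same-speaker labels.
import numpy as np
from sklearn.metrics import roc_curve


def equal_error_rate(scores: np.ndarray, labels: np.ndarray) -> float:
    """labels: 1 = same speaker (target trial), 0 = different speaker."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))     # threshold where FPR ~ FNR
    return float((fpr[idx] + fnr[idx]) / 2.0)


# Illustrative scores only (not results from the paper):
rng = np.random.default_rng(0)
labels = np.concatenate([np.ones(500), np.zeros(500)])
clear = np.concatenate([rng.normal(2.0, 1, 500), rng.normal(0, 1, 500)])
anon = np.concatenate([rng.normal(0.3, 1, 500), rng.normal(0, 1, 500)])
print("EER on clear speech:", equal_error_rate(clear, labels))
print("EER after anonymization:", equal_error_rate(anon, labels))
```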

    A False Sense of Privacy: Towards a Reliable Evaluation Methodology for the Anonymization of Biometric Data

    Biometric data captures distinctive human traits such as facial features or gait patterns. These traits identify individuals so precisely that the data is used effectively in identification and authentication systems, which is exactly why privacy protection is indispensable. Protection is commonly provided through anonymization: techniques that obfuscate or remove the information that would allow records to be linked back to the individuals who generated them. However, our ability to develop effective anonymization depends just as much on the methods used to evaluate anonymization performance. In this paper, we assess the state-of-the-art methods used to evaluate anonymization techniques for facial images and for gait patterns, and we show that these evaluation methods have serious and frequent shortcomings. In particular, their underlying assumptions are largely unwarranted: they generally assume a difficult recognition scenario, and hence a weak adversary, which causes evaluations to grossly overestimate anonymization performance. We therefore propose a strong adversary that is aware of the anonymization in place, providing a more appropriate measure of anonymization performance. We also improve the selection of the evaluation dataset, reducing the number of identities it contains while ensuring that those identities remain easily distinguishable from one another. Our novel evaluation methodology surpasses the state of the art because it measures worst-case performance and so delivers a highly reliable evaluation of biometric anonymization techniques.
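    The weak-versus-strong adversary contrast the abstract describes can be illustrated with a generic classifier over feature vectors, as in the sketch below; the anonymize() placeholder, the synthetic features, and the logistic-regression attacker are assumptions for illustration, not the paper's actual pipeline.

```python
# Hedged sketch: anonymization-unaware (weak) vs. anonymization-aware (strong)
# adversary. The strong adversary trains on anonymized data, giving a
# worst-case estimate of how many subjects can still be re-identified.
import numpy as np
from sklearn.linear_model import LogisticRegression


def anonymize(x: np.ndarray) -> np.ndarray:
    # Placeholder anonymization: additive noise stands in for a real
    # face or gait anonymization technique.
    return x + np.random.default_rng(42).normal(0.0, 1.0, size=x.shape)


# Synthetic "biometric" feature vectors for a few identities (illustrative only).
rng = np.random.default_rng(0)
n_ids, per_id, dim = 10, 40, 32
centers = rng.normal(0, 3, size=(n_ids, dim))
X = np.vstack([c + rng.normal(0, 1, size=(per_id, dim)) for c in centers])
y = np.repeat(np.arange(n_ids), per_id)

# Split each identity into enrollment and probe samples; anonymize the probes.
enroll = np.tile(np.arange(per_id) < per_id // 2, n_ids)
X_enroll, y_enroll = X[enroll], y[enroll]
X_probe_anon, y_probe = anonymize(X[~enroll]), y[~enroll]

weak = LogisticRegression(max_iter=1000).fit(X_enroll, y_enroll)
strong = LogisticRegression(max_iter=1000).fit(anonymize(X_enroll), y_enroll)
print("weak adversary re-identification rate:", weak.score(X_probe_anon, y_probe))
print("strong adversary re-identification rate:", strong.score(X_probe_anon, y_probe))
```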

    Ethics in Action: Anonymization as a Participant's Concern and a Participant's Practice

    Ethical issues are often discussed in a normative, prescriptive, generic way, within methodological recommendations and ethical guidelines. Within social sciences dealing with social interaction, these ethical issues concern the approach of participants during fieldwork, the recording of audio-video data, their transcription, and their analysis. This paper offers a respecification (in an ethnomethodological sense) of these issues by addressing them from a double perspective: as a topic for research, and not just as a methodological resource; and as a members' concern, not (only) a researchers' problem. To do so, the paper focuses on a particular ethical problem that has not yet been submitted to analytical scrutiny: the anonymization of the participants. It studies the way in which participants treat their recorded actions as "delicate", and therefore as having to be "anonymized", as well as the way in which participants implement their practical solutions for anonymization, by "erasing" or "anonymizing" the recording themselves within the course of their situated action. Adopting the perspective of conversation analysis and ethnomethodology, the paper explores these issues through a sequential analysis that identifies the particular moments within social interaction in which problems are pointed out by the participants and the way in which they are locally managed by them.

    Emotion detection with privacy preservation using adversarial learning

    The continuous monitoring of one's emotional state can provide valuable insights into one's psychological well-being and can serve as a foundation for diagnosis and treatment applications. Yet, due to privacy concerns, technologies that continuously monitor signals that reflect emotions, such as images, are met with strong skepticism. This thesis aims to design a privacy-preserving image generation algorithm that anonymizes the input image while maintaining emotion-related information. To do so, we identify landmarks in human faces and quantify the amount of emotion- and identity-based information carried by each landmark. We then propose a modification of a conditional generative adversarial network that transforms facial images in such a way that identity-based information is discarded while emotion-based information is retained. We evaluate the degree of emotion and identity content in the transformed images by performing emotion and identity classification on them. The proposed system is trained and evaluated on two publicly available datasets, the Yale Face Database and the Japanese Female Facial Expression dataset, and the generated images achieve moderate to high emotion classification accuracy and low identity classification accuracy.
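    The adversarial trade-off described here (retain emotion cues, discard identity cues) can be sketched as a loss composition in PyTorch; the tiny placeholder networks, loss weight, and random tensors below are assumptions for illustration, and the full conditional GAN objective of the thesis (including the discriminator) is omitted.

```python
# Hedged sketch of the emotion/identity trade-off only: the generator is trained
# to keep an emotion classifier accurate while making an identity classifier
# fail. Placeholder networks and data; not the thesis's conditional GAN setup.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 64 * 64),
                          nn.Tanh(), nn.Unflatten(1, (1, 64, 64)))
emotion_clf = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 7))    # 7 emotions
identity_clf = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 15))  # 15 subjects

ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
lam = 1.0  # weight of the identity-removal term


def generator_step(images, emotion_labels, identity_labels):
    """One generator update: preserve emotion, suppress identity."""
    fake = generator(images)
    emotion_loss = ce(emotion_clf(fake), emotion_labels)
    # Adversarial term: maximize the identity classifier's loss. The two
    # classifiers are treated as fixed here (only the generator is updated).
    identity_loss = ce(identity_clf(fake), identity_labels)
    loss = emotion_loss - lam * identity_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# Random tensors standing in for a batch of 64x64 grayscale face crops:
imgs = torch.randn(8, 1, 64, 64)
generator_step(imgs, torch.randint(0, 7, (8,)), torch.randint(0, 15, (8,)))
```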