1,445 research outputs found

    A Computational Framework for Exploring and Mitigating Privacy Risks in Image-Based Emotion Recognition

    Ambulatory and image-based IoT devices have permeated our everyday life. Such technologies allow the continuous monitoring of individuals’ behavioral signals and expressions in everyday life, affording us new insights into their emotional states and transitions, thus paving the way to novel well-being and healthcare applications. Yet, the use of such technologies is met with strong skepticism because they deal with highly sensitive behavioral data, which regularly involve speech signals and facial images. Moreover, current image-based emotion recognition systems relying on deep learning techniques tend to preserve substantial information related to the identity of the user, which can be extracted or leaked and used against the user. In this thesis, we examine the interplay between emotion-specific and user-identity-specific information in image-based emotion recognition systems. We further propose a user anonymization approach that preserves emotion-specific information but eliminates user-dependent information from the convolutional kernels of convolutional neural networks (CNNs), therefore reducing user re-identification risks. We formulate an iterative adversarial learning problem, implemented with a multitask CNN, that minimizes the emotion classification loss while maximizing the user identification loss. The proposed system is evaluated on two datasets, achieving moderate-to-high emotion recognition accuracy and poor user identity recognition accuracy, outperforming existing baseline approaches. Implications from this study can inform the design of privacy-aware behavioral recognition systems that preserve facets of human behavior while concealing the identity of the user, and can be used in various IoT-empowered applications related to health, well-being, and education.
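    The adversarial objective sketched in this abstract can be illustrated with a minimal numerical example. The combined loss below is a hypothetical form (the function names, the trade-off weight `lam`, and the toy softmax outputs are assumptions, not taken from the thesis): the shared representation is trained to keep the emotion classification loss low while pushing the user identification loss high.

```python
import numpy as np

def cross_entropy(probs, label):
    # negative log-likelihood of the true class
    return -np.log(probs[label])

def adversarial_objective(emotion_probs, emotion_label,
                          identity_probs, identity_label,
                          lam=0.5):
    # Minimizing this objective drives the shared features to predict
    # emotion well (low emotion loss) while predicting identity poorly
    # (the identity loss enters with a minus sign, so lowering the
    # objective raises it). lam is an assumed trade-off weight.
    return (cross_entropy(emotion_probs, emotion_label)
            - lam * cross_entropy(identity_probs, identity_label))

# toy softmax outputs from a hypothetical multitask CNN:
# confident, correct emotion; near-chance identity
emotion_probs = np.array([0.7, 0.2, 0.1])
identity_probs = np.array([0.34, 0.33, 0.33])
loss = adversarial_objective(emotion_probs, 0, identity_probs, 0)
```

    In practice the identity branch would be a full classifier head and the sign flip would be applied to gradients flowing into the shared layers, but the scalar objective above captures the min-max structure.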

    Emotion detection with privacy preservation using adversarial learning

    The continuous monitoring of one's emotional state can provide valuable insights into their psychological well-being and can serve as a foundation for diagnosis and treatment applications. Yet, due to privacy concerns, technologies that continuously monitor signals that reflect emotions, such as images, are met with strong skepticism. This thesis aims to design a privacy-preserving image generation algorithm that anonymizes the input image while maintaining emotion-related information. To do so, we identify landmarks in human faces and quantify the amount of emotion- and identity-based information carried by each landmark. We then propose a modification of a conditional generative adversarial network that can transform facial images in such a way that the identity-based information is discarded while the emotion-based information is retained. We then evaluate the degree of emotion and identity content in the transformed images by performing emotion and identity classification on them. The proposed system is trained and evaluated on two publicly available datasets, namely the Yale Face Database and the Japanese Female Facial Expression dataset, and the generated images achieve moderate-to-high emotion classification accuracy and low identity classification accuracy.
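    The evaluation protocol described here, classifying the transformed images once for emotion and once for identity, reduces to comparing two accuracies. A minimal sketch (all predictions and labels below are fabricated toy data, not results from the thesis):

```python
import numpy as np

def accuracy(preds, labels):
    # fraction of predictions that match the ground-truth labels
    return float(np.mean(np.asarray(preds) == np.asarray(labels)))

# toy classifier outputs on hypothetical anonymized images:
# emotion is mostly recoverable, identity is near chance
emotion_acc = accuracy([0, 1, 2, 1, 0, 2], [0, 1, 2, 1, 0, 1])
identity_acc = accuracy([3, 0, 2, 2, 1, 0], [0, 1, 2, 3, 4, 5])

# a successful anonymization keeps the first high and the second low
assert emotion_acc > identity_acc
```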

    Privacy Intelligence: A Survey on Image Sharing on Online Social Networks

    Image sharing on online social networks (OSNs) has become an indispensable part of daily social activities, but it has also led to an increased risk of privacy invasion. The recent image leaks from popular OSN services and the abuse of personal photos using advanced algorithms (e.g. DeepFake) have prompted the public to rethink individual privacy needs when sharing images on OSNs. However, OSN image sharing itself is relatively complicated, and the systems currently in place to manage privacy in practice are labor-intensive yet fail to provide personalized, accurate and flexible privacy protection. As a result, a more intelligent environment for privacy-friendly OSN image sharing is in demand. To fill the gap, we contribute a systematic survey of 'privacy intelligence' solutions that target modern privacy issues related to OSN image sharing. Specifically, we present a high-level analysis framework based on the entire lifecycle of OSN image sharing to address the various privacy issues and solutions facing this interdisciplinary field. The framework is divided into three main stages: local management, online management and social experience. At each stage, we identify typical sharing-related user behaviors and the privacy issues generated by those behaviors, and review representative intelligent solutions. The resulting analysis describes an intelligent privacy-enhancing chain for closed-loop privacy management. We also discuss the challenges and future directions existing at each stage, as well as in publicly available datasets.
    Comment: 32 pages, 9 figures. Under review

    Privacy Enhanced Multimodal Neural Representations for Emotion Recognition

    Many mobile applications and virtual conversational agents now aim to recognize and adapt to emotions. To enable this, data are transmitted from users' devices and stored on central servers. Yet, these data contain sensitive information that could be used by mobile applications without the user's consent or, maliciously, by an eavesdropping adversary. In this work, we show how multimodal representations trained for a primary task, here emotion recognition, can unintentionally leak demographic information, which could override an opt-out option selected by the user. We analyze how this leakage differs in representations obtained from textual, acoustic, and multimodal data. We use an adversarial learning paradigm to unlearn the private information present in a representation and investigate the effect of varying the strength of the adversarial component on the primary task and on the privacy metric, defined here as the inability of an attacker to predict specific demographic information. We evaluate this paradigm on multiple datasets and show that we can improve the privacy metric while not significantly impacting the performance on the primary task. To the best of our knowledge, this is the first work to analyze how the privacy metric differs across modalities and how multiple privacy concerns can be tackled while still maintaining performance on emotion recognition.
    Comment: 8 pages
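    Adversarial unlearning of this kind is commonly implemented with a gradient-reversal layer between the shared representation and the demographic classifier. The sketch below is a generic illustration, not the paper's code; `lam`, standing in for the "strength of the adversarial component", is an assumed parameter name.

```python
import numpy as np

def grl_forward(x):
    # identity in the forward pass: the adversarial demographic
    # classifier sees the representation unchanged
    return x

def grl_backward(grad, lam=1.0):
    # backward pass: negate and scale the adversary's gradient, so the
    # encoder is updated to *increase* the demographic-prediction loss
    return -lam * grad

rep = np.array([0.2, -0.5, 1.0])
assert np.allclose(grl_forward(rep), rep)

# raising lam strengthens the adversarial component, trading primary-task
# performance against the privacy metric
g = np.array([1.0, -2.0, 0.5])
assert np.allclose(grl_backward(g, lam=0.5), [-0.5, 1.0, -0.25])
```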

    StarGAN-VC++: Towards Emotion Preserving Voice Conversion Using Deep Embeddings

    Voice conversion (VC) transforms an utterance to sound like another person without changing the linguistic content. A recently proposed generative adversarial network-based VC method, StarGANv2-VC, is very successful in generating natural-sounding conversions. However, the method fails to preserve the emotion of the source speaker in the converted samples. Emotion preservation is necessary for natural human-computer interaction. In this paper, we show that StarGANv2-VC fails to disentangle the speaker and emotion representations, which is pertinent to preserving emotion. Specifically, there is an emotion leakage from the reference audio used to capture the speaker embeddings during training. To counter the problem, we propose novel emotion-aware losses and an unsupervised method that exploits emotion supervision through latent emotion representations. The objective and subjective evaluations prove the efficacy of the proposed strategy over diverse datasets, emotions, gender, etc.
    Comment: Accepted in 12th Speech Synthesis Workshop (SSW), Satellite event in Interspeech 202
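    One plausible shape for the "emotion-aware losses" mentioned above is a distance between latent emotion embeddings of the source and converted utterances. The sketch below uses cosine distance purely as an assumed illustration; the embedding extractor and the exact loss form are not taken from the paper.

```python
import numpy as np

def emotion_consistency_loss(src_emb, conv_emb):
    # cosine distance between latent emotion embeddings of the source
    # and converted utterances; 0 when the emotion is fully preserved
    cos = np.dot(src_emb, conv_emb) / (
        np.linalg.norm(src_emb) * np.linalg.norm(conv_emb))
    return 1.0 - cos

src = np.array([0.8, 0.1, 0.4])
assert emotion_consistency_loss(src, src) < 1e-9   # same emotion: no penalty
assert emotion_consistency_loss(src, -src) > 1.0   # opposite direction: penalized
```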