
    VirtualIdentity: privacy-preserving user profiling

    User profiling from user-generated content (UGC) is a common practice that supports the business models of many social media companies. Existing systems require that the UGC be fully exposed to the module that constructs the user profiles. In this paper we show that it is possible to build user profiles without ever accessing the user's original data, and without exposing the trained machine learning models for user profiling, which are the intellectual property of the company, to the users of the social media site. We present VirtualIdentity, an application that uses secure multi-party cryptographic protocols to detect the age, gender and personality traits of users by classifying their user-generated text and personal pictures with trained support vector machine models in a privacy-preserving manner.
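
    The abstract names secure multi-party computation but not the specific protocol. As a rough illustration of how an SVM score can be computed without revealing either the user's features or the model weights, here is a toy sketch of a linear SVM scored over additively secret-shared values using Beaver multiplication triples; the trusted dealer, field size, and plain sign reveal are simplifying assumptions, not VirtualIdentity's actual construction.

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing (toy parameter)

def share(x):
    """Split x into two additive shares mod P."""
    r = secrets.randbelow(P)
    return r, (x - r) % P

def beaver_triple():
    """Dealer generates a multiplication triple a*b = c, shared between parties."""
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return share(a), share(b), share((a * b) % P)

def secure_mul(x_sh, y_sh):
    """Multiply two secret-shared values with a Beaver triple.
    In a real protocol each party opens only its masked differences."""
    (a0, a1), (b0, b1), (c0, c1) = beaver_triple()
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P      # d = x - a (opened publicly)
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P      # e = y - b (opened publicly)
    z0 = (c0 + d * b0 + e * a0 + d * e) % P    # party 0's share of x*y
    z1 = (c1 + d * b1 + e * a1) % P            # party 1's share of x*y
    return z0, z1

def secure_linear_svm_score(x, w, b):
    """Compute sign(w.x + b) on secret-shared inputs; only the sign is revealed."""
    acc0, acc1 = share(b % P)
    for xi, wi in zip(x, w):
        z0, z1 = secure_mul(share(xi % P), share(wi % P))
        acc0, acc1 = (acc0 + z0) % P, (acc1 + z1) % P
    score = (acc0 + acc1) % P
    return 1 if score < P // 2 else -1  # map field element back to a sign

# Toy check: w.x + b = 2*3 + 1*(-1) + 4 = 9 > 0
print(secure_linear_svm_score([2, 1], [3, -1], 4))  # -> 1
```

    In a real deployment the triples would come from an offline phase, each party would hold only its own shares, and even the final sign would typically be obtained through a secure comparison rather than reconstructing the score.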

    Personalized Speech-driven Expressive 3D Facial Animation Synthesis with Style Control

    Different people have different facial expressions while speaking emotionally. A realistic facial animation system should consider such identity-specific speaking styles and facial idiosyncrasies to achieve a high degree of naturalness and plausibility. Existing approaches to personalized speech-driven 3D facial animation either use one-hot identity labels or rely on person-specific models, which limits their scalability. We present a personalized speech-driven expressive 3D facial animation synthesis framework that models identity-specific facial motion as latent representations (called styles), and synthesizes novel animations given a speech input with the target style for various emotion categories. Our framework is trained in an end-to-end fashion and has a non-autoregressive encoder-decoder architecture with three main components: expression encoder, speech encoder and expression decoder. Since expressive facial motion includes both identity-specific style and speech-related content information, the expression encoder first disentangles facial motion sequences into style and content representations. Then, both the speech encoder and the expression decoder take the extracted style information as input to update transformer layer weights during the training phase. Our speech encoder also extracts speech phoneme label and duration information to achieve better synchrony within the non-autoregressive synthesis mechanism. Through detailed experiments, we demonstrate that our approach produces temporally coherent facial expressions from input speech while preserving the speaking styles of the target identities.
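
    To make the data flow concrete, here is a minimal PyTorch-style sketch of a style-conditioned non-autoregressive encoder-decoder. All module names and dimensions are hypothetical, the phoneme/duration pathway is omitted, and where the paper uses the style code to update transformer layer weights, this toy version simply adds the style code to the layer inputs.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper does not publish these exact sizes.
D_AUDIO, D_MOTION, D_MODEL, N_HEADS = 80, 64, 256, 4

class StyleConditionedAnimator(nn.Module):
    """Sketch: expression encoder yields a style code that conditions
    the speech encoder and the expression decoder."""
    def __init__(self):
        super().__init__()
        # Expression encoder: pools a reference motion clip into a style code.
        self.expr_enc = nn.GRU(D_MOTION, D_MODEL, batch_first=True)
        enc_layer = nn.TransformerEncoderLayer(D_MODEL, N_HEADS, batch_first=True)
        self.speech_enc = nn.TransformerEncoder(enc_layer, num_layers=2)
        dec_layer = nn.TransformerEncoderLayer(D_MODEL, N_HEADS, batch_first=True)
        self.expr_dec = nn.TransformerEncoder(dec_layer, num_layers=2)
        self.audio_proj = nn.Linear(D_AUDIO, D_MODEL)
        self.motion_head = nn.Linear(D_MODEL, D_MOTION)

    def forward(self, audio_feats, ref_motion):
        # Style code = final hidden state of the expression encoder.
        _, style = self.expr_enc(ref_motion)      # (1, B, D_MODEL)
        style = style.transpose(0, 1)             # (B, 1, D_MODEL)
        # Additive conditioning stands in for style-dependent weights.
        h = self.speech_enc(self.audio_proj(audio_feats) + style)
        h = self.expr_dec(h + style)
        return self.motion_head(h)                # per-frame expression parameters

# Shape check: 2 clips, 100 audio frames, 30 reference motion frames.
model = StyleConditionedAnimator()
out = model(torch.randn(2, 100, D_AUDIO), torch.randn(2, 30, D_MOTION))
print(out.shape)  # torch.Size([2, 100, 64])
```

    The non-autoregressive layout is visible here: the decoder emits all frames in one pass over the conditioned sequence rather than feeding previous frames back in, which is what the phoneme duration information helps align in the actual system.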

    EMERALD—Exercise Monitoring Emotional Assistant

    The increase in the elderly population in today’s society entails the need for new policies to maintain an adequate level of care without excessively increasing social spending. One of the possible options is to promote home care for the elderly. In this sense, this paper introduces a personal assistant designed to help elderly people in their activities of daily living. This system, called EMERALD, comprises a sensing platform and different mechanisms for emotion detection and decision-making that, combined, produce a cognitive assistant that engages users in Active Aging. The contribution of the paper is twofold: on the one hand, the integration of low-cost sensors that, among other characteristics, allows for detecting the emotional state of the user at an affordable cost; on the other hand, an automatic activity suggestion module that engages the users, mainly oriented to the elderly, in a healthy lifestyle. Moreover, by continuously correcting itself using the on-line monitoring carried out through its integrated sensors, the system becomes personalized and, in broad terms, emotionally intelligent. A functional prototype is currently being tested in a daycare centre in the northern area of Portugal, where preliminary tests show positive results.

    This research was partially funded by the Fundação para a Ciência e a Tecnologia (FCT) within project UID/CEC/00319/2019 and Post-Doc Grant SFRH/BPD/102696/2014 (Angelo Costa). This work is also partially funded by the MINECO/FEDER TIN2015-65515-C4-1-R and the RISEWISE (RISE Women with disabilities In Social Engagement) EU project under Agreement No. 690874.
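
    The activity suggestion module described above, which engages users and is continuously corrected through on-line monitoring, can be pictured as a feedback loop. The sketch below is purely illustrative; the emotion labels, activity catalogue, and epsilon-greedy update are assumptions, not EMERALD's actual components.

```python
import random

# Hypothetical emotion-to-activity catalogue for illustration only.
ACTIVITIES = {
    "sad":   ["video call with family", "listen to favourite music"],
    "bored": ["light gardening", "memory card game"],
    "calm":  ["short walk", "stretching exercises"],
}

class ActivitySuggester:
    """Toy epsilon-greedy personalization loop: accepted suggestions are
    reinforced, rejected ones are suggested less often over time."""
    def __init__(self, epsilon=0.2):
        self.scores = {e: {a: 0.0 for a in acts} for e, acts in ACTIVITIES.items()}
        self.epsilon = epsilon

    def suggest(self, emotion):
        options = self.scores[emotion]
        if random.random() < self.epsilon:        # occasionally explore
            return random.choice(list(options))
        return max(options, key=options.get)      # otherwise exploit best-rated

    def feedback(self, emotion, activity, accepted):
        """On-line correction step driven by the user's response."""
        self.scores[emotion][activity] += 1.0 if accepted else -1.0

suggester = ActivitySuggester()
activity = suggester.suggest("sad")               # emotion comes from the sensors
suggester.feedback("sad", activity, accepted=True)
```

    In the real system the emotion label would come from the sensing platform rather than being passed in by hand, and the decision-making mechanism is presumably richer than a single score table, but the loop of sense, suggest, and correct is the same shape.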