
    How are you doing? : emotions and personality in Facebook

    User-generated content on social media sites is a rich source of information about latent variables of their users. Properly mining this content offers a shortcut to detecting users' emotions and personality without questionnaires, which in turn increases the application potential of personalized services that rely on knowledge of such latent variables. In this paper we contribute to this emerging domain by studying the relation between emotions expressed in approximately 1 million Facebook (FB) status updates and the users' age, gender, and personality. Additionally, we investigate the relation between emotion expression and the time when the status updates were posted. In particular, we find that female users are more emotional in their status posts than male users. We also find a relation between age and the sharing of emotions: older FB users share their feelings more often than younger users. In terms of seasons, people post about emotions less frequently in summer; December, by contrast, is a time when people are more likely to share their positive feelings with their friends. Finally, we examine the relation between users' personality and their posts, finding that users with an open personality express their emotions more frequently, while neurotic users are more reserved about sharing their feelings.

    A Trip to the Moon: Personalized Animated Movies for Self-reflection

    Self-tracked physiological and psychological data pose the challenge of presentation and interpretation. Insightful narratives for self-tracking data can motivate users toward constructive self-reflection. One powerful form of narrative that engages audiences across cultures and age groups is the animated movie. We collected a week of self-reported mood and behavior data from each user and created in Unity a personalized animation based on their data. We evaluated the impact of the videos in a randomized controlled trial with a non-personalized animated video as control. We found that personalized videos tend to be more emotionally engaging, encouraging more frequent and lengthier writing indicative of self-reflection about moods and behaviors, compared to non-personalized control videos.

    Current Challenges and Visions in Music Recommender Systems Research

    Music recommender systems (MRS) have experienced a boom in recent years, thanks to the emergence and success of online streaming services, which nowadays put almost all of the world's music at the user's fingertips. While today's MRS considerably help users find interesting music in these huge catalogs, MRS research still faces substantial challenges. In particular, when it comes to building, incorporating, and evaluating recommendation strategies that integrate information beyond simple user--item interactions or content-based descriptors and instead dig deep into the very essence of listener needs, preferences, and intentions, MRS research becomes a major endeavor and related publications quite sparse. The purpose of this trends-and-survey article is twofold. First, we identify and shed light on what we believe are the most pressing challenges MRS research is facing, from both academic and industry perspectives; we review the state of the art toward solving these challenges and discuss its limitations. Second, we detail possible future directions and visions we contemplate for the further evolution of the field. The article should therefore serve two purposes: giving the interested reader an overview of current challenges in MRS research and providing guidance for young researchers by identifying interesting yet under-researched directions in the field.

    Tune in to your emotions: a robust personalized affective music player

    The emotional power of music is exploited in a personalized affective music player (AMP) that selects music for mood enhancement. A biosignal approach is used to measure listeners' personal emotional reactions to their own music as input for affective user models. Regression and kernel density estimation are applied to model the physiological changes the music elicits. Using these models, personalized music selections can be made for a given affective goal state. The AMP was validated in real-world trials over the course of several weeks. Results show that our models can cope with noisy situations and handle large inter-individual differences in the music domain. The AMP augments music listening by enabling automated affect guidance. Our approach provides valuable insights for affective computing and user modeling, for which the AMP is a suitable carrier application.
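
    The abstract's core idea of selecting music via kernel density estimates of physiological reactions can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the song names, the skin-conductance deltas, and the single-dimensional goal state are all hypothetical, and `scipy.stats.gaussian_kde` stands in for whatever estimator the authors used.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    # Hypothetical physiological deltas (e.g., change in arousal proxy)
    # recorded while one listener played each of two songs many times
    calm_song = rng.normal(loc=-0.4, scale=0.2, size=200)   # tends to lower arousal
    upbeat_song = rng.normal(loc=0.5, scale=0.3, size=200)  # tends to raise arousal

    # Kernel density estimate of each song's physiological effect on this listener
    kde_calm = gaussian_kde(calm_song)
    kde_upbeat = gaussian_kde(upbeat_song)

    # Affective goal state: reduce arousal (target delta around -0.4);
    # pick the song whose estimated effect density is highest at the goal
    goal = -0.4
    scores = {"calm": kde_calm(goal)[0], "upbeat": kde_upbeat(goal)[0]}
    best = max(scores, key=scores.get)
    print(best)  # -> calm
    ```

    The per-listener densities are what make the selection personalized: the same goal state can pick different songs for listeners with different measured reactions.
    
    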

    Effect of Values and Technology Use on Exercise: Implications for Personalized Behavior Change Interventions

    Technology has recently been recruited in the war against the ongoing obesity crisis; however, the adoption of Health & Fitness applications for regular exercise remains a struggle. In this study, we present a unique, demographically representative dataset of 15k US residents that combines technology-use logs with surveys on moral views, human values, and emotional contagion. Combining these data, we build a holistic view of individuals to model their physical-exercise behavior. First, we show which values determine the adoption of Health & Fitness mobile applications, finding that users who prioritize the value of purity and de-emphasize the values of conformity, hedonism, and security are more likely to use such apps. Further, we achieve a weighted AUROC of .673 in predicting whether an individual exercises, and we show that application-usage data allows substantially better classification performance (.608) than basic demographics (.513) or internet browsing data (.546). We also find a strong link between exercise and respondents' socioeconomic status, as well as the value of happiness. Using these insights, we propose actionable design guidelines for persuasive technologies targeting health-behavior modification.
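
    The feature-set comparison reported above (app usage beating demographics at predicting exercise) can be sketched with a standard AUROC evaluation. Everything below is synthetic and illustrative: the labels and both feature matrices are simulated, with the app-usage features simply generated to be more informative, and logistic regression stands in for whatever classifier the authors used.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)
    n = 2000
    y = rng.integers(0, 2, size=n)  # 1 = exercises regularly (synthetic label)

    # Synthetic feature sets: app-usage logs are simulated with a stronger
    # class signal than demographics, mirroring the ranking in the abstract
    demographics = rng.normal(size=(n, 3)) + 0.1 * y[:, None]
    app_usage = rng.normal(size=(n, 3)) + 0.6 * y[:, None]

    def auroc(X, y):
        """Train a classifier on one feature set and return held-out AUROC."""
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        probs = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
        return roc_auc_score(y_te, probs)

    auc_app = auroc(app_usage, y)
    auc_demo = auroc(demographics, y)
    print(f"app usage: {auc_app:.3f}, demographics: {auc_demo:.3f}")
    ```

    Comparing feature sets on the same held-out split, as here, is what licenses the abstract's claim that one data source classifies "substantially better" than another.
    
    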

    Ubiquitous emotion-aware computing

    Emotions are a crucial element of personal and ubiquitous computing. What to sense and how to sense it, however, remain a challenge. This study explores the rare combination of speech, electrocardiogram, and a revised Self-Assessment Mannequin to assess people's emotions. Forty people watched 30 International Affective Picture System pictures in either an office or a living-room environment. Additionally, their personality traits neuroticism and extroversion and demographic information (i.e., gender, nationality, and level of education) were recorded. The resulting data were analyzed using both basic emotion categories and the valence--arousal model, which enabled a comparison between the two representations. The combination of heart rate variability and three speech measures (i.e., variability of the fundamental frequency of pitch (F0), intensity, and energy) explained 90% (p < .001) of the variance in participants' experienced valence--arousal, with 88% for valence and 99% for arousal (ps < .001). The six basic emotions could also be discriminated (p < .001), although the explained variance was much lower: 18-20%. Environment (or context), the personality trait neuroticism, and gender proved useful when a nuanced assessment of people's emotions was needed. Taken together, this study provides a significant leap toward robust, generic, and ubiquitous emotion-aware computing.
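
    The "explained variance" figures above come from regressing self-reported affect on physiological and speech predictors; the skeleton of such an analysis looks like the sketch below. All data here are simulated (the coefficients and noise level are invented to yield a high R², in the spirit of the reported 90%), and ordinary least squares stands in for whatever regression variant the study used.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    n = 40  # matching the study's 40 participants

    # Hypothetical predictors: heart rate variability plus the three
    # speech measures named in the abstract (F0 variability, intensity, energy)
    hrv = rng.normal(size=n)
    f0_var, intensity, energy = rng.normal(size=(3, n))
    X = np.column_stack([hrv, f0_var, intensity, energy])

    # Simulated arousal ratings driven mostly by the predictors, plus small noise
    arousal = 0.6 * hrv + 0.4 * f0_var + 0.3 * intensity + 0.2 * energy \
        + rng.normal(0, 0.1, size=n)

    model = LinearRegression().fit(X, arousal)
    r2 = model.score(X, arousal)  # R^2: proportion of variance explained
    print(f"R^2 = {r2:.2f}")
    ```

    `LinearRegression.score` returns R², so "explained 90% of the variance" corresponds to an R² of .90 for the fitted model.
    
    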