
    Tune in to your emotions: a robust personalized affective music player

    The emotional power of music is exploited in a personalized affective music player (AMP) that selects music for mood enhancement. A biosignal approach is used to measure listeners’ personal emotional reactions to their own music as input for affective user models. Regression and kernel density estimation are applied to model the physiological changes the music elicits. Using these models, personalized music selections based on an affective goal state can be made. The AMP was validated in real-world trials over the course of several weeks. Results show that our models can cope with noisy situations and handle large inter-individual differences in the music domain. The AMP augments music listening with techniques that enable automated affect guidance. Our approach provides valuable insights for affective computing and user modeling, for which the AMP is a suitable carrier application.
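    The abstract names kernel density estimation over per-song physiological responses as the basis for goal-directed selection. A minimal sketch of that idea, with entirely illustrative song names, response values, and goal scale (none are from the paper), might look like:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)

    # Hypothetical per-song physiological responses (e.g. normalized
    # skin-conductance change) collected over repeated listens.
    responses = {
        "song_a": rng.normal(0.8, 0.2, 30),   # tends to arouse
        "song_b": rng.normal(-0.5, 0.3, 30),  # tends to calm
    }

    # Fit one kernel density estimate per song over its observed responses.
    models = {song: gaussian_kde(obs) for song, obs in responses.items()}

    def select_song(goal):
        """Pick the song whose estimated response density is highest
        at the listener's affective goal state."""
        return max(models, key=lambda s: models[s](goal)[0])

    print(select_song(-0.4))  # a calming goal favors the calming song
    ```

    The density evaluated at the goal acts as a score of how reliably each song has moved this listener toward that state, which is one way the paper's combination of per-user models and goal states could be operationalized.
    
    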

    Ubiquitous Emotion Analytics and How We Feel Today

    Emotions are complicated. Humans feel deeply, and it can be hard to bring clarity to those depths, to communicate about feelings, or to understand others’ emotional states. Indeed, this emotional confusion is one of the biggest challenges of deciphering our humanity. However, a kind of hope might be on the horizon, in the form of emotion analytics: computerized tools for recognizing and responding to emotion. This analysis explores how emotion analytics may reflect the current status of humans’ regard for emotion. Emotion need no longer be a human sense of vague, indefinable feelings; instead, emotion is in the process of becoming a legible, standardized commodity that can be sold, managed, and altered to suit the needs of those in power. Emotional autonomy and authority can be surrendered to those technologies in exchange for perceived self-determination. Emotion analytics promises a new orderliness to the messiness of human emotions, suggesting that our current state of emotional uncertainty is inadequate and intolerable.

    Exploring social music behaviour: An investigation of music selection at parties

    This paper builds an understanding of how music is currently listened to by small (fewer than 10 individuals) to medium-sized (10 to 40 individuals) gatherings of people: how songs are chosen for playing, how the music fits in with other activities of group members, who supplies the music, and the hardware/software that supports song selection and presentation. This fine-grained context emerges from a qualitative analysis of a rich set of participant observations and interviews focusing on the selection of songs to play at social gatherings. We suggest features for software to support music playing at parties.

    Will mobile video become the killer application for 3G? - an empirical model for media convergence

    Mobile carriers have continually rolled out 3G mobile video applications to increase their revenue and profits. The presumption is that video is superior to the already successful SMS, ringtones, and pictures, and can create greater value for users. However, recent market surveys revealed contradicting results. Motivated by this discrepancy, we propose in this paper a parsimonious model for user acceptance of mobile entertainment as digital convergence. Integrating research on Information Systems, Flow, and Media Psychology, we take a unique approach to user acceptance of digital convergence: platform migration. Our key proposition is that the interaction between media types and platform-specific constraints is the key determinant of user evaluation. In particular, users' involvement in the media is determined by both the entertaining time span on the original platform and the attentional constraint of the new platform. A mismatch between the two spans can result in lower-level involvement, which in turn causes no, or even negative, user emotional responses. The model was tested with empirical data. We discuss the theoretical contributions, strategic and design implications, and future research directions derived from this theoretical framework.

    Social music in cars

    This paper builds an understanding of how music is currently experienced by a social group travelling together in a car - how songs are chosen for playing, how music both reflects and influences the group’s mood and social interaction, who supplies the music, and the hardware/software that supports song selection and presentation. This fine-grained context emerges from a qualitative analysis of a rich set of ethnographic data (participant observations and interviews) focusing primarily on the experience of in-car music on moderate-length and long trips. We suggest features and functionality for music software to enhance the social experience when travelling in cars, and prototype and test a user interface based on design suggestions drawn from the data.

    What does not happen: quantifying embodied engagement using NIMI and self-adaptors

    Previous research into the quantification of embodied intellectual and emotional engagement using non-verbal movement parameters has not yielded consistent results across different studies. Our research introduces NIMI (Non-Instrumental Movement Inhibition) as an alternative parameter. We propose that the absence of certain types of possible movements can be a more holistic proxy for cognitive engagement with media (in seated persons) than searching for the presence of other movements. Rather than analyzing total movement as an indicator of engagement, our research team distinguishes between instrumental movements (i.e. physical movements serving a direct purpose in the given situation) and non-instrumental movements, and investigates them in the context of the narrative rhythm of the stimulus. We demonstrate that NIMI occurs by showing that viewers’ movement levels entrain (i.e. synchronise) to the repeating narrative rhythm of a timed computer-presented quiz. Finally, we discuss the role of objective metrics of engagement in future context-aware analysis of human behaviour in audience research, interactive media, and responsive system and interface design.
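    The central claim here, that movement levels entrain to a repeating stimulus rhythm, can be illustrated with a toy calculation. The signals below are synthetic and the correlation measure is an assumption for illustration; the paper's actual features and statistics are not reproduced:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    period = 20                         # quiz question cycle, in samples
    t = np.arange(200)
    rhythm = (t % period) < 10          # 1 = question on screen, 0 = pause

    # Synthetic movement level: inhibited (low) while a question is on
    # screen (the NIMI effect), with observation noise added.
    movement = 1.0 - 0.6 * rhythm + rng.normal(0, 0.1, t.size)

    # Quantify entrainment as the correlation between the stimulus rhythm
    # and movement *inhibition* (inverted movement level).
    r = np.corrcoef(rhythm, 1.0 - movement)[0, 1]
    print(round(r, 2))
    ```

    A strongly positive correlation indicates that low-movement intervals line up with the engaging parts of the stimulus, which is one simple way to detect the inhibition pattern the abstract describes.
    
    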

    Information access tasks and evaluation for personal lifelogs

    Emerging personal lifelog (PL) collections contain permanent digital records of information associated with individuals’ daily lives. This can include materials such as emails received and sent, web content and other documents with which they have interacted, photographs, videos and music experienced passively or created, logs of phone calls and text messages, and also personal and contextual data such as location (e.g. via GPS sensors), persons and objects present (e.g. via Bluetooth) and physiological state (e.g. via biometric sensors). PLs can be collected by individuals over very extended periods, potentially running to many years. Such archives have many potential applications including helping individuals recover partially forgotten information, sharing experiences with friends or family, telling the story of one’s life, clinical applications for the memory impaired, and fundamental psychological investigations of memory. The Centre for Digital Video Processing (CDVP) at Dublin City University is currently engaged in the collection and exploration of applications of large PLs. We are collecting rich archives of daily life including textual and visual materials, and contextual data. An important part of this work is to consider how the effectiveness of our ideas can be measured in terms of metrics and experimental design. While these studies have considerable similarity with traditional evaluation activities in areas such as information retrieval and summarization, the characteristics of PLs mean that new challenges and questions emerge. We are currently exploring the issues through a series of pilot studies and questionnaires. Our initial results indicate that there are many research questions to be explored and that the relationships between personal memory, context and content for these tasks are complex and fascinating.
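    The abstract enumerates lifelog items as content plus capture context (time, location, people present). A minimal sketch of such a record and a context-based retrieval query, with hypothetical field names and sample data not taken from the CDVP archives, could be:

    ```python
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class LifelogItem:
        """One record in a personal lifelog: content plus capture context."""
        timestamp: datetime
        kind: str                     # e.g. "email", "photo", "sms", "gps"
        content: str                  # text body, file path, or sensor reading
        location: str | None = None   # e.g. derived from GPS
        people_present: list[str] = field(default_factory=list)  # e.g. via Bluetooth

    def items_with(log, person):
        """Retrieve items captured while a given person was present."""
        return [i for i in log if person in i.people_present]

    log = [
        LifelogItem(datetime(2008, 5, 1, 9, 0), "photo", "img_001.jpg",
                    location="Dublin", people_present=["alice"]),
        LifelogItem(datetime(2008, 5, 1, 12, 0), "email", "Lunch plans?"),
    ]
    print(len(items_with(log, "alice")))
    ```

    Queries over such context fields ("what did I read while alice was nearby?") are exactly the kind of information access task whose evaluation the abstract argues needs new metrics beyond standard retrieval benchmarks.
    
    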