1,909 research outputs found

    Learning to Behave: Internalising Knowledge

    Finding emotional-laden resources on the World Wide Web

    Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR), and of our research, is to identify knowledge about emotional-laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research regarding emotive music pieces, images and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions’ intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on a tag distribution that follows either an inverse power law (only one emotion was recognized) or an inverse-logistic shape (two or three emotions were recognized). Both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, and a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. The latter shows two separately ranked lists of items for each media type (depicted and felt emotions), displaying thumbnails of resources ranked by the mean values of intensity. In the evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.
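    The abstract's core mechanism is the separation of "power tags" from a per-resource tag distribution. The following is a minimal sketch of that idea, assuming a simple frequency-based heuristic: the function name, the 2x dominance rule and the plateau threshold are illustrative assumptions, not MEMOSE's actual distribution-fitting procedure.
```python
# Hedged sketch: given emotion-tag counts for one resource, keep only the
# dominant ("power") tags. A power-law-like distribution yields one tag;
# an inverse-logistic-like plateau yields two or three.
from collections import Counter

def power_tags(tag_counts: Counter, max_power_tags: int = 3) -> list[str]:
    """Return the dominant emotion tags for one resource (illustrative heuristic)."""
    ranked = tag_counts.most_common()
    if not ranked:
        return []
    top = ranked[0][1]
    # Power-law-like: the first tag clearly outweighs the rest -> one power tag.
    if len(ranked) == 1 or top >= 2 * ranked[1][1]:
        return [ranked[0][0]]
    # Inverse-logistic-like: a plateau of comparably frequent tags, then a sharp drop.
    selected = [ranked[0][0]]
    for tag, count in ranked[1:max_power_tags]:
        if count >= 0.5 * top:   # illustrative plateau threshold
            selected.append(tag)
    return selected

# Example: 'joy' and 'love' form a plateau, 'sadness' drops off sharply.
print(power_tags(Counter({"joy": 40, "love": 35, "sadness": 4})))  # ['joy', 'love']
```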

    Sensoring a Generative System to Create User-Controlled Melodies

    The automatic generation of music is an emergent field of research that has attracted the attention of many researchers, and a broad spectrum of state-of-the-art work now exists in this field. Many systems have been designed to facilitate collaboration between humans and machines in the generation of valuable music. This research proposes an intelligent system that generates melodies under the supervision of a user, who guides the process through a mechanical device. The device captures the user's movements and translates them into a melody. The system is based on a Case-Based Reasoning (CBR) architecture, enabling it to learn from previous compositions and to improve its performance over time. Through the device, the user can adapt the composition to their preferences, adjusting the pace of the melody to a specific context or generating lower- or higher-pitched notes. Additionally, the device can automatically resist some of the user's movements; in this way, the user learns how to create a good melody. Several experiments were conducted to analyze the quality of the system and the melodies it generates. According to the users' validation, the proposed system can generate music that follows a concrete style. Most users also believed that the partial control exerted by the device was essential to the quality of the generated music.
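    To make the CBR idea concrete, here is a minimal retrieve-adapt-retain sketch mapping device movements to notes. The case structure (speed, height, MIDI note), the nearest-neighbour retrieval and the octave-shift adaptation rule are assumptions for illustration only; the paper's actual case representation is not specified in the abstract.
```python
# Hedged sketch of a CBR loop: retrieve the closest stored movement->note case,
# adapt its note to the current movement, and retain the result.
from dataclasses import dataclass

@dataclass
class Case:
    speed: float       # movement speed captured by the device (arbitrary units)
    height: float      # vertical position of the device, 0.0 (low) .. 1.0 (high)
    midi_note: int     # note that was played for this movement
    duration: float    # note duration in beats

def retrieve(case_base: list[Case], speed: float, height: float) -> Case:
    """Return the stored case whose sensor reading is closest to the query."""
    return min(case_base, key=lambda c: (c.speed - speed) ** 2 + (c.height - height) ** 2)

def adapt(case: Case, height: float) -> Case:
    """Shift the retrieved note up or down according to the current device height."""
    offset = round((height - case.height) * 12)   # one octave across the full range
    return Case(case.speed, height, case.midi_note + offset, case.duration)

# Retain: after the user accepts the note, store it so the system improves over time.
case_base = [Case(0.2, 0.3, 60, 1.0), Case(0.8, 0.7, 67, 0.5)]
new_case = adapt(retrieve(case_base, speed=0.75, height=0.9), height=0.9)
case_base.append(new_case)
print(new_case.midi_note)   # 69
```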

    Computational and Psycho-Physiological Investigations of Musical Emotions

    The ability of music to stir human emotions is a well-known fact (Gabrielsson & Lindström, 2001). However, the manner in which music contributes to those experiences remains unclear. One of the main reasons is the large number of syndromes that characterise emotional experiences. Another is their subjective nature: musical emotions can be affected by memories, individual preferences and attitudes, among other factors (Scherer & Zentner, 2001). But can the same music induce similar affective experiences in all listeners, largely independently of acculturation or personal bias? A considerable corpus of literature has consistently reported that listeners agree rather strongly about what type of emotion is expressed in a particular piece, or even in particular moments or sections (Juslin & Sloboda, 2001). These studies suggest that musical features encode important characteristics of affective experiences, by pointing to the influence of various structural factors of music on emotional expression. Unfortunately, the nature of these relationships is complex, and it is common to find rather vague and contradictory descriptions. This thesis presents a novel methodology to analyse the dynamics of emotional responses to music. It consists of a computational investigation, based on spatiotemporal neural networks sensitive to structural aspects of music, which "mimic" human affective responses to music and permit the prediction of new ones. The dynamics of emotional responses to music are investigated as computational representations of perceptual processes (psychoacoustic features) and self-perception of physiological activation (peripheral feedback). Modelling and experimental results provide evidence suggesting that spatiotemporal patterns of sound resonate with affective features underlying judgements of subjective feelings. A significant part of the listener's affective response is predicted from a set of six psychoacoustic features of sound - tempo, loudness, multiplicity (texture), power spectrum centroid (mean pitch), sharpness (timbre) and mean STFT flux (pitch variation) - and one physiological variable - heart rate. This work contributes new evidence and insights to the study of musical emotions, with particular relevance to the music perception and emotion research communities.
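    As a rough illustration of the modelling setup described above, the sketch below shows a small recurrent ("spatiotemporal") network that maps a frame-by-frame sequence of the six psychoacoustic features plus heart rate to a continuous affective rating. The layer sizes, single output dimension and random (untrained) weights are assumptions for illustration; the thesis's actual architecture and training procedure may differ.
```python
# Hedged sketch: tiny recurrent network predicting an affective rating per frame
# from 7 inputs (6 psychoacoustic features + heart rate).
import numpy as np

FEATURES = ["tempo", "loudness", "multiplicity", "spectral_centroid",
            "sharpness", "stft_flux", "heart_rate"]          # 7 inputs per frame

class TinyRNN:
    def __init__(self, n_in=len(FEATURES), n_hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.1, (n_hidden, n_in))       # input -> hidden
        self.W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden
        self.w_out = rng.normal(0, 0.1, n_hidden)               # hidden -> rating

    def predict(self, frames: np.ndarray) -> np.ndarray:
        """frames: (time, 7) feature matrix -> (time,) predicted affective ratings."""
        h = np.zeros(self.W_rec.shape[0])
        ratings = []
        for x in frames:
            h = np.tanh(self.W_in @ x + self.W_rec @ h)         # carry temporal context
            ratings.append(self.w_out @ h)
        return np.array(ratings)

# Example: 100 frames of standardised features produce a time series of ratings.
model = TinyRNN()
print(model.predict(np.random.default_rng(1).normal(size=(100, 7))).shape)  # (100,)
```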