
    Finding emotional-laden resources on the World Wide Web

    Some content in multimedia resources can depict or evoke certain emotions in users. The aim of Emotional Information Retrieval (EmIR), and of our research, is to identify knowledge about emotion-laden documents and to use these findings in a new kind of World Wide Web information service that allows users to search and browse by emotion. Our prototype, called Media EMOtion SEarch (MEMOSE), is largely based on the results of research on emotive music pieces, images and videos. In order to index both evoked and depicted emotions in these three media types and to make them searchable, we work with a controlled vocabulary, slide controls to adjust the emotions’ intensities, and broad folksonomies to identify and separate the correct resource-specific emotions. This separation of so-called power tags is based on the tag distribution, which follows either an inverse power law (only one emotion was recognized) or an inverse-logistic shape (two or three emotions were recognized); both distributions are well known in information science. MEMOSE consists of a tool for tagging basic emotions with the help of slide controls, a processing device to separate power tags, and a retrieval component consisting of a search interface (for any topic in combination with one or more emotions) and a results screen. For each media type, the results screen shows two separately ranked lists of items (one for depicted and one for felt emotions), displaying resource thumbnails ranked by the mean values of emotion intensity. In an evaluation of the MEMOSE prototype, study participants described our EmIR system as an enjoyable Web 2.0 service.
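    The power-tag separation the abstract describes lends itself to a small illustration. The Python sketch below is hypothetical, not MEMOSE's actual code: it fits the ranked tag-frequency curve of one resource to both an inverse power law and an inverse-logistic function, keeps the better-fitting shape, and derives the power tags from it. The function names, the 50% head threshold, and the cap of three tags are illustrative assumptions.

    # Hypothetical power-tag separation sketch: fit the ranked tag-frequency
    # curve to an inverse power law and to an inverse-logistic function, and
    # let the better-fitting shape decide how many power tags there are.
    import numpy as np
    from scipy.optimize import curve_fit

    def inverse_power_law(rank, c, beta):
        # f(rank) = c * rank^(-beta): a single emotion dominates the folksonomy
        return c * rank ** (-beta)

    def inverse_logistic(rank, c, a, b):
        # Flat head followed by a steep drop: two or three co-dominant emotions
        return c / (1.0 + np.exp(a * (rank - b)))

    def power_tags(tag_counts):
        """tag_counts: dict mapping emotion tag -> how often users assigned it."""
        tags, counts = zip(*sorted(tag_counts.items(), key=lambda kv: -kv[1]))
        ranks = np.arange(1, len(counts) + 1, dtype=float)
        y = np.asarray(counts, dtype=float)

        def fit_error(model, p0):
            try:
                params, _ = curve_fit(model, ranks, y, p0=p0, maxfev=5000)
                return np.sum((model(ranks, *params) - y) ** 2)
            except (RuntimeError, TypeError):   # fit failed or too few points
                return np.inf

        err_power = fit_error(inverse_power_law, p0=[y[0], 1.0])
        err_logistic = fit_error(inverse_logistic, p0=[y[0], 1.0, 2.0])

        if err_power <= err_logistic:
            return [tags[0]]                    # power law: one power tag
        # Inverse-logistic: keep the flat head (here, tags within 50% of the
        # top count, capped at three) as the resource's power tags.
        head = [t for t, c in zip(tags, counts) if c >= 0.5 * counts[0]]
        return head[:3]

    # Example usage with made-up folksonomy counts:
    print(power_tags({"joy": 40, "surprise": 3, "anger": 2, "fear": 1}))   # power-law-like
    print(power_tags({"joy": 40, "sadness": 37, "anger": 4, "fear": 2}))   # logistic-like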

    Music video affective understanding using feature importance analysis

    Music videos are a popular form of entertainment. Novel indexing and retrieval approaches based on the affective cues contained in music videos are becoming increasingly attractive to users, and music video affective analysis and understanding is one of the most popular topics in the current multimedia community. In this paper, we propose a novel feature importance analysis approach to select the most representative features for arousal and valence modeling. Compared with the state-of-the-art work by Zhang on music video affective analysis, our main contributions are the following: (1) three additional affect-related features are extracted to enrich the feature set and to exploit their correlation with arousal and valence; (2) all extracted features are ordered via feature importance analysis, and an optimal feature subset is then selected; (3) different regression methods are compared for arousal and valence modeling in order to find the best-fitting estimation function. Our method achieves 33.39% and 42.17% reductions in mean absolute error compared with Zhang's method. Experimental results demonstrate that our proposed method considerably improves music video affective understanding.
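    The three-step pipeline in the abstract (order features by importance, select a subset, compare regressors by mean absolute error) could look roughly as sketched below. The data, the random-forest importance ranking, and the particular regressors (ridge, SVR, random forest) are placeholder assumptions using common scikit-learn components; the paper's actual features, ranker, and regression methods may differ.

    # Illustrative sketch of importance-based feature selection for arousal
    # modeling; the same procedure would be repeated for valence.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))          # 200 clips x 12 affect-related features
    arousal = 0.8 * X[:, 0] + 0.4 * X[:, 3] + rng.normal(scale=0.2, size=200)

    # (1)/(2a) Order all features by importance (here: random-forest importances).
    forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, arousal)
    order = np.argsort(forest.feature_importances_)[::-1]

    def cv_mae(model, X, y):
        # Cross-validated mean absolute error (lower is better).
        return -cross_val_score(model, X, y, cv=5,
                                scoring="neg_mean_absolute_error").mean()

    # (2b) Grow the feature subset in importance order; keep the size with lowest MAE.
    best_k, best_mae = 1, np.inf
    for k in range(1, len(order) + 1):
        mae = cv_mae(Ridge(), X[:, order[:k]], arousal)
        if mae < best_mae:
            best_k, best_mae = k, mae
    X_sel = X[:, order[:best_k]]

    # (3) Compare regression methods on the selected subset.
    for name, model in [("ridge", Ridge()), ("svr", SVR()),
                        ("forest", RandomForestRegressor(n_estimators=200,
                                                         random_state=0))]:
        print(f"{name}: MAE = {cv_mae(model, X_sel, arousal):.3f} "
              f"with {best_k} features")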