
    Damage to Association Fiber Tracts Impairs Recognition of the Facial Expression of Emotion

    An array of cortical and subcortical structures has been implicated in the recognition of emotion from facial expressions. It remains unknown how these regions communicate as parts of a system to achieve recognition, but white matter tracts are likely critical to this process. We hypothesized that (1) damage to white matter tracts would be associated with recognition impairment and (2) the degree of disconnection of association fiber tracts [inferior longitudinal fasciculus (ILF) and/or inferior fronto-occipital fasciculus (IFOF)] connecting the visual cortex with emotion-related regions would negatively correlate with recognition performance. One hundred three patients with focal, stable brain lesions mapped onto a reference brain were tested on their recognition of six basic emotional facial expressions. Association fiber tracts from a probabilistic atlas were coregistered to the reference brain. Parameters estimating disconnection were entered into a general linear model to predict emotion recognition impairments, accounting for lesion size and cortical damage. Damage associated with the right IFOF significantly predicted an overall facial emotion recognition impairment and specific impairments for sadness, anger, and fear. One subject had a pure white matter lesion in the location of the right IFOF and ILF; he presented specific, unequivocal emotion recognition impairments. Additional analysis suggested that impairment in fear recognition can result from damage to the IFOF and not the amygdala. Our findings demonstrate the key role of white matter association tracts in the recognition of the facial expression of emotion and identify specific tracts that may be most critical.
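The analysis described above regresses emotion-recognition scores on a tract-disconnection parameter while controlling for lesion size. A minimal sketch of that kind of general linear model, fit by ordinary least squares on entirely synthetic data; the variable names and values are illustrative assumptions, not the study's actual measures:

```python
# Sketch of a general linear model: predict a recognition score from a
# "disconnection" parameter while controlling for lesion size.
# All data are synthetic and for illustration only.

def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination and partial pivoting."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * p
    for i in range(p - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, p))) / A[i][i]
    return beta

# Synthetic patients: [intercept, tract disconnection (0-1), lesion size (cm^3)]
X = [[1.0, 0.0, 10.0], [1.0, 0.2, 30.0], [1.0, 0.5, 25.0],
     [1.0, 0.7, 40.0], [1.0, 0.9, 35.0], [1.0, 1.0, 60.0]]
# Recognition accuracy falls mainly with disconnection in this toy data
y = [0.95, 0.90, 0.78, 0.70, 0.62, 0.58]

beta = ols(X, y)  # the disconnection coefficient beta[1] comes out negative
```

Entering lesion size as a covariate, as the abstract describes, is what separates a tract-specific effect from the general effect of having a larger lesion.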

    The color of smiling: computational synaesthesia of facial expressions

    This note gives a preliminary account of the transcoding, or rechanneling, problem between different stimuli as it arises in the natural-interaction and affective-computing fields. By considering a simple example, namely the color response of an affective lamp to a sensed facial expression, we frame the problem within an information-theoretic perspective. A full justification in terms of the Information Bottleneck principle promotes a latent affective space, hitherto surmised as an appealing and intuitive solution, as a suitable mediator between the different stimuli.
    Comment: Submitted to: 18th International Conference on Image Analysis and Processing (ICIAP 2015), 7-11 September 2015, Genova, Italy
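The lamp example can be illustrated by routing both stimuli through an assumed latent space: the recognized expression maps to a latent valence/arousal point, and that point maps to a color. The latent coordinates and the valence-to-hue rule below are invented for illustration and are not the paper's actual model:

```python
# Sketch of a latent affective space mediating expression -> color.
# The (valence, arousal) coordinates and the hue mapping are assumptions.
import colorsys

# Assumed latent coordinates (valence, arousal), each in [-1, 1]
LATENT = {
    "happiness": (0.8, 0.5),
    "sadness": (-0.7, -0.4),
    "anger": (-0.6, 0.8),
    "surprise": (0.3, 0.9),
}

def expression_to_rgb(expr):
    """Map an expression through its latent point to an RGB color:
    valence selects the hue (red = negative .. green = positive),
    arousal selects the saturation (calmer expressions look paler)."""
    valence, arousal = LATENT[expr]
    hue = (valence + 1) / 2 * 0.33   # 0.0 (red) .. 0.33 (green)
    sat = (arousal + 1) / 2
    return colorsys.hsv_to_rgb(hue, sat, 1.0)

r, g, b = expression_to_rgb("happiness")  # greenish: g dominates r and b
```

The point of the latent space, in the paper's information-theoretic framing, is that neither stimulus is translated directly into the other; both are compressed into a shared affective representation first.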

    A fuzzy-based approach for classifying students' emotional states in online collaborative work

    (c) 2016 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.
    Emotion awareness is becoming a key aspect of collaborative work in academia and at enterprises and organizations that use collaborative group work in their activities. Due to the pervasiveness of ICTs, most collaboration can be performed through communication media channels such as discussion forums, social networks, etc. The emotive state of users while they carry out activities such as collaborative learning at universities or project work at enterprises and organizations strongly influences their performance and can actually determine the final learning or project outcome. Therefore, monitoring users' emotive states and using that information to provide feedback and scaffolding is crucial. To this end, automated analysis of data collected from communication channels is a useful approach. In this paper, we propose an approach to process such collected data in order to classify and assess the emotional states of the users involved and to provide them with feedback according to their emotive states. To achieve this, a fuzzy approach is used to build the emotive classification system, which is fed with data from the ANEW dictionary, whose words are bound to emotional weights; these, in turn, are used to define the fuzzy sets in our proposal. The proposed fuzzy-based system has been evaluated using real data from collaborative learning courses in an academic context.
    Peer reviewed. Postprint (author's final draft)
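A minimal sketch of the fuzzy classification step described above, assuming an ANEW-style valence scale (1 = very negative to 9 = very positive) and triangular fuzzy sets over that axis; the tiny lexicon and the set shapes are illustrative stand-ins, not the paper's actual parameters:

```python
# Sketch: map a message's mean word valence to fuzzy memberships in
# emotive classes. Lexicon ratings and set shapes are assumptions.

# ANEW-like lexicon (valence 1-9); the real ANEW rates ~1000 words
VALENCE = {"fail": 2.0, "problem": 2.8, "ok": 5.5, "good": 7.5, "great": 8.2}

def tri(x, a, b, c):
    """Membership of x in a triangular fuzzy set with feet a, c and
    peak b; a == b or b == c yields a shoulder set."""
    if x == b:
        return 1.0
    if x < b:
        return 0.0 if x <= a else (x - a) / (b - a)
    return 0.0 if x >= c else (c - x) / (c - b)

# Fuzzy sets over the valence axis (a, b, c)
SETS = {
    "negative": (1.0, 1.0, 5.0),  # left shoulder
    "neutral":  (3.0, 5.0, 7.0),
    "positive": (5.0, 9.0, 9.0),  # right shoulder
}

def classify(words):
    """Return each emotive class's degree of membership for a message."""
    rated = [VALENCE[w] for w in words if w in VALENCE]
    x = sum(rated) / len(rated)  # mean valence of the rated words
    return {name: tri(x, *abc) for name, abc in SETS.items()}

memberships = classify(["fail", "problem"])  # dominated by "negative"
```

Unlike a hard classifier, the fuzzy output keeps graded memberships in every class, which is what allows the system to tailor the feedback it gives to a user's emotive state.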

    Multimodal Content Analysis for Effective Advertisements on YouTube

    The rapid advances in e-commerce and Web 2.0 technologies have greatly increased the impact of commercial advertisements on the general public. As a key enabling technology, a multitude of recommender systems exist that analyze user features and browsing patterns to recommend appealing advertisements to users. In this work, we seek to identify the attributes that characterize an effective advertisement and to recommend a useful set of features to aid the design and production of commercial advertisements. We analyze the temporal patterns in the multimedia content of advertisement videos, including auditory, visual, and textual components, and study their individual roles and synergies in the success of an advertisement. The objective of this work is to measure the effectiveness of an advertisement and to recommend a useful set of features to advertisement designers to make advertisements more successful and appealing to users. Our proposed framework employs the signal processing technique of cross-modality feature learning, where data streams from different components are used to train separate neural network models and are then fused together to learn a shared representation. Subsequently, a neural network model trained on this joint feature embedding is used as a classifier to predict advertisement effectiveness. We validate our approach using subjective ratings from a dedicated user study, the sentiment strength of online viewer comments, and a viewer opinion metric: the ratio of the Likes and Views received by each advertisement on an online platform.
    Comment: 11 pages, 5 figures, ICDM 201
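The fusion pipeline described above (separate per-modality networks whose embeddings are concatenated into a shared representation, then classified) can be sketched structurally as follows. The untrained one-layer encoders and random weights below only illustrate the data flow, not the paper's trained models, and the feature dimensions are arbitrary:

```python
# Structural sketch of late multimodal fusion for ad effectiveness.
# Encoders are untrained stand-ins for the per-modality networks.
import math
import random

random.seed(0)
EMBED = 4  # per-modality embedding size (an assumption)

def make_encoder(in_dim, out_dim=EMBED):
    """A stand-in 'network': one random linear layer with tanh."""
    W = [[random.uniform(-1, 1) for _ in range(in_dim)]
         for _ in range(out_dim)]
    def encode(x):
        return [math.tanh(sum(w * xi for w, xi in zip(row, x)))
                for row in W]
    return encode

# One encoder per modality, with different input feature sizes
audio_enc, visual_enc, text_enc = make_encoder(8), make_encoder(16), make_encoder(5)

def shared_representation(audio, visual, text):
    """Late fusion: concatenate the per-modality embeddings."""
    return audio_enc(audio) + visual_enc(visual) + text_enc(text)

def effectiveness_score(joint, w):
    """Logistic classifier on the joint embedding, scoring in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-sum(wi * ji for wi, ji in zip(w, joint))))

joint = shared_representation([0.1] * 8, [0.2] * 16, [0.5] * 5)
w = [random.uniform(-1, 1) for _ in range(3 * EMBED)]
score = effectiveness_score(joint, w)
```

In the actual framework the encoders and the classifier on the joint embedding would be trained jointly, so the shared representation captures cross-modal synergies rather than just a concatenation of independent features.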

    Multimodal music information processing and retrieval: survey and future challenges

    To improve performance in various music information processing tasks, recent studies exploit different modalities that capture diverse aspects of music. Such modalities include audio recordings, symbolic music scores, mid-level representations, motion and gestural data, video recordings, editorial or cultural tags, lyrics, and album cover art. This paper critically reviews the various approaches adopted in Music Information Processing and Retrieval and highlights how multimodal algorithms can help Music Computing applications. First, we categorize the related literature based on the application it addresses. Subsequently, we analyze existing information fusion approaches, and we conclude with the set of challenges that the Music Information Retrieval and Sound and Music Computing research communities should focus on in the coming years.