
    Fusion of Learned Multi-Modal Representations and Dense Trajectories for Emotional Analysis in Videos

    When designing a video affective content analysis algorithm, one of the most important steps is the selection of discriminative features for the effective representation of video segments. The majority of existing affective content analysis methods either use low-level audio-visual features or generate handcrafted higher-level representations based on these low-level features. In this work we propose to use deep learning methods, in particular convolutional neural networks (CNNs), in order to automatically learn and extract mid-level representations from raw data. To this end, we exploit the audio and visual modalities of videos by employing Mel-Frequency Cepstral Coefficients (MFCC) and color values in the HSV color space. We also incorporate dense-trajectory-based motion features in order to further enhance the performance of the analysis. By means of multi-class support vector machines (SVMs) and fusion mechanisms, music video clips are classified into one of four affective categories representing the four quadrants of the Valence-Arousal (VA) space. Results obtained on a subset of the DEAP dataset show (1) that higher-level representations perform better than low-level features, and (2) that incorporating motion information leads to a notable performance gain, independently of the chosen representation.
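The per-modality SVM classification with late fusion described in this abstract can be sketched roughly as follows. This is a minimal illustration with synthetic stand-ins for the learned audio and visual mid-level features; the feature dimensions, equal fusion weights, and SVM settings are assumptions for the sketch, not the paper's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips = 200
audio_feat = rng.normal(size=(n_clips, 64))    # stand-in for CNN audio (MFCC-based) features
visual_feat = rng.normal(size=(n_clips, 128))  # stand-in for CNN visual (HSV-based) features
labels = rng.integers(0, 4, size=n_clips)      # four VA-quadrant classes

# Train one multi-class SVM per modality.
svm_audio = SVC(probability=True, random_state=0).fit(audio_feat, labels)
svm_visual = SVC(probability=True, random_state=0).fit(visual_feat, labels)

# Late fusion: average per-class posterior estimates across modalities,
# then pick the quadrant with the highest fused score.
p_fused = 0.5 * svm_audio.predict_proba(audio_feat) + \
          0.5 * svm_visual.predict_proba(visual_feat)
pred = p_fused.argmax(axis=1)  # one VA-quadrant label per clip
```

Early fusion (concatenating the modality features before a single SVM) is the other common mechanism; the abstract's "fusion mechanisms" plausibly covers both.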

    Affective Recommendation of Movies Based on Selected Connotative Features

    The apparent difficulty in assessing emotions elicited by movies and the undeniably high variability in subjects' emotional responses to filmic content have recently been tackled by exploring film connotative properties: the set of shooting and editing conventions that help transmit meaning to the audience. Connotation provides an intermediate representation which exploits the objectivity of audiovisual descriptors to predict the subjective emotional reaction of single users. This is done without the need to record users' physiological signals or to employ other people's highly variable emotional ratings, relying instead on the inter-subjectivity of connotative concepts and on the knowledge of users' reactions to similar stimuli. This work extends previous work by extracting audiovisual and film-grammar descriptors and, driven by users' ratings of connotative properties, creates a shared framework where movie scenes are placed, compared, and recommended according to connotation. We evaluate the potential of the proposed system by asking users to assess the ability of connotation to suggest filmic content that targets their affective requests.

    Using Compressed Audio-visual Words for Multi-modal Scene Classification

    We present a novel approach to scene classification using combined audio signal and video image features and compare this methodology to scene classification results using each modality in isolation. Each modality is represented using summary features, namely Mel-Frequency Cepstral Coefficients (audio) and the Scale Invariant Feature Transform (SIFT) (video), within a multi-resolution bag-of-features model. Uniquely, we extend the classical bag-of-words approach over both audio and video feature spaces, whereby we introduce the concept of compressive sensing as a novel methodology for multi-modal fusion via audio-visual feature dimensionality reduction. We perform evaluation over a range of environments, showing performance that is both comparable to the state of the art (86%, over ten scene classes) and invariant to a ten-fold dimensionality reduction within the audio-visual feature space using our compressive representation approach.
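The compressive fusion step, at its core, amounts to concatenating the per-modality bag-of-words histograms and projecting the joint vector onto a much lower-dimensional space with a random measurement matrix. The sketch below illustrates that idea with a Gaussian random projection; the vocabulary sizes and compressed dimension are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
n_scenes = 100
audio_bow = rng.random((n_scenes, 500))   # stand-in MFCC codeword histograms
visual_bow = rng.random((n_scenes, 500))  # stand-in SIFT codeword histograms

# Concatenate into joint "audio-visual word" vectors, then compress
# with a random projection (the compressive-sensing measurement step).
joint = np.hstack([audio_bow, visual_bow])          # 1000-D joint histograms
proj = GaussianRandomProjection(n_components=100, random_state=0)
compressed = proj.fit_transform(joint)              # ten-fold reduction
```

A classifier trained on `compressed` would then stand in for one trained on `joint`; the abstract's claim is that this ten-fold reduction costs essentially no accuracy.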

    A Connotative Space for Supporting Movie Affective Recommendation

    The problem of relating media content to users' affective responses is addressed here. Previous work suggests that a direct mapping of audio-visual properties into emotion categories elicited by films is rather difficult, due to the high variability of individual reactions. To reduce the gap between the objective level of video features and the subjective sphere of emotions, we propose to shift the representation towards the connotative properties of movies, in a space inter-subjectively shared among users. Consequently, the connotative space allows affective descriptions of film videos to be defined, related, and compared on an equal footing. An extensive test involving a significant number of users watching famous movie scenes suggests that the connotative space can be related to the affective categories of a single user. We apply this finding to reach high performance in meeting users' emotional preferences.

    Searching, navigating, and recommending movies through emotions: A scoping review

    Movies offer viewers a broad range of emotional experiences, providing entertainment and meaning. Following the PRISMA-ScR guidelines, we reviewed the literature on digital systems designed to help users search and browse movie libraries and to offer recommendations based on emotional content. Our search yielded 83 eligible documents (published between 2000 and 2021). We identified 22 case studies, 34 empirical studies, 26 proofs of concept, and one theoretical paper. User transactions (e.g., ratings, tags) were the preferred source of information. The documents examined approached emotions from both categorical (n = 35) and dimensional (n = 18) perspectives, and nine documents combine both approaches. Although several authors are mentioned, the references used are frequently dated, and 12 documents do not mention the author or the model used. We identified 61 words related to emotion or affect. Documents presented on average 1.36 positive terms and 2.64 negative terms. Sentiment analysis is frequently used for emotion identification, followed by subjective evaluations (n = 15), movie low-level audio and visual features (n = 11), and face recognition technologies (n = 8). We discuss limitations and offer a brief review of current emotion models and research.

    Coping with Data Scarcity in Deep Learning and Applications for Social Good

    Recent years have witnessed an extremely fast evolution of the Computer Vision and Machine Learning fields: several application domains benefit from the newly developed technologies, and industries are investing a growing amount of money in Artificial Intelligence. Convolutional Neural Networks and Deep Learning have substantially contributed to the rise and diffusion of AI-based solutions, creating the potential for many disruptive new businesses. The effectiveness of Deep Learning models is grounded in the availability of a huge amount of training data. Unfortunately, data collection and labeling is an extremely expensive task in terms of both time and cost; moreover, it frequently requires the collaboration of domain experts. In the first part of the thesis, I will investigate methods for reducing the cost of data acquisition for Deep Learning applications in the relatively constrained industrial scenarios related to visual inspection. I will primarily assess the effectiveness of Deep Neural Networks in comparison with several classical Machine Learning algorithms that require a smaller amount of training data. Next, I will introduce a hardware-based data augmentation approach, which leads to a considerable performance boost by taking advantage of a novel illumination setup designed for this purpose. Finally, I will investigate the situation in which acquiring a sufficient number of training samples is not possible, and in particular the most extreme case: zero-shot learning (ZSL), the problem of multi-class classification when no training data is available for some of the classes. Visual features designed for image classification and trained offline have been shown to be useful for ZSL in generalizing towards classes not seen during training.
Nevertheless, I will show that recognition performance on unseen classes can be sharply improved by jointly learning ad hoc semantic embeddings (the pre-defined list of present and absent attributes that represent a class) and visual features, to increase the correlation between the two geometrical spaces and ease the metric-learning process for ZSL. In the second part of the thesis, I will present some successful applications of state-of-the-art Computer Vision, Data Analysis and Artificial Intelligence methods. I will illustrate some solutions developed during the 2020 Coronavirus Pandemic for controlling the disease evolution and for reducing virus spreading. I will describe the first publicly available dataset for the analysis of face-touching behavior, which we annotated and distributed, and I will illustrate an extensive evaluation of several computer vision methods applied to the produced dataset. Moreover, I will describe the privacy-preserving solution we developed for estimating the “Social Distance” and its violations, given a single uncalibrated image in unconstrained scenarios. I will conclude the thesis with a Computer Vision solution developed in collaboration with the Egyptian Museum of Turin for digitally unwrapping mummies by analyzing their CT scans, to support archaeologists during mummy analysis and to avoid the devastating and irreversible process of physically unwrapping the bandages to remove amulets and jewels from the body.
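The zero-shot classification scheme this abstract alludes to can be sketched in a few lines: a learned linear map embeds a visual feature into the attribute (semantic) space, and the test image is assigned to the unseen class whose attribute signature is most similar. All quantities below (the projection matrix, class signatures, and dimensions) are illustrative stand-ins, not the thesis's learned models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, n_attr = 32, 8
W = rng.normal(size=(n_feat, n_attr))       # stand-in for the learned visual-to-semantic map
class_attrs = rng.normal(size=(3, n_attr))  # attribute signatures of three unseen classes

x = rng.normal(size=(n_feat,))              # visual feature of a test image
s = x @ W                                   # embed the image into the semantic space

# Assign the nearest class signature by cosine similarity.
sims = class_attrs @ s / (np.linalg.norm(class_attrs, axis=1) * np.linalg.norm(s))
pred_class = int(sims.argmax())
```

The thesis's point is that learning `W` and the visual features jointly, so that the two spaces are well correlated, is what makes this nearest-signature step work on classes never seen in training.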

    Conflicts, integration, hybridization of subcultures: An ecological approach to the case of queercore

    This paper investigates the case study of queercore, providing a socio-historical analysis of its subcultural production in terms of what Michel Foucault called the archaeology of knowledge (1969). In particular, we will focus on: the self-definition of the movement; the conflicts between the two merged worlds of punk and queer culture; the “internal-subcultural” conflicts between queercore and punk, and between queercore and gay/lesbian music culture; and the political aspects of differentiation. In the conclusion, we will offer an innovative theoretical proposal for the interpretation of subcultures in ecological and semiotic terms, combining the contributions of the American sociologist Andrew Abbott and the Russian semiologist Jurij Michajlovič Lotman.