
    Knowledge-based Framework for Intelligent Emotion Recognition in Spontaneous Speech

    Abstract. Automatic speech emotion recognition plays an important role in intelligent human-computer interaction. Identifying emotion in natural, day-to-day, spontaneous conversational speech is difficult because the emotions expressed by the speaker are often not as prominent as in acted speech. In this paper, we propose a novel spontaneous speech emotion recognition framework that makes use of the available knowledge. The framework is motivated by the observation that there is significant disagreement amongst human annotators when they annotate spontaneous speech; the disagreement is largely reduced when they are provided with additional knowledge related to the conversation. The proposed framework uses contexts (derived from linguistic content) and knowledge of the time lapse between spoken utterances in an audio call to reliably recognize the current emotion of the speaker in spontaneous audio conversations. Our experimental results demonstrate a significant improvement in the performance of spontaneous speech emotion recognition using the proposed framework.
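    The abstract does not spell out how the time-lapse knowledge is combined with the acoustic evidence. As a purely illustrative sketch (not the paper's actual framework), one simple way to fold a previous utterance's emotion into the current prediction is to discount it by the elapsed time; the decay constant `tau_s` and the fusion rule below are assumptions:

```python
import math

def fuse_with_context(acoustic_post, prev_post, time_lapse_s, tau_s=10.0):
    """Blend the acoustic emotion posterior for the current utterance with
    the posterior of the previous utterance, where the weight of the
    previous utterance decays exponentially with the time elapsed.
    Illustrative only: the decay constant and rule are assumptions."""
    w = math.exp(-time_lapse_s / tau_s)  # older context counts for less
    fused = {e: (1.0 - w) * acoustic_post[e] + w * prev_post[e]
             for e in acoustic_post}
    z = sum(fused.values())              # renormalize to a distribution
    return {e: p / z for e, p in fused.items()}

# A recent, strongly "angry" previous utterance pulls the prediction up.
acoustic = {"neutral": 0.5, "angry": 0.3, "happy": 0.2}
previous = {"neutral": 0.1, "angry": 0.8, "happy": 0.1}
fused = fuse_with_context(acoustic, previous, time_lapse_s=5.0)
```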

    Unsteady CFD Studies for Gust Modeling in Store Separation

    Aircraft and different store configurations must be certified before flight. There is a small but finite probability of the aircraft being hit by a gust of wind at the time of store separation. Most store separation analyses for airborne platforms do not consider gust phenomena because of their complexity and inadequate knowledge of gust behavior. A dedicated task group was recently created to understand gust-related phenomena in aircraft safety. Of the various gust cases, the vertical gust is the most severe and can cause instability leading to store collision. The situation is compounded for a long and heavy store because of its large projected area. No test procedures exist for the simulation or practical testing of gust. A study was conducted to identify a test procedure for gust simulation using MIL standard data and Indian conditions. The current paper studies the emergency release condition, in which a vertical gust hits the aircraft, to ascertain safe separation. A discrete gust with a 1-cosine shape and specified length and amplitude is imposed at the inflow boundary. The gust is allowed to sweep the computational domain containing the airborne platform and the store. The computed trajectory of the store, its miss distance, and its angular rates in the presence of gust are analysed in this work to study the safe separation of the store from the airborne platform. Simulations are also carried out to determine the effect of gust at the highest dynamic pressure in the flight envelope.
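    The discrete 1-cosine gust mentioned above has a standard closed form, u(x) = (U/2)(1 - cos(2*pi*x/L)), for penetration distance x within the gust length L and peak amplitude U. The following sketch evaluates that profile; the function name and sample values are illustrative, not from the paper:

```python
import math

def one_cosine_gust(x: float, amplitude: float, length: float) -> float:
    """Vertical gust velocity at penetration distance x for a discrete
    1-cosine gust of given peak amplitude and gust length.
    Zero outside the gust region [0, length]."""
    if x < 0.0 or x > length:
        return 0.0
    return 0.5 * amplitude * (1.0 - math.cos(2.0 * math.pi * x / length))

# Example: a 15 m/s gust over a 100 m gust length peaks at mid-length
# and vanishes at both edges of the gust region.
profile = [one_cosine_gust(x, amplitude=15.0, length=100.0)
           for x in (0.0, 25.0, 50.0, 100.0)]
```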

    TCS-ILAB - MediaEval 2015: Affective Impact of Movies and Violent Scene Detection

    Abstract. This paper describes the participation of TCS-ILAB in the MediaEval 2015 Affective Impact of Movies Task (which includes Violent Scene Detection). We propose to detect the affective impact and the violent content of video clips using two different classification methodologies: a Bayesian Network approach and an Artificial Neural Network approach. Experiments with different combinations of features make up the five run submissions.

    System description: Bayesian network based valence, arousal and violence detection. We describe the use of a Bayesian network (BN) for the detection of violence/non-violence and induced affect. Here, we learn the relationships between attributes of different types of features using a BN. Individual attributes such as colorfulness, shot length, or zero-crossing rate form the nodes of the BN. This includes the valence, arousal and violence labels, which are treated as categorical attributes. The primary objective of the BN-based approach is to discover cause-effect relationships between attributes, which are otherwise difficult to learn using other learning methods. This analysis helps in gaining knowledge of the internal processes of feature generation with respect to the labels in question, i.e. violence, valence and arousal. In this work, we have used a publicly available Bayesian network learner [1], which gives us the network structure describing the dependencies between attributes. Using the discovered structure, we compute the conditional probabilities for the root and its cause attributes. Further, we perform inferencing of valence, arousal and violence values for new observations using the junction-tree algorithm supported in the Dlib-ml library. As will be shown later, conditional probability computation is a relatively simple task for a network having few nodes, which is the case for the image features.
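    For a network with few nodes, the conditional probability tables can be estimated by simple counting over the labeled data. The toy two-parent structure, attribute values, and add-one smoothing below are illustrative assumptions, not the structure learned by the tool in [1]:

```python
from collections import Counter, defaultdict

# Assumed toy structure for illustration: two discrete feature attributes
# are parents of the "violence" label node.
#   colorfulness -> violence <- shot_length

def fit_cpt(samples):
    """Estimate P(violence | colorfulness, shot_length) by counting
    co-occurrences, with add-one (Laplace) smoothing over the two labels."""
    counts = defaultdict(Counter)
    for colorfulness, shot_length, violence in samples:
        counts[(colorfulness, shot_length)][violence] += 1
    cpt = {}
    for parents, c in counts.items():
        total = c["violent"] + c["nonviolent"] + 2  # +2 from smoothing
        cpt[parents] = {
            "violent": (c["violent"] + 1) / total,
            "nonviolent": (c["nonviolent"] + 1) / total,
        }
    return cpt

samples = [  # hypothetical labeled shots
    ("high", "short", "violent"),
    ("high", "short", "violent"),
    ("high", "long", "nonviolent"),
    ("low", "long", "nonviolent"),
    ("low", "long", "nonviolent"),
    ("low", "short", "violent"),
]
cpt = fit_cpt(samples)
posterior = cpt[("high", "short")]  # P(violence | high colorfulness, short shots)
```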
    However, as the attribute set grows, the number of parameters, namely the conditional probability tables, grows exponentially. Considering that our major focus is on determining the violence, valence and arousal values with respect to unknown values of different features, we apply the D-separation principle.

    (Copyright is held by the author/owner(s). MediaEval 2015 Workshop, Sept. 14-15, 2015, Wurzen, Germany.)

    Artificial neural network based valence, arousal and violence detection. This section describes the system that uses Artificial Neural Networks (ANN) for classification. Two different methodologies are employed for the two subtasks. For both subtasks, the developed systems extract features from the video shots (including the audio) prior to classification.

    Feature extraction. The proposed system uses different sets of features, either from the available feature sets (audio, video, and image) provided with the MediaEval dataset, or from our own set of extracted audio features. The designed system uses the audio, image, and video features either separately or in combination. The audio features are extracted with the openSMILE toolkit [4] from the audio of the video shots. openSMILE computes low-level descriptors (LLDs), followed by statistical functionals, to extract a meaningful and informative set of audio features. The feature set contains the following LLDs: intensity, loudness, 12 MFCCs, pitch (F0), voicing probability, F0 envelope, 8 LSFs (Line Spectral Frequencies), and zero-crossing rate. Delta regression coefficients are computed from these LLDs, and the following functionals are applied to the LLDs and the delta coefficients: maximum and minimum value and their relative positions within the input, range, arithmetic mean, two linear regression coefficients with linear and quadratic error, standard deviation, skewness, kurtosis, quartiles, and three inter-quartile ranges.
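    A few of the listed functionals, applied to a single LLD contour, can be sketched as follows. openSMILE computes these (and many more) internally, so this hand-rolled subset is only illustrative:

```python
import statistics

def functionals(contour):
    """A small subset of the statistical functionals above, applied to one
    low-level-descriptor contour (e.g. a per-frame F0 track)."""
    n = len(contour)
    mx, mn = max(contour), min(contour)
    return {
        "mean": statistics.fmean(contour),
        "stddev": statistics.pstdev(contour),
        "range": mx - mn,
        # relative position of the extrema within the input, in [0, 1]
        "max_rel_pos": contour.index(mx) / (n - 1),
        "min_rel_pos": contour.index(mn) / (n - 1),
    }

f0 = [110.0, 120.0, 150.0, 140.0, 100.0]  # toy F0 contour in Hz
feats = functionals(f0)
```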
    openSMILE, in two different configurations, allows extraction of 988 or 384 audio features (the 384-feature configuration was earlier used for the Interspeech 2009 Emotion Challenge [5]). Both of these are reduced to a lower dimension after feature selection.

    Classification. For classification, we have used an ANN trained with the development set samples available for each subtask. As data imbalance exists for the violence detection task (only 4.4% of the samples are violent), we have taken an equal number of training samples from both classes.
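    The balanced-sampling step described above can be sketched as random undersampling of the majority class; the helper below is hypothetical, not the authors' code:

```python
import random

def balance_classes(samples, labels, seed=0):
    """Random undersampling: keep every minority-class sample and an
    equally sized random subset of the majority class."""
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    if len(pos) <= len(neg):
        minority, min_label, majority, maj_label = pos, 1, neg, 0
    else:
        minority, min_label, majority, maj_label = neg, 0, pos, 1
    rng = random.Random(seed)  # fixed seed for a reproducible subset
    kept = rng.sample(majority, len(minority))
    X = minority + kept
    y = [min_label] * len(minority) + [maj_label] * len(kept)
    return X, y

# e.g. 4 violent vs. 96 non-violent shots -> 4 of each after balancing
X, y = balance_classes(list(range(100)), [1] * 4 + [0] * 96)
```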