
    Temporal Attention-Gated Model for Robust Sequence Classification

    Typical techniques for sequence classification are designed for well-segmented sequences that have been edited to remove noisy or irrelevant parts. Therefore, such methods cannot be easily applied to the noisy sequences expected in real-world applications. In this paper, we present the Temporal Attention-Gated Model (TAGM), which integrates ideas from attention models and gated recurrent networks to better handle noisy or unsegmented sequences. Specifically, we extend the concept of attention models to measure the relevance of each observation (time step) of a sequence. We then use a novel gated recurrent network to learn the hidden representation for the final prediction. An important advantage of our approach is interpretability, since the temporal attention weights provide a meaningful measure of the salience of each time step in the sequence. We demonstrate the merits of our TAGM approach, both for prediction accuracy and interpretability, on three different tasks: spoken digit recognition, text-based sentiment analysis, and visual event recognition. Comment: Accepted by CVPR 201
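    The attention-gating idea summarized in this abstract can be sketched in a few lines. The gating equation below is an illustrative assumption, not the paper's exact formulation: a scalar attention weight a_t in (0, 1) per time step controls how strongly each observation updates the hidden state, so low-salience steps leave the state nearly unchanged.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def attention_gated_rnn(x, W, U, w_att):
        """Sketch of an attention-gated recurrence (illustrative, not TAGM's exact math).

        x: (T, d_in) sequence of observations.
        Returns the final hidden state and the per-step attention weights,
        which serve as an interpretable salience score for each time step.
        """
        T, _ = x.shape
        d_h = W.shape[0]
        h = np.zeros(d_h)
        attn = np.zeros(T)
        for t in range(T):
            a_t = sigmoid(w_att @ x[t])           # salience of this observation
            h_cand = np.tanh(W @ x[t] + U @ h)    # candidate state update
            h = (1.0 - a_t) * h + a_t * h_cand    # gated interpolation: a_t ~ 0 skips the step
            attn[t] = a_t
        return h, attn
    ```

    The interpolation form makes interpretability concrete: inspecting `attn` after a forward pass shows which time steps the model treated as relevant.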

    Predicting the effects of suspenseful outcome for automatic storytelling

    Automatic story generation systems usually deliver suspense by including an adverse outcome in the narrative, on the assumption that the adversity will trigger a set of emotions that can be categorized as suspenseful. However, existing systems do not implement solutions relying on predictive models of the outcome's impact on readers. A formulation of the emotional effects of the outcome would allow storytelling systems to better measure suspense and to discriminate among potential outcomes based on their emotional impact. This paper reports on a computational model of the effect of different outcomes on perceived suspense. A preliminary analysis was carried out to identify and evaluate the affective responses to a set of outcomes commonly used in suspense. Then, a study was run to quantify and compare the suspense and affective responses evoked by the set of outcomes. Next, a predictive model relying on the analyzed data was computed, and an evolutionary algorithm for automatically choosing the best outcome was implemented. The system was tested against human subjects' reported suspense and electromyography responses to the addition of the generated outcomes to narrative passages. The results show a high correlation between the predicted impact of the computed outcome and the reported suspense.
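    The selection step described here, an evolutionary search for the outcome with the highest predicted suspense, can be sketched as below. All names are hypothetical, and the fitness function stands in for the paper's learned predictive model:

    ```python
    import random

    def evolve_outcome(candidates, predict_suspense, generations=20, pop_size=8, seed=0):
        """Illustrative evolutionary loop: keep the highest-scoring half of the
        population each generation (elitism) and refill by resampling candidates.

        candidates: list of candidate outcomes (e.g. outcome descriptions).
        predict_suspense: callable scoring an outcome (stand-in for a learned model).
        """
        rng = random.Random(seed)
        pop = [rng.choice(candidates) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=predict_suspense, reverse=True)
            survivors = pop[: pop_size // 2]          # elitism: best half always survives
            pop = survivors + [rng.choice(candidates)  # crude "mutation" by resampling
                               for _ in range(pop_size - len(survivors))]
        return max(pop, key=predict_suspense)
    ```

    In the paper's setting the fitness would come from the predictive model fit to the affective-response data; here any scoring function can be plugged in.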

    Binary classification models of the semantic coloring of texts

    Introduction: The purpose of the research is to compare different types of recurrent neural network architectures, namely the long short-term memory (LSTM) and gated recurrent unit (GRU) architectures and the convolutional neural network, and to explore their performance on the example of binary text classification. Material and Methods: To achieve this, the research evaluates the performance of these popular deep-learning approaches on a dataset of film reviews labeled with positive or negative opinions. This real-world dataset was used to train neural network models using software implementations. Results and Discussion: The research focuses on the implementation of a recurrent neural network for the binary classification of the dataset and explores different architectures, approaches, and hyperparameters to determine the best model for optimal performance. The software implementation enabled the evaluation of various quality metrics, allowing the performance of the proposed approaches to be compared. In addition, the research explores various hyperparameters, such as learning rate, batch size, and regularization methods, to determine their impact on model performance. Conclusion: In general, the research provides valuable insights into the performance of neural networks in binary text classification and highlights the importance of careful architecture selection and hyperparameter tuning to achieve optimal performance.
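    The hyperparameters the abstract highlights (learning rate, batch size, regularization) can be made concrete with a minimal training loop. The sketch below deliberately substitutes a logistic-regression classifier over precomputed features for the LSTM/GRU/CNN models the study actually compares, since the binary setup and the role of each hyperparameter are the same:

    ```python
    import numpy as np

    def train_binary_classifier(X, y, lr=0.1, batch_size=4, l2=1e-3, epochs=300, seed=0):
        """Mini-batch gradient descent for L2-regularized logistic regression.

        lr: learning rate; batch_size: examples per update; l2: regularization
        strength -- the three hyperparameters the study tunes.
        """
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        b = 0.0
        n = len(y)
        for _ in range(epochs):
            idx = rng.permutation(n)               # reshuffle each epoch
            for start in range(0, n, batch_size):
                batch = idx[start:start + batch_size]
                p = 1.0 / (1.0 + np.exp(-(X[batch] @ w + b)))   # predicted P(y=1)
                grad_w = X[batch].T @ (p - y[batch]) / len(batch) + l2 * w
                grad_b = float(np.mean(p - y[batch]))
                w -= lr * grad_w
                b -= lr * grad_b
        return w, b

    def predict(X, w, b):
        """Threshold the logit at 0 to produce binary labels."""
        return (X @ w + b > 0).astype(int)
    ```

    Swapping the linear model for an LSTM, GRU, or CNN changes the forward pass and gradients but leaves the same knobs (learning rate, batch size, regularization) to tune, which is the comparison the study performs.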