
    HARC-New Hybrid Method with Hierarchical Attention Based Bidirectional Recurrent Neural Network with Dilated Convolutional Neural Network to Recognize Multilabel Emotions from Text

    We present a new hybrid approach for capturing tacit semantic awareness and qualitative meaning in short texts. The goal of the proposed technique is to use deep learning to recognize multilabel textual emotion with less training time, higher accuracy, and a simpler network structure. The proposed hybrid deep learning HARC architecture for multilabel textual emotion recognition combines hierarchical attention with a Convolutional Neural Network (CNN), a Bidirectional Gated Recurrent Unit (BiGRU), and a Bidirectional Long Short-Term Memory (BiLSTM) network, and it outperforms the compared approaches. BiGRU and BiLSTM capture individual context features and adequately handle long-range dependencies. A dilated CNN replicates the extracted features by forwarding vector instances to better support the hierarchical attention layer, and extracts richer text information through higher coupling correlations. Our method focuses on the most important features to overcome the limitations of handling context and semantics. Across a variety of datasets, the proposed HARC model outperformed traditional machine learning approaches as well as comparable deep learning models by a margin of 1%. HARC achieved 82.50 percent accuracy on IMDB, 98.00 percent on toxic-comment data, 92.31 percent on Cornflower, and 94.60 percent on emotion recognition data. It performs better than baseline models and other CNN- and RNN-based hybrids. In future work, we will address more levels of text emotion in longer and more complex texts.
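
    The abstract names the building blocks but not their exact wiring or hyperparameters, so the following is only a minimal sketch of a HARC-style pipeline in PyTorch. The layer ordering (embedding, BiGRU, BiLSTM, dilated convolution, single additive attention head, classifier), all dimensions, and the six-label output are assumptions for illustration, not the paper's configuration.

    import torch
    import torch.nn as nn

    class HARCSketch(nn.Module):
        def __init__(self, vocab_size=20000, emb_dim=128, hidden=64, num_labels=6):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim)
            # Bidirectional recurrent layers capture context in both directions
            # and help with long-range dependencies.
            self.bigru = nn.GRU(emb_dim, hidden, bidirectional=True, batch_first=True)
            self.bilstm = nn.LSTM(2 * hidden, hidden, bidirectional=True, batch_first=True)
            # Dilated convolution widens the receptive field over the recurrent features.
            self.dilated_conv = nn.Conv1d(2 * hidden, 2 * hidden, kernel_size=3,
                                          dilation=2, padding=2)
            # A single additive attention head stands in for the hierarchical attention layer.
            self.attn = nn.Linear(2 * hidden, 1)
            self.classifier = nn.Linear(2 * hidden, num_labels)

        def forward(self, token_ids):                      # (batch, seq)
            x = self.embed(token_ids)                      # (batch, seq, emb)
            x, _ = self.bigru(x)                           # (batch, seq, 2*hidden)
            x, _ = self.bilstm(x)                          # (batch, seq, 2*hidden)
            x = self.dilated_conv(x.transpose(1, 2)).transpose(1, 2)
            weights = torch.softmax(self.attn(x), dim=1)   # attention over time steps
            pooled = (weights * x).sum(dim=1)              # (batch, 2*hidden)
            return self.classifier(pooled)                 # multilabel logits

    # For multilabel emotion targets, the logits would be trained with
    # nn.BCEWithLogitsLoss rather than softmax cross-entropy.
    logits = HARCSketch()(torch.randint(0, 20000, (4, 50)))
    print(logits.shape)  # torch.Size([4, 6])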

    Transfer Learning for Speech and Language Processing

    Transfer learning is a vital technique that generalizes models trained for one setting or task to other settings or tasks. For example, in speech recognition, an acoustic model trained for one language can be used to recognize speech in another language with little or no re-training data. Transfer learning is closely related to multi-task learning (cross-lingual vs. multilingual) and has traditionally been studied under the name `model adaptation'. Recent advances in deep learning show that transfer learning becomes much easier and more effective with the high-level abstract features learned by deep models, and that the `transfer' can be conducted not only between data distributions and data types, but also between model structures (e.g., shallow nets and deep nets) or even model types (e.g., Bayesian models and neural models). This review paper summarizes recent prominent research in this direction, particularly for speech and language processing. We also report some results from our group and highlight the potential of this very interesting research field.
    Comment: 13 pages, APSIPA 201
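
    The cross-lingual example in the abstract (reusing an acoustic model for a new language) is commonly realized by keeping a pretrained network's feature layers and retraining only a new output layer. The sketch below illustrates that pattern; the layer shapes, the 40-dimensional filterbank input, and the phone-set sizes are hypothetical, not taken from the paper.

    import torch
    import torch.nn as nn

    def make_acoustic_model(num_phones):
        # Stand-in for a deep acoustic model: shared feature layers plus a phone classifier.
        return nn.Sequential(
            nn.Linear(40, 256), nn.ReLU(),    # 40-dim filterbank features (assumed)
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, num_phones),
        )

    # Pretend this model was already trained on a source language.
    source_model = make_acoustic_model(num_phones=45)

    # Transfer: copy the high-level feature layers into a target-language model,
    # replace the output layer for the target phone set, and freeze the rest.
    target_model = make_acoustic_model(num_phones=38)
    target_model[0].load_state_dict(source_model[0].state_dict())
    target_model[2].load_state_dict(source_model[2].state_dict())
    for layer in (target_model[0], target_model[2]):
        for p in layer.parameters():
            p.requires_grad = False

    # Only the new output layer is updated, which is why little or no
    # target-language re-training data is needed.
    optimizer = torch.optim.Adam(
        (p for p in target_model.parameters() if p.requires_grad), lr=1e-3)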