
    Summarization of Spanish Talk Shows with Siamese Hierarchical Attention Networks

    In this paper, we present an approach to the summarization of Spanish talk shows. Our approach is based on the use of Siamese Neural Networks over the transcriptions of the show audio. Specifically, we propose to use Hierarchical Attention Networks to select the most relevant sentences of each speaker about a given topic in the show, in order to summarize that speaker's opinion on the topic. We train these networks in a siamese way to determine whether a summary is appropriate or not. A previous evaluation of this approach on a summarization task over English newspapers achieved performance similar to other state-of-the-art systems. In the absence of enough transcribed or recognized speech data to train our system for talk-show summarization in Spanish, we acquire a large corpus of document-summary pairs from Spanish newspapers and use it to train our system. We choose the newspaper domain due to its high similarity with the topics addressed in talk shows. A preliminary evaluation of our summarization system on Spanish TV programs shows the adequacy of the proposal. This work has been partially supported by the Spanish MINECO and FEDER funds under project AMIC (TIN2017-85854-C4-2-R). The work of Jose-Angel Gonzalez is financed by Universitat Politecnica de Valencia under grant PAID-01-17.
    González-Barba, JÁ.; Hurtado Oliver, LF.; Segarra Soriano, E.; García-Granada, F.; Sanchís Arnal, E. (2019). Summarization of Spanish Talk Shows with Siamese Hierarchical Attention Networks. Applied Sciences, 9(18), 1-13. https://doi.org/10.3390/app9183836
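    To make the architecture concrete, below is a minimal sketch, assuming PyTorch and illustrative layer sizes, of a hierarchical attention encoder shared between a document and a candidate summary and scored in a siamese fashion. It is not the authors' implementation; all class names, dimensions, and the cosine scoring head are our own assumptions.

```python
# Hedged sketch of a Siamese Hierarchical Attention encoder for extractive
# summarization: a shared word-level and sentence-level attention encoder maps
# a document and a candidate summary to vectors, and a similarity score is
# used to judge whether the summary is appropriate. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    """Additive attention pooling over a sequence of hidden states."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                   # h: (batch, seq, dim)
        scores = self.context(torch.tanh(self.proj(h)))     # (batch, seq, 1)
        weights = F.softmax(scores, dim=1)
        return (weights * h).sum(dim=1)                     # (batch, dim)

class HierarchicalAttentionEncoder(nn.Module):
    """Words -> sentence vectors -> document vector, with attention at both levels."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_gru = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.word_attn = Attention(2 * hid_dim)
        self.sent_gru = nn.GRU(2 * hid_dim, hid_dim, batch_first=True, bidirectional=True)
        self.sent_attn = Attention(2 * hid_dim)

    def forward(self, docs):                                # docs: (batch, n_sents, n_words)
        b, s, w = docs.shape
        words = self.embed(docs.view(b * s, w))             # (b*s, w, emb_dim)
        word_h, _ = self.word_gru(words)
        sent_vecs = self.word_attn(word_h).view(b, s, -1)   # (b, s, 2*hid_dim)
        sent_h, _ = self.sent_gru(sent_vecs)
        return self.sent_attn(sent_h)                       # (b, 2*hid_dim)

class SiameseScorer(nn.Module):
    """Shared encoder applied to document and candidate summary; cosine score."""
    def __init__(self, vocab_size):
        super().__init__()
        self.encoder = HierarchicalAttentionEncoder(vocab_size)

    def forward(self, document, summary):
        d = self.encoder(document)
        s = self.encoder(summary)
        return F.cosine_similarity(d, s)   # higher score = more appropriate summary
```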

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems, such as text summarization, information extraction, and information retrieval, including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the WWW and digital libraries; and (iv) evaluation of NLP systems.

    Hypermedia-based discovery for source selection using low-cost linked data interfaces

    Evaluating federated Linked Data queries requires consulting multiple sources on the Web. Before a client can execute queries, it must discover data sources and determine which ones are relevant. Federated query execution research focuses on the actual execution, while data source discovery is often only marginally discussed, even though it has a strong impact on selecting the sources that contribute to the query results. Therefore, the authors introduce a discovery approach for Linked Data interfaces based on hypermedia links and controls, and apply it to federated query execution with Triple Pattern Fragments. In addition, the authors identify quantitative metrics to evaluate this discovery approach. This article describes generic evaluation measures and results for their concrete approach. With low-cost data summaries as seed, interfaces to eight large real-world datasets can discover each other within 7 minutes. Hypermedia-based client-side querying shows a promising gain of up to 50% in execution time, but demands algorithms that visit a higher number of interfaces to improve result completeness. A rough sketch of this style of hypermedia-driven discovery follows.
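    The sketch below illustrates the general idea of hypermedia-driven source discovery, assuming Python with rdflib: starting from a seed interface, follow dereferenceable links in each interface's metadata until no new interfaces appear or a budget is exhausted. It is not the authors' implementation; in particular, the predicates treated as discovery links (rdfs:seeAlso, void:subset) and the seed URL are illustrative assumptions.

```python
# Hedged sketch of hypermedia-based discovery of Linked Data interfaces via a
# breadth-first traversal over links found in each interface's RDF metadata.
from collections import deque
from rdflib import Graph, URIRef
from rdflib.namespace import RDFS, VOID

def discover_sources(seed_urls, max_interfaces=50):
    """Breadth-first traversal over hypermedia links between interfaces."""
    discovered, queue = set(seed_urls), deque(seed_urls)
    while queue and len(discovered) < max_interfaces:
        url = queue.popleft()
        g = Graph()
        try:
            g.parse(url)              # content negotiation decides the RDF format
        except Exception:
            continue                  # unreachable or non-RDF interfaces are skipped
        # Candidate links to other interfaces: illustrative predicate choices.
        for _, p, o in g:
            if p in (RDFS.seeAlso, VOID.subset) and isinstance(o, URIRef):
                target = str(o)
                if target not in discovered:
                    discovered.add(target)
                    queue.append(target)
    return discovered

# Example usage with a hypothetical seed interface URL:
# sources = discover_sources(["https://example.org/fragments/dataset"])
```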

    Relationship between accounting benefits and ERP user satisfaction in the context of the fourth industrial revolution

    The importance of corporate social responsibility is shaping investment decisions and entrepreneurial actions from diverse perspectives. The rapid growth of SMEs has tremendous impacts on the environment. Nonetheless, the economic emergence plan of Cameroon has prompted government support of SMEs through diverse projects. This saw economic growth increase to 3.8% and unemployment drop to 4.3%, driven by the expansion of private-sector investments. The dilemma that necessitated this study is the response strategy of SME operators towards environmental sustainability. This study thus seeks to examine the effects of entrepreneurial intentions and actions on environmental sustainability. The research is a conclusive case-study design supported by the philosophical underpinnings of objectivism ontology and positivism epistemology. Data were sourced from four hundred (400) SME operators purposively sampled from the Centre and Littoral regions of Cameroon using a structured questionnaire. Data were analysed using the Structural Equation Modelling technique with the aid of statistical packages including SPSS 24 and AMOS 23 (a sketch of this kind of model is shown below). The study revealed that entrepreneurial action has a weak, positive, statistically significant impact on environmental sustainability, whereas entrepreneurial intention has a strong, positive, statistically significant effect on environmental sustainability. Entrepreneurial intention comprised self-efficacy and perceived control, whereas entrepreneurial action involved entrepreneurial alertness and uncertainty. The study concludes that entrepreneurs in Cameroon have sustainable intentions to protect the environment but that the current actions taken are inadequate. This research recommends that entrepreneurs enhance their efforts toward attaining a state of genuine sustainability.
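    As a rough illustration of the kind of structural equation model described, the following is a minimal sketch using the semopy package in Python rather than AMOS: two latent predictors (entrepreneurial intention and action) with the stated indicators, regressed onto environmental sustainability. This is not the study's actual analysis script, and the survey column names are hypothetical.

```python
# Hedged SEM sketch: latent intention (self-efficacy, perceived control) and
# latent action (alertness, uncertainty) predicting environmental sustainability.
import pandas as pd
import semopy

MODEL_DESC = """
# measurement model
intention =~ self_efficacy + perceived_control
action    =~ alertness + uncertainty
# structural model
sustainability ~ intention + action
"""

def fit_sem(survey: pd.DataFrame):
    """Fit the SEM and return the parameter estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(survey)                 # maximum-likelihood estimation by default
    return model.inspect()

# Example usage with a hypothetical CSV export of the questionnaire data:
# estimates = fit_sem(pd.read_csv("sme_survey.csv"))
# print(estimates)
```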

    Recent advances on recursive filtering and sliding mode design for networked nonlinear stochastic systems: A survey

    Copyright © 2013 Jun Hu et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Some recent advances on the recursive filtering and sliding mode design problems for nonlinear stochastic systems with network-induced phenomena are surveyed. The network-induced phenomena under consideration mainly include missing measurements, fading measurements, signal quantization, probabilistic sensor delays, sensor saturations, randomly occurring nonlinearities, and randomly occurring uncertainties. With respect to these network-induced phenomena, developments on the filtering and sliding mode design problems are systematically reviewed. In particular, some recent results on recursive filtering for time-varying nonlinear stochastic systems and on sliding mode design for time-invariant nonlinear stochastic systems are given. Finally, conclusions are drawn and some potential directions for future research are pointed out. This work was supported in part by the National Natural Science Foundation of China under Grant nos. 61134009, 61329301, 61333012, 61374127 and 11301118, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant no. GR/S27658/01, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
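    To ground one of the network-induced phenomena mentioned above, here is a minimal numpy sketch, not taken from the surveyed papers, of recursive (Kalman-style) filtering when measurements may be randomly missing: a Bernoulli indicator models packet dropout, and the correction step is skipped when a measurement is lost. System matrices, noise levels, and the dropout rate are made-up assumptions.

```python
# Hedged sketch: recursive filtering with randomly missing measurements.
import numpy as np

def filter_with_missing_measurements(A, C, Q, R, x0, P0, measurements, received):
    """Run a recursive filter over (z_k, gamma_k) pairs; gamma_k = 0 means dropped."""
    x, P = x0, P0
    estimates = []
    for z, gamma in zip(measurements, received):
        # prediction step (always performed)
        x = A @ x
        P = A @ P @ A.T + Q
        if gamma:  # correction only when the measurement actually arrived
            S = C @ P @ C.T + R
            K = P @ C.T @ np.linalg.inv(S)
            x = x + K @ (z - C @ x)
            P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Example setup for a toy constant-velocity system with ~30% packet loss:
# rng = np.random.default_rng(0)
# A = np.array([[1.0, 1.0], [0.0, 1.0]]); C = np.array([[1.0, 0.0]])
# Q = 0.01 * np.eye(2); R = np.array([[0.1]])
# received = rng.random(50) > 0.3
```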

    Beyond Stemming and Lemmatization: Ultra-stemming to Improve Automatic Text Summarization

    In Automatic Text Summarization, preprocessing is an important phase for reducing the space of the textual representation. Classically, stemming and lemmatization have been widely used for normalizing words. However, even with normalization on large texts, the curse of dimensionality can hurt the performance of summarizers. This paper describes a new method of word normalization that further reduces the representation space. We propose to reduce each word to its initial letters, as a form of Ultra-stemming. The results show that Ultra-stemming not only preserves the content of the summaries produced with this representation, but often dramatically improves system performance. Summaries on trilingual corpora were evaluated automatically with Fresa. The results confirm an increase in performance, regardless of the summarizer system used. Comment: 22 pages, 12 figures, 9 tables
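    The core idea is simple enough to sketch directly: instead of full stemming or lemmatization, every word is truncated to its first n characters (n = 1 in the most aggressive setting), shrinking the representation space. The Python sketch below is our own minimal illustration; the tokenization and the choice of n are assumptions, not the paper's exact procedure.

```python
# Hedged sketch of ultra-stemming: reduce each word to its first n letters.
import re

def ultra_stem(text: str, n: int = 1) -> list[str]:
    """Lowercase, tokenize, and keep only the first n letters of each word."""
    tokens = re.findall(r"\w+", text.lower())
    return [tok[:n] for tok in tokens]

# Example: "Los resúmenes automáticos" -> ['l', 'r', 'a'] with n = 1
print(ultra_stem("Los resúmenes automáticos"))
```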