30,051 research outputs found

    On the Derivational Entropy of Left-to-Right Probabilistic Finite-State Automata and Hidden Markov Models

    [EN] Probabilistic finite-state automata are a formalism that is widely used in many problems of automatic speech recognition and natural language processing. Probabilistic finite-state automata are closely related to other finite-state models, such as weighted finite-state automata, word lattices, and hidden Markov models; therefore, they share many similar properties and problems. Entropy measures of finite-state models have been investigated in the past in order to study the information capacity of these models. The derivational entropy quantifies the uncertainty that the model has about the probability distribution it represents. The derivational entropy of a finite-state automaton is computed from the probability that is accumulated over all of its individual state sequences. Computing this entropy from a weighted finite-state automaton requires a normalized model. This article studies an efficient computation of the derivational entropy of left-to-right probabilistic finite-state automata and introduces an efficient algorithm for normalizing weighted finite-state automata. The efficient computation of the derivational entropy is also extended to continuous hidden Markov models.

    This work has been partially supported by the European Union's H2020 grant READ (Recognition and Enrichment of Archival Documents) (Ref: 674943) and the MINECO/FEDER-UE project TIN2015-70924-C2-1-R. The second author was supported by the "Division de Estudios de Posgrado e Investigacion" of Instituto Tecnologico de Leon.

    Sánchez Peiró, JA.; Rocha, MA.; Romero, V.; Villegas, M. (2018). On the Derivational Entropy of Left-to-Right Probabilistic Finite-State Automata and Hidden Markov Models. Computational Linguistics. 44(1):17-37. https://doi.org/10.1162/COLI_a_00306
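    To make the quantity concrete: the derivational entropy of a probabilistic finite-state automaton A is H(A) = -sum_d P(d) log P(d), where d ranges over the complete state sequences (derivations) of A, and for left-to-right models it can be accumulated state by state as the expected number of visits to each state times the entropy of that state's outgoing transition distribution. The Python sketch below illustrates this idea under the simplifying assumption of an acyclic left-to-right topology (no self-loops); the function name and interface are illustrative and not taken from the article, which also handles the general left-to-right case and continuous hidden Markov models.

    import math
    from collections import defaultdict

    # Illustrative sketch only: derivational entropy of an acyclic left-to-right
    # PFSA, assuming every transition goes from a lower- to a higher-numbered
    # state (no self-loops) and state 0 is the initial state.
    def derivational_entropy(n_states, transitions):
        # transitions: list of (src, dst, prob); outgoing probabilities of each
        # non-final state sum to 1.
        out = defaultdict(list)
        for src, dst, p in transitions:
            out[src].append((dst, p))

        # Expected number of visits to each state, filled in index order,
        # which is a topological order for a left-to-right topology.
        visits = [0.0] * n_states
        visits[0] = 1.0
        for q in range(n_states):
            for dst, p in out[q]:
                visits[dst] += visits[q] * p

        # Derivational entropy = sum over states of the expected visits times
        # the entropy of the state's outgoing transition distribution.
        entropy = 0.0
        for q in range(n_states):
            local = -sum(p * math.log2(p) for _, p in out[q] if p > 0)
            entropy += visits[q] * local
        return entropy

    # Two equally likely paths (0->1->2 and 0->2) give exactly 1 bit of entropy.
    print(derivational_entropy(3, [(0, 1, 0.5), (0, 2, 0.5), (1, 2, 1.0)]))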

    Summarization of Spanish Talk Shows with Siamese Hierarchical Attention Networks

    [EN] In this paper, we present an approach to the summarization of Spanish talk shows. Our approach is based on the use of Siamese Neural Networks on the transcriptions of the show audio. Specifically, we propose to use Hierarchical Attention Networks to select the most relevant sentences for each speaker about a given topic in the show, in order to summarize their opinion on the topic. We train these networks in a siamese way to determine whether a summary is appropriate or not. A previous evaluation of this approach on a summarization task over English newspapers achieved performance similar to other state-of-the-art systems. In the absence of enough transcribed or recognized speech data to train our system for talk show summarization in Spanish, we acquire a large corpus of document-summary pairs from Spanish newspapers and use it to train our system. We choose the newspaper domain because of its high similarity to the topics addressed in talk shows. A preliminary evaluation of our summarization system on Spanish TV programs shows the adequacy of the proposal.

    This work has been partially supported by the Spanish MINECO and FEDER funds under project AMIC (TIN2017-85854-C4-2-R). The work of Jose-Angel Gonzalez is financed by Universitat Politecnica de Valencia under grant PAID-01-17.

    González-Barba, JÁ.; Hurtado Oliver, LF.; Segarra Soriano, E.; García-Granada, F.; Sanchís Arnal, E. (2019). Summarization of Spanish Talk Shows with Siamese Hierarchical Attention Networks. Applied Sciences. 9(18):1-13. https://doi.org/10.3390/app9183836
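    To give a concrete picture of the siamese hierarchical attention idea described above, the following PyTorch sketch wires a shared hierarchical encoder (word-level and sentence-level bidirectional GRUs, each followed by additive attention pooling) into a siamese scorer that judges whether a candidate summary matches a document. All class names, layer sizes, and the cosine-based score are assumptions made for illustration, not the authors' exact architecture or training objective.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttentionPool(nn.Module):
        """Additive attention pooling over a sequence of vectors."""
        def __init__(self, dim):
            super().__init__()
            self.proj = nn.Linear(dim, dim)
            self.context = nn.Linear(dim, 1, bias=False)

        def forward(self, x):                       # x: (batch, seq, dim)
            scores = self.context(torch.tanh(self.proj(x)))
            weights = torch.softmax(scores, dim=1)  # attention over the sequence
            return (weights * x).sum(dim=1)         # (batch, dim)

    class HierarchicalEncoder(nn.Module):
        """Word-level then sentence-level encoders, each with attention pooling."""
        def __init__(self, vocab_size, emb_dim=100, hid_dim=100):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.word_gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
            self.word_att = AttentionPool(2 * hid_dim)
            self.sent_gru = nn.GRU(2 * hid_dim, hid_dim, bidirectional=True, batch_first=True)
            self.sent_att = AttentionPool(2 * hid_dim)

        def forward(self, docs):                    # docs: (batch, n_sents, n_words) word ids
            b, s, w = docs.shape
            words = self.emb(docs.view(b * s, w))
            word_out, _ = self.word_gru(words)
            sent_vecs = self.word_att(word_out).view(b, s, -1)
            sent_out, _ = self.sent_gru(sent_vecs)
            return self.sent_att(sent_out)          # one vector per document

    class SiameseScorer(nn.Module):
        """Encodes document and candidate summary with the same encoder and
        scores how well they match (here with a cosine-based score)."""
        def __init__(self, encoder):
            super().__init__()
            self.encoder = encoder

        def forward(self, doc, summary):
            d = self.encoder(doc)
            s = self.encoder(summary)
            return torch.sigmoid(F.cosine_similarity(d, s, dim=-1))

    Because both inputs pass through the same encoder weights, the document and its candidate summary are compared in a common representation space, which is the essential point of the siamese training regime.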

    Sparsity and cosparsity for audio declipping: a flexible non-convex approach

    This work investigates the empirical performance of sparse synthesis versus sparse analysis regularization for the ill-posed inverse problem of audio declipping. We develop a versatile non-convex heuristic which can readily be used with both data models. Based on this algorithm, we report that, in most cases, the two models perform similarly in terms of signal enhancement. However, the analysis version is shown to be amenable to real-time audio processing when certain analysis operators are considered. Both versions outperform state-of-the-art methods in the field, especially for severely saturated signals.
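    As a rough formalization of the two data models compared above (the notation is a common one for sparse declipping and is an assumption here, not taken verbatim from the paper): let $y$ be the clipped observation, $\theta$ the clipping threshold, and $M_r$, $M_h$, $M_l$ the restrictions to the reliable, positively clipped, and negatively clipped samples. The synthesis and analysis formulations can then be written as

    \begin{align*}
    \text{synthesis:}\quad & \min_{z} \|z\|_0 \ \text{s.t.}\ M_r(Dz) = M_r(y),\ M_h(Dz) \ge \theta,\ M_l(Dz) \le -\theta,\\
    \text{analysis:}\quad & \min_{x} \|Ax\|_0 \ \text{s.t.}\ M_r(x) = M_r(y),\ M_h(x) \ge \theta,\ M_l(x) \le -\theta,
    \end{align*}

    where $D$ is a synthesis dictionary and $A$ an analysis operator (typically overcomplete time-frequency transforms). The clipping-consistency constraints force the estimate to match the observation on reliable samples and to exceed the clipping level in magnitude on saturated ones.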