
    Computation of distances for regular and context-free probabilistic languages

    Several mathematical distances between probabilistic languages have been investigated in the literature, motivated by applications in language modeling, computational biology, syntactic pattern matching and machine learning. In most cases, only pairs of probabilistic regular languages were considered. In this paper we extend the previous results to pairs of languages generated by a probabilistic context-free grammar and a probabilistic finite automaton.
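    A toy sketch (not from the paper, which treats the general PCFG-versus-PFA case): one of the simplest distances of this kind, an L1 distance truncated at a maximum string length, computed between two probabilistic regular languages over the one-letter alphabet {a}, where the language with parameter p assigns probability (1 - p)p^n to the string a^n. The truncation length is an assumption of this illustration.

        # Toy sketch: truncated L1 distance between two probabilistic regular
        # languages over {a}, where P_p(a^n) = (1 - p) * p**n.

        def string_prob(p: float, n: int) -> float:
            """Probability assigned to the string a^n by the language with parameter p."""
            return (1.0 - p) * p ** n

        def truncated_l1_distance(p: float, q: float, max_len: int = 200) -> float:
            """Sum of |P_p(w) - P_q(w)| over all strings w = a^n with n <= max_len."""
            return sum(abs(string_prob(p, n) - string_prob(q, n))
                       for n in range(max_len + 1))

        print(truncated_l1_distance(0.5, 0.6))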

    Computation of moments for probabilistic finite-state automata

    [EN] The computation of moments of probabilistic finite-state automata (PFA) is studied in this article. First, the computation of moments of the length of the paths is introduced for general PFA; then, the computation of moments of the number of times that a symbol appears in the strings generated by the PFA is described. These computations require a matrix inversion. Acyclic PFA, such as word graphs, are quite common in many practical applications. Algorithms for the efficient computation of the moments for acyclic PFA are also presented in this paper. This work has been partially supported by the Ministerio de Ciencia y Tecnologia under grant TIN2017-91452-EXP (IBEM), by the Generalitat Valenciana under grant PROMETEO/2019/121 (DeepPattern), and by the grant "Ayudas Fundacion BBVA a equipos de investigacion cientifica 2018" (PR[8]_HUM_C2_0087). Sánchez Peiró, J. A., & Romero, V. (2020). Computation of moments for probabilistic finite-state automata. Information Sciences, 516, 388-400. https://doi.org/10.1016/j.ins.2019.12.052
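    A minimal sketch of the matrix-inversion step mentioned above (my own construction under the standard view of a PFA as an absorbing Markov chain, not code from the article): the first moment of the path length of a small hypothetical PFA obtained from the fundamental matrix (I - A)^{-1}.

        import numpy as np

        # Hypothetical 3-state PFA: A[i, j] is the total probability of moving from
        # state i to state j (summed over all symbols); the remaining mass in each
        # row is the probability of stopping in that state.
        A = np.array([
            [0.0, 0.7, 0.1],
            [0.0, 0.4, 0.4],
            [0.0, 0.0, 0.5],
        ])
        start = np.array([1.0, 0.0, 0.0])   # initial-state distribution

        # Fundamental matrix N = (I - A)^{-1}: N[i, j] is the expected number of
        # visits to state j when starting from state i, before the PFA stops.
        N = np.linalg.inv(np.eye(3) - A)

        # Every visit except the final (stopping) one emits a symbol, so the
        # expected string length (first moment) is the expected number of visits
        # minus one.
        expected_length = start @ N @ np.ones(3) - 1.0
        print(expected_length)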

    Linear Distances between Markov Chains

    We introduce a general class of distances (metrics) between Markov chains, which are based on linear behaviour. This class encompasses distances given topologically (such as the total variation distance or trace distance) as well as by temporal logics or automata. We investigate which of these distances can be approximated by observing the systems, i.e. by black-box testing or simulation, and we provide both negative and positive results.
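    A rough sketch of the black-box idea (my own toy example, not the paper's construction): simulate both chains and compare the empirical probabilities of one fixed linear-time event, here "a designated goal state is reached within k steps"; the absolute difference is the distance induced by that single property and a lower bound on stronger distances such as total variation.

        import random

        def simulate(chain, start, k):
            """Run one k-step trajectory and report whether 'goal' was visited."""
            state = start
            for _ in range(k):
                states, probs = zip(*chain[state].items())
                state = random.choices(states, probs)[0]
                if state == "goal":
                    return True
            return False

        def event_probability(chain, start, k, samples=20_000):
            """Monte Carlo estimate of the probability of the event."""
            return sum(simulate(chain, start, k) for _ in range(samples)) / samples

        # Two hypothetical two-state chains that differ in how quickly they reach 'goal'.
        chain_a = {"s": {"s": 0.8, "goal": 0.2}, "goal": {"goal": 1.0}}
        chain_b = {"s": {"s": 0.9, "goal": 0.1}, "goal": {"goal": 1.0}}

        # Empirical distance with respect to this single linear-time property.
        print(abs(event_probability(chain_a, "s", 5) - event_probability(chain_b, "s", 5)))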

    kLog: A Language for Logical and Relational Learning with Kernels

    We introduce kLog, a novel approach to statistical relational learning. Unlike standard approaches, kLog does not represent a probability distribution directly. It is rather a language to perform kernel-based learning on expressive logical and relational representations. kLog allows users to specify learning problems declaratively. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming, and deductive databases. Access by the kernel to the rich representation is mediated by a technique we call graphicalization: the relational representation is first transformed into a graph --- in particular, a grounded entity/relationship diagram. Subsequently, a choice of graph kernel defines the feature space. kLog supports mixed numerical and symbolic data, as well as background knowledge in the form of Prolog or Datalog programs, as in inductive logic programming systems. The kLog framework can be applied to tackle the same range of tasks that has made statistical relational learning so popular, including classification, regression, multitask learning, and collective classification. We also report on empirical comparisons, showing that kLog can be either more accurate, or much faster at the same level of accuracy, than Tilde and Alchemy. kLog is GPLv3 licensed and is available at http://klog.dinfo.unifi.it along with tutorials.
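    A toy sketch of the graphicalization idea only (this is not kLog's actual API or kernel): a relational interpretation, given as entities plus relationship tuples, is grounded into a labeled graph, and a deliberately trivial graph kernel (the inner product of node-label histograms) then defines the feature space.

        from collections import Counter

        def graphicalize(entities, relations):
            """Ground an entity/relationship interpretation into a labeled graph."""
            nodes = dict(entities)                     # entity id -> label
            edges = []
            for i, (rel, args) in enumerate(relations):
                rel_id = f"{rel}_{i}"
                nodes[rel_id] = rel                    # one node per relationship tuple
                edges.extend((rel_id, arg) for arg in args)
            return nodes, edges

        def label_histogram_kernel(g1, g2):
            """A deliberately trivial graph kernel: dot product of node-label counts."""
            h1, h2 = Counter(g1[0].values()), Counter(g2[0].values())
            return sum(h1[label] * h2[label] for label in h1.keys() & h2.keys())

        # Two tiny hypothetical interpretations (atoms and bonds of toy molecules).
        g_a = graphicalize({"a1": "C", "a2": "O"}, [("bond", ("a1", "a2"))])
        g_b = graphicalize({"b1": "C", "b2": "C", "b3": "O"},
                           [("bond", ("b1", "b2")), ("bond", ("b2", "b3"))])
        print(label_histogram_kernel(g_a, g_b))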

    Criticality in Formal Languages and Statistical Physics

    We show that the mutual information between two symbols, as a function of the number of symbols between the two, decays exponentially in any probabilistic regular grammar, but can decay like a power law for a context-free grammar. This result about formal languages is closely related to a well-known result in classical statistical mechanics that there are no phase transitions in dimensions fewer than two. It is also related to the emergence of power-law correlations in turbulence and cosmological inflation through recursive generative processes. We elucidate these physics connections and comment on potential applications of our results to machine learning tasks like training artificial recurrent neural networks. Along the way, we introduce a useful quantity which we dub the rational mutual information and discuss generalizations of our claims involving more complicated Bayesian networks. Comment: Replaced to match final published version. Discussion improved, references added.
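    A small experiment in the spirit of the regular-grammar half of the claim (my own sketch, not the paper's code): sample a long string from a two-symbol probabilistic regular grammar, i.e. a Markov chain over symbols, and estimate the mutual information between symbols d positions apart; the estimates should fall off roughly exponentially in d.

        import math
        import random
        from collections import Counter

        # Transition probabilities of a 2-symbol probabilistic regular grammar.
        TRANS = {"a": {"a": 0.9, "b": 0.1}, "b": {"a": 0.2, "b": 0.8}}

        def sample_sequence(length):
            seq = ["a"]
            while len(seq) < length:
                probs = TRANS[seq[-1]]
                seq.append(random.choices(list(probs), list(probs.values()))[0])
            return seq

        def mutual_information(seq, d):
            """Plug-in estimate of I(X_i; X_{i+d}) from one long sequence (in nats)."""
            pairs = Counter(zip(seq, seq[d:]))
            total = sum(pairs.values())
            left = Counter(x for x, _ in pairs.elements())
            right = Counter(y for _, y in pairs.elements())
            return sum((c / total) * math.log(c * total / (left[x] * right[y]))
                       for (x, y), c in pairs.items())

        seq = sample_sequence(200_000)
        for d in (1, 2, 4, 8, 16):
            print(d, mutual_information(seq, d))   # decays roughly exponentially in d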

    Quantitative reactive modeling and verification

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification lies the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
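    A deliberately small illustration of what a quantitative fitness measure can look like (my own sketch, not part of the project): instead of a boolean correct/incorrect verdict, a run of a reactive system is scored by the average reward of its steps, a finite-horizon approximation of the mean-payoff value.

        from itertools import islice

        def mean_payoff(run, reward, horizon=10_000):
            """Average reward over the first `horizon` steps of an infinite run."""
            return sum(reward(state) for state in islice(run, horizon)) / horizon

        def server_run():
            # Hypothetical reactive system: serves two requests, then drops one.
            while True:
                yield "served"
                yield "served"
                yield "dropped"

        # Quantitative verdict in [0, 1] instead of a boolean one (~0.667 here).
        print(mean_payoff(server_run(), lambda s: 1.0 if s == "served" else 0.0))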

    An Algorithm For Building Language Superfamilies Using Swadesh Lists

    The main contributions of this thesis are the following: i. Developing an algorithm to generate language families and superfamilies given, for each input language, a Swadesh list represented using International Phonetic Alphabet (IPA) notation. ii. The algorithm is novel in using the Levenshtein distance metric on the IPA representation and in the way it measures the overall distance between pairs of Swadesh lists. iii. Building a Swadesh list for the author's native Kinyarwanda language, because a Swadesh list could not be found even after an extensive search for it. Adviser: Peter Revesz
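    A minimal sketch of the distance computation described above (my own reconstruction; the thesis's actual aggregation over word pairs may differ): Levenshtein distance on IPA strings, normalized and averaged over concept-aligned entries of two Swadesh lists.

        def levenshtein(a: str, b: str) -> int:
            """Classic dynamic-programming edit distance between two symbol strings."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                 # deletion
                                    curr[j - 1] + 1,             # insertion
                                    prev[j - 1] + (ca != cb)))   # substitution
                prev = curr
            return prev[-1]

        def swadesh_distance(list_a, list_b):
            """Average normalized edit distance over concept-aligned IPA word pairs."""
            dists = [levenshtein(w1, w2) / max(len(w1), len(w2))
                     for w1, w2 in zip(list_a, list_b)]
            return sum(dists) / len(dists)

        # Two hypothetical three-entry Swadesh fragments in IPA notation.
        print(swadesh_distance(["ˈakwa", "ˈmater", "noks"], ["ˈaɣwa", "ˈmaðɾe", "ˈnotʃe"]))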

    Stochastic Attribute-Value Grammars

    Probabilistic analogues of regular and context-free grammars are well-known in computational linguistics, and currently the subject of intensive research. To date, however, no satisfactory probabilistic analogue of attribute-value grammars has been proposed: previous attempts have failed to define a correct parameter-estimation algorithm. In the present paper, I define stochastic attribute-value grammars and give a correct algorithm for estimating their parameters. The estimation algorithm is adapted from Della Pietra, Della Pietra, and Lafferty (1995). To estimate model parameters, it is necessary to compute the expectations of certain functions under random fields. In the application discussed by Della Pietra, Della Pietra, and Lafferty (representing English orthographic constraints), Gibbs sampling can be used to estimate the needed expectations. The fact that attribute-value grammars generate constrained languages makes Gibbs sampling inapplicable, but I show how a variant of Gibbs sampling, the Metropolis-Hastings algorithm, can be used instead. Comment: 23 pages, 21 Postscript figures, uses rotate.sty
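    A generic sketch of the Metropolis-Hastings step the estimator relies on (my own toy example, not the paper's algorithm): sampling from an unnormalized random-field distribution p(x) proportional to exp(sum_i w_i f_i(x)) over a small discrete space, in order to estimate the expectation of a feature.

        import math
        import random

        STATES = ["aa", "ab", "ba", "bb"]              # toy space of "analyses"
        WEIGHTS = {"count_a": 1.2, "count_b": -0.4}    # hypothetical feature weights

        def features(x):
            return {"count_a": x.count("a"), "count_b": x.count("b")}

        def log_weight(x):
            """Unnormalized log-probability of x under the random field."""
            return sum(WEIGHTS[name] * value for name, value in features(x).items())

        def mh_expectation(feature, steps=50_000):
            """Metropolis-Hastings estimate of E[f_feature] (symmetric uniform proposal)."""
            x = random.choice(STATES)
            total = 0.0
            for _ in range(steps):
                proposal = random.choice(STATES)
                accept = math.exp(min(0.0, log_weight(proposal) - log_weight(x)))
                if random.random() < accept:
                    x = proposal
                total += features(x)[feature]
            return total / steps

        print(mh_expectation("count_a"))   # estimated expectation under the field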

    Cumulative subject index volumes 52-55
