
    Some Context-free Languages and Their Associated Dynamical Systems

    This paper focuses on certain context-free dynamical systems within the framework of symbolic dynamics and formal language theory. Our main results include using a block-counting method to calculate the entropy of the Dyck languages, applying the Chomsky–Schützenberger theorem to the Łukasiewicz language, discovering the structure of winning strategies for a combinatorial game involving the Dyck languages, and showing how to construct positive-entropy minimal subshifts whose winning strategies are worth studying. These main results are supplemented with an overview of some features of formal languages and symbolic dynamics.
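
    As a rough illustration of the block-counting idea (a sketch of my own, not the paper's construction), the following Python snippet counts the balanced bracket strings of each length with a small dynamic program over bracket depth and estimates the exponential growth rate as log(count)/length; for the one-pair Dyck language this estimate slowly approaches log 2.

```python
from math import log

def count_balanced(n):
    """Count balanced bracket strings of length n over one bracket pair,
    by dynamic programming on the current open-bracket depth."""
    if n % 2:
        return 0
    dp = [0] * (n + 1)      # dp[d] = number of prefixes of the current length with depth d
    dp[0] = 1
    for _ in range(n):
        new = [0] * (n + 1)
        for d, c in enumerate(dp):
            if not c:
                continue
            if d + 1 <= n:
                new[d + 1] += c      # open a bracket
            if d - 1 >= 0:
                new[d - 1] += c      # close a bracket
        dp = new
    return dp[0]

# Growth-rate (entropy) estimate: (1/n) * log of the number of length-n words.
for n in (8, 16, 32, 64):
    c = count_balanced(n)
    print(n, c, log(c) / n)   # creeps toward log 2 ~ 0.693 as n grows (slowly)
```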

    Computation of moments for probabilistic finite-state automata

    The computation of moments of probabilistic finite-state automata (PFA) is studied in this article. First, the computation of moments of the length of the paths is introduced for general PFA; then, the computation of moments of the number of times that a symbol appears in the strings generated by the PFA is described. These computations require a matrix inversion. Acyclic PFA, such as word graphs, are quite common in many practical applications, and algorithms for the efficient computation of the moments of acyclic PFA are also presented. This work has been partially supported by the Ministerio de Ciencia y Tecnología under grant TIN2017-91452-EXP (IBEM), by the Generalitat Valenciana under grant PROMETEO/2019/121 (DeepPattern), and by the grant "Ayudas Fundación BBVA a equipos de investigación científica 2018" (PR[8]_HUM_C2_0087). Sánchez, J. A., & Romero, V. (2020). Computation of moments for probabilistic finite-state automata. Information Sciences, 516, 388-400. https://doi.org/10.1016/j.ins.2019.12.052
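
    For intuition, here is a minimal numpy sketch (using a made-up 3-state PFA, not one from the paper) that computes the first moment of the string length from the substochastic transition matrix via the matrix inversion mentioned above, and checks the closed form against sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state PFA: A[i, j] = probability of emitting one symbol while
# moving from state i to state j; the leftover mass stop[i] = 1 - A[i, :].sum()
# is the probability of halting (accepting) in state i.
A = np.array([[0.2, 0.5, 0.0],
              [0.0, 0.3, 0.4],
              [0.1, 0.0, 0.6]])
stop = 1.0 - A.sum(axis=1)
pi = np.array([1.0, 0.0, 0.0])          # initial state distribution

# First moment of the string length L:
#   P(L >= k) = pi A^k 1, so E[L] = sum_{k>=1} pi A^k 1 = pi A (I - A)^{-1} 1.
N = np.linalg.inv(np.eye(3) - A)        # the matrix inversion from the abstract
mean_len = pi @ A @ N @ np.ones(3)

# Monte Carlo sanity check of the closed form.
def sample_length():
    state = rng.choice(3, p=pi)
    length = 0
    while True:
        moves = np.append(A[state], stop[state])   # next-state probs + stop prob
        nxt = rng.choice(4, p=moves)
        if nxt == 3:                               # halt, the string is finished
            return length
        state, length = nxt, length + 1

samples = [sample_length() for _ in range(20000)]
print(mean_len, np.mean(samples))        # the two estimates should roughly agree
```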

    Computation of distances for regular and context-free probabilistic languages

    Several mathematical distances between probabilistic languages have been investigated in the literature, motivated by applications in language modeling, computational biology, syntactic pattern matching and machine learning. In most cases, only pairs of probabilistic regular languages were considered. In this paper we extend the previous results to pairs of languages generated by a probabilistic context-free grammar and a probabilistic finite automaton.
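
    One standard way to make such distances concrete in the regular-regular case (sketched here with two toy single-state PFAs of my own, not the paper's algorithm) is to compute co-emission probabilities sum_w p(w)q(w) with a single linear solve on a Kronecker-product construction; the Euclidean (L2) distance between the two string distributions then follows directly.

```python
import numpy as np

def coemission(pi1, T1, f1, pi2, T2, f2):
    """Sum_w p1(w) * p2(w) for two PFAs sharing an alphabet, via one linear
    solve on the product (Kronecker) automaton."""
    M = sum(np.kron(T1[a], T2[a]) for a in T1)
    start = np.kron(pi1, pi2)
    end = np.kron(f1, f2)
    return start @ np.linalg.solve(np.eye(M.shape[0]) - M, end)

# Two hypothetical single-state PFAs over the alphabet {a, b}; per state,
# the transition masses plus the stop mass sum to 1.
pfa1 = (np.array([1.0]), {'a': np.array([[0.3]]), 'b': np.array([[0.2]])}, np.array([0.5]))
pfa2 = (np.array([1.0]), {'a': np.array([[0.1]]), 'b': np.array([[0.4]])}, np.array([0.5]))

caa = coemission(*pfa1, *pfa1)
cbb = coemission(*pfa2, *pfa2)
cab = coemission(*pfa1, *pfa2)
l2 = np.sqrt(caa - 2 * cab + cbb)   # Euclidean distance between the two distributions
print(l2)
```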

    Entropy sensitivity of languages defined by infinite automata, via Markov chains with forbidden transitions

    A language L over a finite alphabet is growth-sensitive (or entropy-sensitive) if forbidding any set of subwords F yields a sub-language L^F whose exponential growth rate (entropy) is smaller than that of L. Let (X, E, l) be an infinite, oriented, labelled graph. Considering the graph as an (infinite) automaton, we associate with any pair of vertices x, y in X the language consisting of all words that can be read as the labels along some path from x to y. Under suitable, general assumptions we prove that these languages are growth-sensitive. The proof is based on Markov chains with forbidden transitions. Comment: to appear in Theoretical Computer Science, 201
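
    A toy instance of the phenomenon (my example, not taken from the paper): read binary strings off a two-vertex labelled graph whose vertices track the last symbol written, and compare the growth rate before and after forbidding the subword "11"; the rate drops from log 2 to the log of the golden ratio.

```python
import numpy as np

# Growth rate (entropy) of the binary strings read off a labelled graph,
# before and after forbidding the factor "11".
full = np.array([[1, 1],
                 [1, 1]])        # vertices = last symbol in {0, 1}, all moves allowed
no11 = np.array([[1, 1],
                 [1, 0]])        # the edge 1 -> 1 removed, i.e. "11" forbidden

def entropy(adj):
    # log of the spectral radius = exponential growth rate of the path/word counts
    return float(np.log(max(abs(np.linalg.eigvals(adj)))))

print(entropy(full))   # log 2          ~ 0.693
print(entropy(no11))   # log golden ratio ~ 0.481, strictly smaller
```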

    Complexity of Two-Dimensional Patterns

    In dynamical systems such as cellular automata and iterated maps, it is often useful to look at a language or set of symbol sequences produced by the system. There are well-established classification schemes, such as the Chomsky hierarchy, with which we can measure the complexity of these sets of sequences, and thus the complexity of the systems that produce them. In this paper, we look at the first few levels of a hierarchy of complexity for two-or-more-dimensional patterns. We show that several definitions of "regular language" or "local rule" that are equivalent in d=1 lead to distinct classes in d >= 2. We explore the closure properties and computational complexity of these classes, including undecidability and L-, NL- and NP-completeness results. We apply these classes to cellular automata, in particular to their sets of fixed and periodic points, finite-time images, and limit sets. We show that it is undecidable whether a CA in d >= 2 has a periodic point of a given period, and that certain "local lattice languages" are not finite-time images or limit sets of any CA. We also show that the entropy of a d-dimensional CA's finite-time image cannot decrease faster than t^{-d} unless it maps every initial condition to a single homogeneous state. Comment: To appear in J. Stat. Phys.
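
    As a small concrete instance of the simplest kind of local rule in two dimensions (the allowed-block set below is hypothetical, not one from the paper), the sketch accepts a finite pattern exactly when every 2x2 window belongs to a fixed local set.

```python
import numpy as np

# A "local" two-dimensional language in the simplest sense: a finite pattern is
# accepted iff every 2x2 window lies in a fixed allowed set.  Hypothetical
# constraint used here: each 2x2 block contains at most one 1.
def allowed(block):
    return block.sum() <= 1

def in_local_language(pattern):
    rows, cols = pattern.shape
    return all(allowed(pattern[i:i + 2, j:j + 2])
               for i in range(rows - 1) for j in range(cols - 1))

p = np.array([[0, 1, 0],
              [0, 0, 0],
              [1, 0, 0]])
print(in_local_language(p))   # True: no 2x2 window holds two or more 1s
```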

    Coding-theorem Like Behaviour and Emergence of the Universal Distribution from Resource-bounded Algorithmic Probability

    Previously referred to as 'miraculous' in the scientific literature because of its powerful properties and its wide application as an optimal solution to the problem of induction/inference, (approximations to) Algorithmic Probability (AP) and the associated Universal Distribution are (or should be) of the greatest importance in science. Here we investigate the emergence, the rates of emergence and convergence, and the coding-theorem-like behaviour of AP in Turing-subuniversal models of computation. We investigate empirical distributions of computing models in the Chomsky hierarchy. We introduce measures of algorithmic probability and algorithmic complexity based upon resource-bounded computation, in contrast to the previously and thoroughly investigated distributions produced from the output distribution of Turing machines. This approach allows for numerical approximations to algorithmic (Kolmogorov-Chaitin) complexity-based estimations at each level of a computational hierarchy. We demonstrate that all these estimations are correlated in rank and that they converge both in rank and in value as a function of computational power, despite fundamental differences between the computational models. In the context of natural processes that operate below the Turing-universal level because of finite resources and physical degradation, the investigation of natural biases stemming from algorithmic rules may shed light on the distribution of outcomes. We show that up to 60% of the simplicity/complexity bias in distributions produced even by the weakest of the computational models can be accounted for by Algorithmic Probability in its approximation to the Universal Distribution. Comment: 27 pages main text, 39 pages including supplement. Online complexity calculator: http://complexitycalculator.com
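
    The following is an illustrative, resource-bounded toy in the same spirit (my own setup, not the authors'): enumerate all 256 elementary cellular automaton rules as a deliberately weak, sub-universal model, run each for a fixed number of steps from a fixed seed, and treat the output frequencies as an empirical distribution, with -log2(frequency) playing the role of a coding-theorem-style complexity estimate.

```python
from collections import Counter
from math import log2

WIDTH, STEPS = 9, 9

def run_rule(rule, width=WIDTH, steps=STEPS):
    """Run one elementary CA rule (Wolfram code) from a single-cell seed
    with periodic boundaries, for a bounded number of steps."""
    row = [0] * width
    row[width // 2] = 1
    for _ in range(steps):
        row = [(rule >> (4 * row[(i - 1) % width]
                         + 2 * row[i]
                         + row[(i + 1) % width])) & 1
               for i in range(width)]
    return ''.join(map(str, row))

# Empirical output distribution over all 256 rules, under the fixed resource bound.
counts = Counter(run_rule(rule) for rule in range(256))
total = sum(counts.values())
for out, c in counts.most_common(5):
    print(out, c / total, -log2(c / total))  # frequent outputs get a low "complexity"
```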