8 research outputs found

    Finite automata with advice tapes

    We define a model of advised computation by finite automata where the advice is provided on a separate tape. We consider several variants of the model, where the advice is deterministic or randomized, the input tape head is allowed real-time, one-way, or two-way access, and the automaton is classical or quantum. We prove several separation results among these variants, demonstrate an infinite hierarchy of language classes recognized by automata with increasing advice lengths, and establish the relationships between this and previously studied ways of providing advice to finite automata.
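    The core idea of the advice-tape model can be illustrated with a small sketch (a hypothetical illustration, not code from the paper): the advice string may depend on the input length but never on the input itself, and a real-time automaton reads both tapes in lockstep.

    ```python
    def advice(n):
        # The advice depends only on the input length n, never on the input.
        # For even n it is a^(n/2) b^(n/2); odd lengths get a dummy string
        # that cannot match any input over {a, b}.
        if n % 2:
            return "#" * n
        return "a" * (n // 2) + "b" * (n // 2)

    def accepts_with_advice(w):
        # A real-time finite automaton moves the input head and the advice-tape
        # head in lockstep and accepts iff every input symbol equals the advice
        # symbol under the second head.  With the advice above it recognizes
        # the non-regular language {a^n b^n : n >= 0}.
        return all(x == y for x, y in zip(w, advice(len(w))))
    ```

    With this advice the automaton needs only a constant number of states, even though no finite automaton without advice can recognize {a^n b^n}.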

    Inkdots as advice for finite automata

    We examine inkdots placed on the input string as a way of providing advice to finite automata, and establish the relations between this model and previously studied models of advised finite automata. We show the existence of an infinite hierarchy of classes of languages that can be recognized with the help of increasing numbers of inkdots as advice, and examine the effects of different forms of advice on the succinctness of the advised machines. We also study randomly placed inkdots as advice to probabilistic finite automata, and demonstrate the superiority of this model over its deterministic version. Even very slowly growing amounts of space become a meaningful resource when the underlying advised model is extended with access to secondary memory, although it is well known that such small amounts of space are not useful for unadvised one-way Turing machines.
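    The inkdot model can be sketched in a few lines (a hypothetical illustration, not code from the paper): the advice is a set of marked input positions chosen per input length only, and a single well-placed dot already lets a one-way automaton recognize a non-regular language.

    ```python
    def inkdots(n):
        # For even positive n, place one inkdot on the first symbol of the
        # second half of the input; odd lengths receive no dot.
        return {n // 2} if n > 0 and n % 2 == 0 else set()

    def accepts_with_inkdots(w):
        n = len(w)
        if n == 0:
            return True               # the empty word is a^0 b^0
        dots = inkdots(n)
        if not dots:
            return False              # no dot placed: reject odd lengths
        boundary = min(dots)
        # One left-to-right scan: expect 'a' before the dotted position and
        # 'b' from it onward, so the language accepted is {a^k b^k : k >= 0}.
        return all(c == ("a" if i < boundary else "b") for i, c in enumerate(w))
    ```

    Because the dot's position is fixed by the input length alone, the automaton never needs to count: the advice placement guarantees the boundary sits exactly at the midpoint.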

    Randomization in Non-Uniform Finite Automata

    The non-uniform version of Turing machines with an extra advice input tape, whose contents depend on the length of the input but not on the input itself, is a well-studied model in complexity theory. We investigate the same notion of non-uniformity in weaker models, namely one-way finite automata. In particular, we are interested in the power of two-sided bounded-error randomization, and how it compares to determinism and non-determinism. We show that for unlimited advice, randomization is strictly stronger than determinism, and strictly weaker than non-determinism. However, when the advice is restricted to polynomial length, the landscape changes: the expressive power of determinism and randomization does not change, but the power of non-determinism is reduced to the extent that it becomes incomparable with randomization.

    One-Way Reversible and Quantum Finite Automata with Advice

    We examine the characteristic features of reversible and quantum computations in the presence of supplementary external information, known as advice. In particular, we present a simple, algebraic characterization of languages recognized by one-way reversible finite automata augmented with deterministic advice. With a further elaborate argument, we prove a similar but slightly weaker result for bounded-error one-way quantum finite automata with advice. Immediate applications of those properties lead to containments and separations among various language families when they are assisted by appropriately chosen advice. We further demonstrate the power and limitations of randomized advice and quantum advice when they are given to one-way quantum finite automata. (A complete version of an extended abstract that appeared in the Proceedings of the 6th International Conference on Language and Automata Theory and Applications (LATA 2012), March 5-9, 2012, A Coruna, Spain, Lecture Notes in Computer Science, Springer-Verlag, Vol. 7183, pp. 526-537.)

    Handbook of Lexical Functional Grammar

    Lexical Functional Grammar (LFG) is a nontransformational theory of linguistic structure, first developed in the 1970s by Joan Bresnan and Ronald M. Kaplan, which assumes that language is best described and modeled by parallel structures representing different facets of linguistic organization and information, related by means of functional correspondences. The volume is organized as follows. Part I, Overview and Introduction, provides an introduction to core syntactic concepts and representations. Part II, Grammatical Phenomena, reviews LFG work on a range of grammatical phenomena or constructions. Part III, Grammatical modules and interfaces, provides an overview of LFG work on semantics, argument structure, prosody, information structure, and morphology. Part IV, Linguistic disciplines, reviews LFG work in the disciplines of historical linguistics, learnability, psycholinguistics, and second language learning. Part V, Formal and computational issues and applications, provides an overview of computational and formal properties of the theory, implementations, and computational work on parsing, translation, grammar induction, and treebanks. Part VI, Language families and regions, reviews LFG work on languages spoken in particular geographical areas or in particular language families. The final section, Comparing LFG with other linguistic theories, discusses LFG work in relation to other theoretical approaches.

    Nonconstructive Methods in Automata Theory

    The work is devoted to some nonconstructive proofs in automata theory. The notion of the amount of nonconstructivity in a proof is defined. It is also described what it means for an automaton to recognize a language nonconstructively, and it is examined with what amount of nonconstructivity deterministic finite automata and Turing machines can recognize certain languages. Using Artin's conjecture, it is proved that the size advantage of finite probabilistic automata over finite deterministic automata can be superexponential. A similar size advantage is then proved for finite quantum automata. Finally, several languages are defined, and algorithms are described showing how an automaton can recognize these languages nonconstructively, and with what amount of nonconstructivity.