
    New Versions of Classical Automata and Grammars

    This master's thesis investigates new versions of automata and grammars and is thus divided into two parts. The first part defines and studies pure multi-pushdown automata and, in addition, introduces total orders over their pushdowns or pushdown symbols. The thesis proves that these restrictions decrease the accepting power of the automata. The second part defines and describes new derivation modes for scattered context grammars, which generalize the relation of direct derivation. It is proved that these modes do not decrease the generative power of scattered context grammars.

    Is Parameters Quantification in Genetic Algorithm Important, How to do it?

    The term “appropriate parameters” signifies that the correct choice of values has a considerable effect on performance, directing the search process towards the global optimum. Performance is typically measured by both the quality of the results obtained and the time required to find them. A genetic algorithm is a search and optimization technique whose performance largely depends on various factors; if these are not tuned appropriately, it is difficult to reach the global optimum. This paper describes the applicability of orthogonal arrays and the Taguchi approach to tuning genetic algorithm parameters. The domain of inquiry is grammatical inference, which has a wide range of applications. Optimal conditions were obtained with respect to performance and the quality of results, with reduced cost and variability. The primary objective of this study is to identify the appropriate parameter settings by which overall performance and the quality of results can be enhanced. In addition, the systematic discussion presented will help researchers conduct parameter quantification for other algorithms.
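    The orthogonal-array idea described above can be sketched in a few lines. This is a minimal illustration, not the paper's actual experimental design: the L4 array, the factor levels, and the stand-in fitness function `run_ga` are all illustrative assumptions.

```python
import statistics

# L4 (2^3) orthogonal array: 4 runs cover 3 two-level factors,
# instead of the 2^3 = 8 runs of a full factorial design.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
levels = {
    "pop_size": [20, 100],
    "crossover_rate": [0.6, 0.9],
    "mutation_rate": [0.01, 0.1],
}
factors = list(levels)

def run_ga(pop_size, crossover_rate, mutation_rate):
    # stand-in for a real GA run: returns a score (higher = better)
    return pop_size * crossover_rate * (1 - mutation_rate)

scores = []
for row in L4:
    setting = {f: levels[f][lvl] for f, lvl in zip(factors, row)}
    scores.append(run_ga(**setting))

# Taguchi-style analysis: for each factor, compare the marginal mean
# score at each level and keep the level with the higher mean.
best = {}
for i, f in enumerate(factors):
    means = [statistics.mean(s for row, s in zip(L4, scores) if row[i] == lvl)
             for lvl in (0, 1)]
    best[f] = levels[f][means.index(max(means))]
```

    With a real GA in place of `run_ga`, `best` is the recommended parameter setting obtained from only four runs.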

    Universal neural field computation

    Turing machines and G\"odel numbers are important pillars of the theory of computation. Thus, any computational architecture needs to show how it could relate to Turing machines and how stable implementations of Turing computation are possible. In this chapter, we implement universal Turing computation in a neural field environment. To this end, we employ the canonical symbologram representation of a Turing machine, obtained from a G\"odel encoding of its symbolic repertoire and generalized shifts. The resulting nonlinear dynamical automaton (NDA) is a piecewise affine-linear map acting on the unit square, which is partitioned into rectangular domains. Instead of looking at point dynamics in phase space, we then consider the functional dynamics of probability distribution functions (p.d.f.s) over phase space. This is generally described by a Frobenius-Perron integral transformation, which can be regarded as a neural field equation over the unit square as the feature space of a dynamic field theory (DFT). Solving the Frobenius-Perron equation shows that uniform p.d.f.s with rectangular support are again mapped onto uniform p.d.f.s with rectangular support. We call the resulting representation a \emph{dynamic field automaton}. Comment: 21 pages; 6 figures. arXiv admin note: text overlap with arXiv:1204.546
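    A piecewise affine-linear map on a partitioned unit square can be illustrated with the classic baker's map, a minimal generalized shift; this is an illustrative stand-in, not the specific NDA constructed in the abstract above.

```python
def baker_step(x, y):
    """One iteration of the baker's map: affine-linear on each of the
    two vertical halves of the unit square."""
    if x < 0.5:
        return 2.0 * x, 0.5 * y            # left strip -> bottom half
    return 2.0 * x - 1.0, 0.5 * y + 0.5    # right strip -> top half

def baker_rect(x0, x1, y0, y1):
    """Image of a rectangle [x0,x1] x [y0,y1] lying inside one strip:
    the support of a uniform p.d.f. stays rectangular, so it suffices
    to track the rectangle instead of individual points."""
    if x1 <= 0.5:
        return 2 * x0, 2 * x1, 0.5 * y0, 0.5 * y1
    return 2 * x0 - 1, 2 * x1 - 1, 0.5 * y0 + 0.5, 0.5 * y1 + 0.5
```

    The second function mirrors the abstract's claim that uniform p.d.f.s with rectangular support map onto uniform p.d.f.s with rectangular support, which is what makes a field-level (rather than point-level) description tractable.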

    Proceedings of the Workshop on Linear Logic and Logic Programming

    Declarative programming languages often fail to effectively address many aspects of control and resource management. Linear logic provides a framework for increasing the strength of declarative programming languages to embrace these aspects. Linear logic has been used to provide new analyses of Prolog's operational semantics, including left-to-right/depth-first search and negation-as-failure. It has also been used to design new logic programming languages for handling concurrency and for viewing program clauses as (possibly) limited resources. Such logic programming languages have proved useful in areas such as databases, object-oriented programming, theorem proving, and natural language parsing. This workshop is intended to bring together researchers involved in all aspects of relating linear logic and logic programming. The proceedings include two high-level overviews of linear logic and six contributed papers. Workshop organizers: Jean-Yves Girard (CNRS and University of Paris VII), Dale Miller (chair, University of Pennsylvania, Philadelphia), and Remo Pareschi (ECRC, Munich)

    Multiple Context-Free Tree Grammars: Lexicalization and Characterization

    Multiple (simple) context-free tree grammars are investigated, where "simple" means "linear and nondeleting". Every multiple context-free tree grammar that is finitely ambiguous can be lexicalized; i.e., it can be transformed into an equivalent one (generating the same tree language) in which each rule of the grammar contains a lexical symbol. Due to this transformation, the rank of the nonterminals increases at most by 1, and the multiplicity (or fan-out) of the grammar increases at most by the maximal rank of the lexical symbols; in particular, the multiplicity does not increase when all lexical symbols have rank 0. Multiple context-free tree grammars have the same tree generating power as multi-component tree adjoining grammars (provided the latter can use a root-marker). Moreover, every multi-component tree adjoining grammar that is finitely ambiguous can be lexicalized. Multiple context-free tree grammars have the same string generating power as multiple context-free (string) grammars, and they admit polynomial-time parsing algorithms. A tree language can be generated by a multiple context-free tree grammar if and only if it is the image of a regular tree language under a deterministic finite-copying macro tree transducer. Multiple context-free tree grammars can be used as a synchronous translation device. Comment: 78 pages, 13 figures

    Hypergrammar-based parallel multi-frontal solver for grids with point singularities

    This paper describes the application of hypergraph grammars to drive a linear-computational-cost solver for grids with point singularities. Such graph grammar productions are the first mathematical formalism used to describe the solver algorithm, and each of them indicates the smallest atomic task that can be executed in parallel, which is very useful for parallel execution. In particular, the partial order of execution of graph grammar productions can be found, and the sets of independent graph grammar productions can be localized. These sets can be scheduled, set by set, onto a shared-memory parallel machine. The graph grammar based solver has been implemented with NVIDIA CUDA for the GPU. The graph grammar productions are accompanied by numerical results for the 2D case. We show that our graph grammar based solver with a GPU accelerator is an order of magnitude faster than the state-of-the-art MUMPS solver
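    The "set by set" scheduling of independent productions amounts to grouping tasks of a partial order into levels; a minimal sketch of that grouping, with hypothetical production names and dependencies, might look like:

```python
def level_sets(deps):
    """Group tasks into successive sets of mutually independent tasks.
    deps maps each task to the collection of tasks it depends on;
    every task in one level can be executed in parallel."""
    deps = {t: set(d) for t, d in deps.items()}
    levels, done = [], set()
    while len(done) < len(deps):
        # a task is ready once all of its dependencies are done
        ready = {t for t, d in deps.items() if t not in done and d <= done}
        if not ready:
            raise ValueError("cyclic dependency")
        levels.append(ready)
        done |= ready
    return levels

# hypothetical productions: p1 and p2 are independent, p3 needs both,
# p4 needs p3 -- so the schedule has three levels
schedule = level_sets({"p1": [], "p2": [], "p3": ["p1", "p2"], "p4": ["p3"]})
```

    Each set in `schedule` could then be dispatched as one batch of parallel kernel launches.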

    Grammars for generating isiXhosa and isiZulu weather bulletin verbs

    The Met Office has investigated the use of natural language generation (NLG) technologies to streamline the production of weather forecasts. Their approach would be of great benefit in South Africa because there is no fast and large-scale producer, automated or otherwise, of textual weather summaries for Nguni languages. This is because of, among other things, the complexity of Nguni languages. The structure of these languages is very different from that of Indo-European languages, and therefore we cannot reuse existing technologies developed for the latter group. Traditional NLG techniques such as templates are not compatible with 'Bantu' languages, and existing works that document scaled-down 'Bantu' language grammars are also not sufficient to generate weather text. To generate weather text in isiXhosa and isiZulu, we restricted our text to verbs only in order to ensure a manageable scope. In particular, we developed a corpus of weather sentences in order to determine verb features. We then created context-free verbal grammar rules using an incremental approach. The quality of these rules was evaluated by two linguists. We then investigated the grammatical similarity of isiZulu verbs with their isiXhosa counterparts, and the extent to which a single merged set of grammar rules can be used to produce correct verbs for both languages. The similarity analysis of the two languages was done through the developed rules' parse trees, and by applying binary similarity measures to the sets of verbs generated by the rules. The parse trees show that the differences between the verbs' components are minor, and the similarity measures indicate that the verb sets are at most 59.5% similar (Driver-Kroeber metric). We also examined the importance of the phonological conditioning process by developing functions that calculate the ratio of verbs that require conditioning out of the total strings that can be generated. We found that the phonological conditioning process affects at least 45% of strings for isiXhosa, and at least 67% of strings for isiZulu, depending on the type of verb root that is used. Overall, this work shows that the differences between isiXhosa and isiZulu verbs are minor. However, exploiting these similarities to create a unified rule set for both languages cannot be achieved without significant maintainability compromises, because there are dependencies between the verb's 'modules' that exist in one language and not the other. Furthermore, the phonological conditioning process should be implemented in order to improve generated text, due to the high ratio of verbs it affects
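    The Driver-Kroeber metric used above (also known as the Ochiai coefficient) is |A ∩ B| / √(|A|·|B|) for two sets A and B. A minimal sketch, with invented toy verb sets rather than the study's actual generated data:

```python
import math

def driver_kroeber(a, b):
    """Driver-Kroeber (Ochiai) similarity between two sets:
    |A & B| / sqrt(|A| * |B|), in [0, 1]."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# hypothetical generated verb sets for the two languages; three of the
# four verbs are shared, so the similarity is 3 / sqrt(4 * 4) = 0.75
xhosa = {"uyahamba", "bayafunda", "ndiyathetha", "uyabona"}
zulu = {"uyahamba", "bayafunda", "ngiyakhuluma", "uyabona"}
similarity = driver_kroeber(xhosa, zulu)
```

    Applied to the verb sets generated by the merged grammar rules, this is the kind of computation behind the reported "at most 59.5% similar" figure.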

    Acta Cybernetica: Volume 12, Number 4.


    Science by Conceptual Analysis: The Genius of the Late Scholastics

    The late scholastics, from the fourteenth to the seventeenth centuries, contributed to many fields of knowledge other than philosophy. They developed a method of conceptual analysis that was very productive in those disciplines in which theory is relatively more important than empirical results. That includes mathematics, where the scholastics developed the analysis of continuous motion, which fed into the calculus, and the theory of risk and probability. The method came to the fore especially in the social sciences. In legal theory they developed, for example, the ethical analyses of the conditions of validity of contracts, and natural rights theory. In political theory, they introduced constitutionalism and the thought experiment of a “state of nature”. Their contributions to economics included concepts still regarded as basic, such as demand, capital, labour, and scarcity. Faculty psychology and semiotics are other areas of significance. In such disciplines, later developments rely crucially on scholastic concepts and vocabulary.

    Random generation of RNA secondary structures according to native distributions

    Nebel M, Scheid A, Weinberg F. Random generation of RNA secondary structures according to native distributions. Algorithms for Molecular Biology. 2011;6(1):24.