2,500 research outputs found

    Semi-bracketed contextual grammars

    Bracketed and fully bracketed contextual grammars were introduced to bring the concept of a tree structure to strings by associating a pair of parentheses with each context adjoined in the derivation. In this paper, we show that these grammars fail to generate all the basic non-context-free languages and thus cannot serve as a syntactic model for natural languages. To overcome this failure, we introduce a new class of fully bracketed contextual grammars, called semi-bracketed contextual grammars, in which the selectors can also be non-minimally Dyck covered languages. The tree structure of the derived strings is still preserved in this variant. When this new grammar is combined with the maximality feature, the generative power increases to the extent of covering the family of context-free languages and some basic non-context-free languages, thus possessing many properties of the so-called 'MCS formalism'.
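
    To make the bracketing idea concrete, the Python sketch below performs one bracketed adjunction step of a toy internal contextual grammar: a context (u, v) is adjoined around an occurrence of a selector word and the new material is wrapped in a matching pair of brackets, so repeated steps record a nested, tree-like structure. The selector and context are illustrative assumptions, not the construction from the paper.

```python
# Toy bracketed adjunction for an internal contextual grammar.
# Adjoining a context (u, v) around an occurrence of a selector word
# wraps the new material in matching brackets, so the derived string
# records a nested, tree-like derivation structure.
# (Illustrative example data; not the paper's construction.)

from typing import List, Tuple

Context = Tuple[str, str]

def adjoin_bracketed(word: str, selector: str, context: Context) -> List[str]:
    """All words obtainable by one bracketed adjunction step."""
    u, v = context
    results = []
    start = word.find(selector)
    while start != -1:
        end = start + len(selector)
        # Wrap the adjoined context together with the selected factor.
        results.append(word[:start] + "[" + u + word[start:end] + v + "]" + word[end:])
        start = word.find(selector, start + 1)
    return results

if __name__ == "__main__":
    step1 = adjoin_bracketed("ab", "ab", ("a", "b"))
    print(step1)                                         # ['[aabb]']
    print(adjoin_bracketed(step1[0], "ab", ("a", "b")))  # ['[a[aabb]b]']
```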

    Complexity and modeling power of insertion-deletion systems

    The central objects of this thesis are insertion-deletion systems and their computational power. More specifically, we study language generating models that use two string rewriting operations, contextual insertion and contextual deletion, and their extensions. We also consider a distributed variant of insertion-deletion systems in the sense that rules are separated among a finite number of nodes of a graph. Such systems are referred to as graph-controlled systems. These systems appear in many areas of Computer Science and play an important role in formal languages, linguistics, and bio-informatics. We vary the parameters of the size vector of insertion-deletion systems and study the decidability/universality of the obtained models. More precisely, we answer the most important question regarding the expressiveness of the computational model: whether our model is Turing equivalent or not. We systematically approach the questions about the minimal sizes of insertion-deletion systems with and without graph control.
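
    As a hedged illustration of the two rewriting operations studied here, the Python sketch below applies a contextual insertion rule (u, x, v), which inserts x between the contexts u and v, and a contextual deletion rule, which erases x when it occurs between u and v. The rules and example strings are made up for illustration and are not taken from the thesis.

```python
# Toy contextual insertion and deletion on strings.
# An insertion rule (u, x, v) rewrites w1 u v w2 into w1 u x v w2;
# a deletion rule (u, x, v) rewrites w1 u x v w2 into w1 u v w2.
# (Hypothetical example rules; not taken from the thesis.)

from typing import List, Tuple

Rule = Tuple[str, str, str]  # (left context u, string x, right context v)

def apply_insertion(word: str, rule: Rule) -> List[str]:
    """All words obtained by one application of an insertion rule."""
    u, x, v = rule
    return [word[:i] + x + word[i:]
            for i in range(len(word) + 1)
            if word[:i].endswith(u) and word[i:].startswith(v)]

def apply_deletion(word: str, rule: Rule) -> List[str]:
    """All words obtained by one application of a deletion rule."""
    u, x, v = rule
    return [word[:i] + word[i + len(x):]
            for i in range(len(word) - len(x) + 1)
            if word[i:i + len(x)] == x
            and word[:i].endswith(u)
            and word[i + len(x):].startswith(v)]

if __name__ == "__main__":
    print(apply_insertion("ab", ("a", "c", "b")))   # ['acb']
    print(apply_deletion("acb", ("a", "c", "b")))   # ['ab']
```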

    Strictly Locally Testable and Resources Restricted Control Languages in Tree-Controlled Grammars

    Tree-controlled grammars are context-free grammars where the derivation process is controlled in such a way that every word on a level of the derivation tree must belong to a certain control language. We investigate the generative capacity of such tree-controlled grammars where the control languages are special regular sets, especially strictly locally testable languages or languages restricted by resources of generation (number of non-terminal symbols or production rules) or acceptance (number of states). Furthermore, the set-theoretic inclusion relations of these subregular language families themselves are studied. Comment: In Proceedings AFL 2023, arXiv:2309.0112
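
    To make the control mechanism concrete, the hedged Python sketch below uses one common formulation of a strictly 2-locally testable language (given by allowed length-2 prefixes, interior factors, and suffixes) and accepts a derivation only if every level of the derivation tree passes that membership test. The derivation levels and the allowed sets are invented for illustration and are not taken from the paper.

```python
# Control condition of a tree-controlled grammar with a strictly
# 2-locally testable control language: a word passes if its length-2
# prefix, its length-2 suffix and all length-2 factors of its interior
# come from fixed allowed sets.  (Toy sets and toy derivation levels.)

def strictly_2_testable(word, prefixes, factors, suffixes):
    """Membership test under one common strictly 2-testable formulation."""
    if len(word) < 2:                      # degenerate case for short words
        return word in prefixes and word in suffixes
    if word[:2] not in prefixes or word[-2:] not in suffixes:
        return False
    # Length-2 factors of the interior word[1:-1].
    return all(word[i:i + 2] in factors for i in range(1, len(word) - 2))

def derivation_allowed(levels, prefixes, factors, suffixes):
    """Accept only if every level of the derivation tree is in the control set."""
    return all(strictly_2_testable(w, prefixes, factors, suffixes) for w in levels)

if __name__ == "__main__":
    # Hypothetical levels of a derivation tree for S -> aSb | ab.
    levels = ["S", "aSb", "aaSbb", "aaabbb"]
    prefixes = {"S", "aS", "aa"}
    factors = {"aS", "Sb", "aa", "ab", "bb"}
    suffixes = {"S", "Sb", "bb"}
    print(derivation_allowed(levels, prefixes, factors, suffixes))  # True
```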

    CItyMaker:

    Due to its complexity, the evolution of cities is difficult to predict, and planning new developments for cities is therefore a difficult task. This complexity can be identified on two levels: on a micro level, it emerges from the multiple relations between the many components and actors in cities, whereas on a macro level it stems from the geographical, social and economic relations between cities. However, many of these relations can be measured. The design of plans for cities can only be improved if designers are able to address measurements of some of the relationships between the components of cities during the design process. These measurements are called urban indicators. By calculating such measurements, designers can grasp the meaning of the changes being proposed, not just as simple alternative layouts, but also in terms of the changes in the indicators, which adds a qualitative dimension to that perception. This thesis presents a method and a set of tools to generate alternative solutions for an urban context. The method proposes the use of a combined set of design patterns encoding typical design moves used by urban designers. The combination of patterns generates different layouts, which can be adjusted by manipulating several parameters in relation to updated urban indicators. The patterns were developed from observation of typical urban design procedures, first encoded as discursive grammars and later translated into parametric design patterns. The CItyMaker method and tools allow the designer to compose a design solution from a set of programmatic premises and fine-tune it by adjusting parameters whilst checking the changes in urban indicators. These tools improve the designer's awareness of the consequences of their design moves.
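
    As a hedged sketch of the pattern-and-indicator loop described above, the Python example below defines a tiny parametric "add building block" design move and recomputes one common urban indicator, the floor area ratio (FAR), after each change. The pattern, parameters and numbers are assumptions for illustration; they are not the actual CItyMaker patterns or indicators.

```python
# A toy parametric design move: placing building blocks on a plot and
# recomputing an urban indicator (floor area ratio, FAR) after each change,
# so the effect of the move is visible immediately.
# (Hypothetical pattern and numbers; not the actual CItyMaker patterns.)

from dataclasses import dataclass, field
from typing import List

@dataclass
class Block:
    footprint_m2: float   # ground-floor area of the block
    floors: int           # number of storeys

@dataclass
class Plot:
    area_m2: float
    blocks: List[Block] = field(default_factory=list)

    def floor_area_ratio(self) -> float:
        """FAR = total built floor area / plot area."""
        return sum(b.footprint_m2 * b.floors for b in self.blocks) / self.area_m2

def add_block(plot: Plot, footprint_m2: float, floors: int) -> float:
    """Apply the 'add block' design move and return the updated indicator."""
    plot.blocks.append(Block(footprint_m2, floors))
    return plot.floor_area_ratio()

if __name__ == "__main__":
    plot = Plot(area_m2=10_000)
    print(add_block(plot, footprint_m2=1_000, floors=4))  # FAR = 0.4
    print(add_block(plot, footprint_m2=500, floors=8))    # FAR = 0.8
```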

    Complexity of Lexical Descriptions and its Relevance to Partial Parsing

    In this dissertation, we have proposed novel methods for robust parsing that integrate the flexibility of linguistically motivated lexical descriptions with the robustness of statistical techniques. Our thesis is that the computation of linguistic structure can be localized if lexical items are associated with rich descriptions (supertags) that impose complex constraints in a local context. However, increasing the complexity of descriptions makes the number of different descriptions for each lexical item much larger and hence increases the local ambiguity for a parser. This local ambiguity can be resolved by using supertag co-occurrence statistics collected from parsed corpora. We have explored these ideas in the context of the Lexicalized Tree-Adjoining Grammar (LTAG) framework, wherein supertag disambiguation provides a representation that is an almost parse. We have used the disambiguated supertag sequence in conjunction with a lightweight dependency analyzer to compute noun groups, verb groups, dependency linkages and even partial parses. We have shown that a trigram-based supertagger achieves an accuracy of 92.1% on Wall Street Journal (WSJ) texts. Furthermore, we have shown that the lightweight dependency analysis on the output of the supertagger identifies 83% of the dependency links accurately. We have exploited the representation of supertags with Explanation-Based Learning to improve parsing efficiency. In this approach, parsing in limited domains can be modeled as a Finite-State Transduction. We have implemented such a system for the ATIS domain which improves parsing efficiency by a factor of 15. We have used the supertagger in a variety of applications to provide lexical descriptions at an appropriate granularity. In an information retrieval application, we show that the supertag based system performs at higher levels of precision compared to a system based on part-of-speech tags. In an information extraction task, supertags are used in specifying extraction patterns. For language modeling applications, we view supertags as syntactically motivated class labels in a class-based language model. The distinction between recursive and non-recursive supertags is exploited in a sentence simplification application.
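
    The trigram-based supertag disambiguation mentioned above can be pictured as standard Viterbi decoding over tag trigrams; the hedged Python sketch below shows that general technique on invented supertag names and toy probabilities, not the dissertation's actual model, tag set, or numbers.

```python
# Viterbi decoding for a trigram tagger: pick the tag sequence t1..tn that
# maximizes the product of P(t_i | t_{i-2}, t_{i-1}) * P(w_i | t_i).
# Tag names, transition and emission probabilities are invented toy values.

import math
from collections import defaultdict

TAGS = ["T_det", "T_np", "T_trans_verb"]   # hypothetical supertag names
START = "<s>"

# P(tag | prev2, prev1) with a small default smoothing mass.
TRANS = defaultdict(lambda: 1e-3, {
    (START, START, "T_det"): 0.6,
    (START, "T_det", "T_np"): 0.7,
    ("T_det", "T_np", "T_trans_verb"): 0.8,
})

# P(word | tag) with a small default smoothing mass.
EMIT = defaultdict(lambda: 1e-3, {
    ("the", "T_det"): 0.9,
    ("company", "T_np"): 0.5,
    ("reported", "T_trans_verb"): 0.6,
})

def viterbi_trigram(words):
    """Most likely tag sequence under the toy trigram model."""
    # A dynamic-programming state is the pair of the last two tags.
    best = {(START, START): (0.0, [])}     # state -> (log-prob, tag path)
    for w in words:
        new_best = {}
        for (t2, t1), (score, path) in best.items():
            for t in TAGS:
                s = score + math.log(TRANS[(t2, t1, t)]) + math.log(EMIT[(w, t)])
                if (t1, t) not in new_best or s > new_best[(t1, t)][0]:
                    new_best[(t1, t)] = (s, path + [t])
        best = new_best
    return max(best.values())[1]

if __name__ == "__main__":
    print(viterbi_trigram(["the", "company", "reported"]))
    # ['T_det', 'T_np', 'T_trans_verb']
```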

    Logical model of competence and performance in the human sentence processor
