
    A Lexicalized Tree-Adjoining Grammar for Vietnamese

    In this paper, we present the first sizable grammar built for Vietnamese using LTAG, developed over the past two years, named vnLTAG. This grammar aims at modelling written language and is general enough to be both application- and domain-independent. It can be used for the morpho-syntactic tagging and syntactic parsing of Vietnamese texts, as well as text generation. We then present a robust parsing scheme using vnLTAG and a parser for the grammar. We finish with an evaluation using a test suite.
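
    As a rough illustration of the TAG formalism that vnLTAG instantiates, the sketch below implements the two composition operations, substitution and adjunction, on toy elementary trees. It is a minimal sketch only: the Node class, the tree shapes, and the English example are assumptions for illustration and are not taken from vnLTAG or its parser.

# A minimal sketch of the two TAG composition operations (substitution and
# adjunction) on toy trees; node labels and tree shapes are illustrative,
# not taken from vnLTAG.

class Node:
    def __init__(self, label, children=None, subst=False, foot=False):
        self.label = label                # syntactic category, e.g. "NP"
        self.children = children or []    # empty list => leaf (anchor word)
        self.subst = subst                # marked for substitution ("NP↓")
        self.foot = foot                  # foot node of an auxiliary tree ("VP*")

    def yield_(self):
        """Concatenate the leaf labels (the derived string)."""
        if not self.children:
            return [self.label]
        return [w for c in self.children for w in c.yield_()]

def substitute(tree, initial):
    """Replace the first substitution node whose label matches the initial tree's root."""
    for i, child in enumerate(tree.children):
        if child.subst and child.label == initial.label:
            tree.children[i] = initial
            return True
        if substitute(child, initial):
            return True
    return False

def adjoin(tree, auxiliary):
    """Splice an auxiliary tree in at the first internal node below the root that
    has the same label as the auxiliary root; the excised subtree goes under the foot."""
    for i, child in enumerate(tree.children):
        if child.children and child.label == auxiliary.label:
            foot = _find_foot(auxiliary)
            foot.children, foot.foot = child.children, False
            tree.children[i] = auxiliary
            return True
        if adjoin(child, auxiliary):
            return True
    return False

def _find_foot(tree):
    if tree.foot:
        return tree
    for c in tree.children:
        hit = _find_foot(c)
        if hit:
            return hit
    return None

# Toy derivation: substitute "dogs" into "sleep", then adjoin "soundly" at VP.
alpha_sleep = Node("S", [Node("NP", subst=True),
                         Node("VP", [Node("V", [Node("sleep")])])])
alpha_dogs  = Node("NP", [Node("N", [Node("dogs")])])
beta_soundly = Node("VP", [Node("VP", foot=True),
                           Node("Adv", [Node("soundly")])])

substitute(alpha_sleep, alpha_dogs)
adjoin(alpha_sleep, beta_soundly)
print(" ".join(alpha_sleep.yield_()))   # dogs sleep soundly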

    Applications of Evolutionary Algorithms in Formal Languages

    Starting from the model proposed by Grammatical Evolution, we extend the applicability of the parallel and cooperative search processes of Evolutionary Algorithms to a new topic: Tree Adjoining Grammar parsing. We evolved derived trees using a string-tree representation, and used a linear matching function to compare the yield of a derived tree with a given input. The test runs produced several encouraging results. A post-run analysis allowed us to propose several research directions for extending the currently known computational mechanisms in the mildly context-sensitive class of languages.
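
    The sketch below illustrates the evolutionary-search idea in miniature: candidate derived trees are abstracted to their yields (a string representation), a linear matching function counts the positions at which a candidate's yield agrees with the input, and a simple genetic loop evolves the population toward a full match. The toy lexicon, the fitness function, and all parameter values are assumptions for illustration, not the paper's actual encoding.

# Toy sketch of evolutionary TAG parsing: candidates are yields (word
# sequences), and fitness is a linear match against the input sentence.
# Lexicon, operators, and parameters are illustrative assumptions.

import random

ANCHORS = ["the", "a", "cat", "dog", "sleeps", "barks", "loudly"]  # toy lexicon
TARGET = "the dog barks loudly".split()                            # input sentence

def random_candidate():
    return [random.choice(ANCHORS) for _ in TARGET]

def fitness(candidate):
    """Linear matching: count positions where the yield agrees with the input."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.2):
    return [random.choice(ANCHORS) if random.random() < rate else w
            for w in candidate]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):   # perfect match found
            break
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

print(" ".join(evolve()))   # typically converges to: the dog barks loudly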

    Building factorized TAGs with meta-grammars

    Highly compacted TAGs may be built by allowing subtree factorization operators within the elementary trees. While hand-crafting such trees remains possible, a better option arises from a coupling with meta-grammar descriptions. The approach has been validated by the development of FRMG, a wide-coverage French TAG of only 207 trees.
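
    A minimal sketch of the factorization idea, under assumed notation: a factorized tree may contain disjunction and optionality operators, and expanding those operators recovers the set of plain elementary trees the factorized tree stands for. The operator names and the example tree are illustrative and do not reproduce FRMG's actual operators or trees.

# Sketch of subtree factorization: "or" marks alternative subtrees, "opt"
# marks an optional subtree, and expand() enumerates the plain trees a
# factorized tree denotes.  Notation and example are assumptions.

from itertools import product

def expand(node):
    """Return the list of plain trees denoted by a factorized tree node."""
    if isinstance(node, str):                        # plain leaf
        return [node]
    op, *args = node
    if op == "or":                                   # alternative subtrees
        return [t for a in args for t in expand(a)]
    if op == "opt":                                  # optional subtree
        return [None] + expand(args[0])
    if op == "node":                                 # ordinary node: label + children
        label, children = args[0], args[1:]
        expansions = [expand(c) for c in children]
        return [(label, [c for c in combo if c is not None])
                for combo in product(*expansions)]
    raise ValueError(f"unknown operator {op!r}")

# One factorized VP tree standing for four plain trees:
#   V NP, V NP PP, V S, V S PP
factorized_vp = ("node", "VP",
                 "V",
                 ("or", "NP", "S"),
                 ("opt", "PP"))

for tree in expand(factorized_vp):
    print(tree)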

    Constraints and Language

    More information on the publisher's webpage: http://www.cambridgescholars.com/constraints-and-language. The concept of "constraint" is widely used in linguistics, computer science, and psychology. However, its implementation varies widely depending on the research domain: namely, language description, knowledge representation, cognitive modelling, and problem solving. These various uses of constraints offer complementary views on intelligent mechanisms. For example, in-depth descriptions implementing constraints are used in linguistics to filter out syntactic or discursive structures by means of dedicated description languages and constraint ranking. In computer science, the constraint programming paradigm views constraints as a whole, which can be used, for example, to build specific structures. Finally, in psycholinguistics, experiments are carried out to investigate the role of constraints within cognitive processes (both in comprehension and production), with various applications such as dialog modelling for people with disabilities. In this context, Constraints and Language builds an extended overview of the use of constraints to model and process language.

    CLiFF Notes: Research In Natural Language Processing at the University of Pennsylvania

    CLiFF is the Computational Linguists' Feedback Forum. We are a group of students and faculty who gather once a week to hear a presentation and discuss work currently in progress. The 'feedback' in the group's name is important: we are interested in sharing ideas, in discussing ongoing research, and in bringing together work done by the students and faculty in Computer Science and other departments. However, there are only so many presentations which we can have in a year. We felt that it would be beneficial to have a report which would have, in one place, short descriptions of the work in Natural Language Processing at the University of Pennsylvania. This report, then, is a collection of abstracts from both faculty and graduate students in Computer Science, Psychology and Linguistics. We want to stress the close ties between these groups, as one of the things that we pride ourselves on here at Penn is the communication among different departments and the inter-departmental work. Rather than try to summarize the varied work currently underway at Penn, we suggest reading the abstracts to see how the students and faculty themselves describe their work. The report illustrates the diversity of interests among the researchers here, as well as explaining the areas of common interest. In addition, since it was our intent to put together a document that would be useful both inside and outside of the university, we hope that this report will explain to everyone some of what we are about.

    Approximate text generation from non-hierarchical representations in a declarative framework

    This thesis is on Natural Language Generation. It describes a linguistic realisation system that translates the semantic information encoded in a conceptual graph into an English sentence. The use of a non-hierarchically structured semantic representation (conceptual graphs) and an approximate matching between semantic structures allows us to investigate a more general version of the sentence generation problem, where one is not pre-committed to a choice of the syntactically prominent elements in the initial semantics. We show clearly how the semantic structure is declaratively related to a linguistically motivated syntactic representation; we use D-Tree Grammars, which stem from work on Tree-Adjoining Grammars. The declarative specification of the mapping between semantics and syntax allows different processing strategies to be exploited. A number of generation strategies have been considered: a pure top-down strategy and a chart-based generation technique which allows partially successful computations to be reused in other branches of the search space. Having a generator with increased paraphrasing power, as a consequence of using non-hierarchical input and approximate matching, raises the issue of whether certain 'better' paraphrases can be generated before others. We investigate preference-based processing in the context of generation.
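
    The sketch below illustrates the chart-based generation idea in a toy setting: partial realisations covering subsets of the input semantics are stored as chart edges and reused whenever a rule can combine them, rather than being recomputed in each branch of the search. The grammar, lexicon, and input semantics are assumptions for illustration, not the thesis's D-Tree Grammar machinery.

# Toy chart-based generator: edges are (category, covered semantics) pairs,
# each stored once with its realisation and reused by every rule that can
# combine it.  Grammar, lexicon, and semantics are illustrative assumptions.

# Lexical edges: (category, covered semantic atoms, string)
LEXICON = [
    ("NP", frozenset({"dog(x)"}), "the dog"),
    ("NP", frozenset({"bone(y)"}), "a bone"),
    ("V",  frozenset({"chew(e,x,y)"}), "chews"),
]

# Binary rules: (result category, left category, right category)
RULES = [
    ("VP", "V", "NP"),
    ("S",  "NP", "VP"),
]

def generate(semantics):
    # Seed the chart with lexical edges whose semantics is covered by the input.
    chart = {(cat, sem): text for cat, sem, text in LEXICON if sem <= semantics}
    added = True
    while added:                       # keep combining until no new edge appears
        added = False
        for (res, lcat, rcat) in RULES:
            for (c1, s1), t1 in list(chart.items()):
                for (c2, s2), t2 in list(chart.items()):
                    if c1 == lcat and c2 == rcat and not (s1 & s2):
                        edge = (res, s1 | s2)
                        if edge not in chart:          # reuse, never recompute
                            chart[edge] = f"{t1} {t2}"
                            added = True
    return chart.get(("S", semantics))

print(generate(frozenset({"dog(x)", "bone(y)", "chew(e,x,y)"})))
# -> the dog chews a bone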

    Research in the Language, Information and Computation Laboratory of the University of Pennsylvania

    This report takes its name from the Computational Linguistics Feedback Forum (CLiFF), an informal discussion group for students and faculty. However, the scope of the research covered in this report is broader than the title might suggest; this is the yearly report of the LINC Lab, the Language, Information and Computation Laboratory of the University of Pennsylvania. It may at first be hard to see the threads that bind together the work presented here, work by faculty, graduate students and postdocs in the Computer Science and Linguistics Departments, and the Institute for Research in Cognitive Science. It includes prototypical Natural Language fields such as Combinatory Categorial Grammars, Tree Adjoining Grammars, syntactic parsing and the syntax-semantics interface; but it extends to statistical methods, plan inference, instruction understanding, intonation, causal reasoning, free word order languages, geometric reasoning, medical informatics, connectionism, and language acquisition. Naturally, this introduction cannot spell out all the connections between these abstracts; we invite you to explore them on your own. In fact, with this issue it's easier than ever to do so: this document is accessible on the "information superhighway". Just call up http://www.cis.upenn.edu/~cliff-group/94/cliffnotes.html. In addition, you can find many of the papers referenced in the CLiFF Notes on the net. Most can be obtained by following links from the authors' abstracts in the web version of this report. The abstracts describe the researchers' many areas of investigation, explain their shared concerns, and present some interesting work in Cognitive Science. We hope its new online format makes the CLiFF Notes a more useful and interesting guide to Computational Linguistics activity at Penn.