16 research outputs found

    Improving a supervised CCG parser

    The central topic of this thesis is the task of syntactic parsing with Combinatory Categorial Grammar (CCG). We focus on pipeline approaches that have allowed researchers to develop efficient and accurate parsers trained on articles taken from the Wall Street Journal (WSJ). We present three approaches to improving the state of the art in CCG parsing. First, we test novel supertagger-parser combinations to identify the parsing models and algorithms that benefit the most from recent gains in supertagger accuracy. Second, we attempt to lessen the future burden of assembling a state-of-the-art CCG parsing pipeline by showing that a part-of-speech (POS) tagger is not required to achieve optimal performance. Finally, we discuss the deficiencies of current parsing algorithms and propose a solution that promises improvements in accuracy, particularly for difficult dependencies, while preserving efficiency and optimality guarantees.
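
    As a minimal sketch of the supertagger-parser interface this abstract refers to (not the thesis's own code), the snippet below shows adaptive supertagging in the style of C&C-like CCG pipelines: the supertagger keeps, for each word, every lexical category whose probability is within a factor beta of the most probable one, and the parser searches only over these pruned sets. The toy distributions are invented for illustration.

```python
# Adaptive supertagging sketch: prune each word's CCG category
# distribution to categories within a factor beta of the best score.

def prune_supertags(tag_probs, beta=0.075):
    """tag_probs: one dict per word, mapping CCG category -> probability."""
    pruned = []
    for probs in tag_probs:
        best = max(probs.values())
        pruned.append({cat: p for cat, p in probs.items() if p >= beta * best})
    return pruned

# Hypothetical category distributions for "John sleeps":
words = [{"NP": 0.90, "N": 0.10},
         {"S\\NP": 0.70, "(S\\NP)/NP": 0.25, "N": 0.05}]
print(prune_supertags(words, beta=0.1))
# [{'NP': 0.9, 'N': 0.1}, {'S\\NP': 0.7, '(S\\NP)/NP': 0.25}]
```

    Tightening beta speeds up parsing at the cost of lexical ambiguity; a more accurate supertagger lets the parser succeed at tighter settings, which is why supertagger gains propagate to the whole pipeline.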

    CCG-augmented hierarchical phrase-based statistical machine translation

    Augmenting Statistical Machine Translation (SMT) systems with syntactic information aims at improving translation quality. Hierarchical Phrase-Based (HPB) SMT takes a step toward incorporating syntax into Phrase-Based (PB) SMT by modelling one aspect of language syntax, namely the hierarchical structure of phrases. Syntax Augmented Machine Translation (SAMT) further incorporates syntactic information extracted using a context-free phrase structure grammar (CF-PSG) into the HPB SMT model. One of the main challenges facing CF-PSG-based augmentation approaches for SMT systems arises from the mismatch between the definition of a constituent in CF-PSG and that of a ‘phrase’ in SMT systems, which hinders the ability of CF-PSG to express the syntactic function of many SMT phrases. Although the SAMT approach to solving this problem, using ‘CCG-like’ operators to combine constituent labels, improves syntactic constraint coverage, it significantly increases label sparsity, which restricts translation and degrades its quality. In this thesis, we address the problems of sparsity and limited coverage of syntactic constraints facing CF-PSG-based syntax augmentation approaches for HPB SMT using Combinatory Categorial Grammar (CCG). We demonstrate that CCG’s flexible structures and rich syntactic descriptors help to extract richer, more expressive and less sparse syntactic constraints with better coverage than CF-PSG, which enables our CCG-augmented HPB system to outperform the SAMT system. We also soften the syntactic constraints imposed by CCG category nonterminal labels by extracting less fine-grained CCG-based labels. We demonstrate that CCG label simplification helps to significantly improve the performance of our CCG category HPB system. Finally, we identify the factors which limit the coverage of the syntactic constraints in our CCG-augmented HPB model. We then tackle these factors by extending the definition of the nonterminal label to be a sequence of CCG categories and by augmenting the glue grammar with CCG combinatory rules. We demonstrate that these extensions significantly increase the scope of the syntactic constraints applied in our CCG-augmented HPB model and achieve significant improvements over the HPB SMT baseline.
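
    A hedged toy illustration of why CCG categories can label spans that CF-PSG constituents cannot: the combinatory rules of forward and backward application (X/Y Y => X and Y X\Y => X) assign a single category to many adjacent pairs, including non-classical constituents. The flat string matching below only handles the simple cases shown; a real implementation would parse the category structure properly.

```python
# Toy CCG application: combine two adjacent category labels if a
# forward or backward application rule fits.

def combine(left, right):
    """Try forward, then backward, application on two CCG categories."""
    if left.endswith("/" + right):           # X/Y  Y   =>  X
        return left[: -len(right) - 1]
    if right.endswith("\\" + left):          # Y  X\Y   =>  X
        return right[: -len(left) - 1]
    return None

print(combine("S/NP", "NP"))    # 'S'    (forward application)
print(combine("NP", "S\\NP"))   # 'S'    (backward application)
print(combine("NP", "PP"))      # None   (no application rule fits)
```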

    Parsing with automatically acquired, wide-coverage, robust, probabilistic LFG approximations

    Traditionally, rich, constraint-based grammatical resources have been hand-coded. Scaling such resources beyond toy fragments to unrestricted, real text is knowledge-intensive, time-consuming and expensive. The work reported in this thesis is part of a larger project to automate as much as possible the construction of wide-coverage, deep, constraint-based grammatical resources from treebanks. The Penn-II treebank is a large collection of parse-annotated newspaper text. We have designed a Lexical-Functional Grammar (LFG) (Kaplan and Bresnan, 1982) f-structure annotation algorithm to automatically annotate this treebank with f-structure information approximating basic predicate-argument or dependency structures (Cahill et al., 2002c, 2004a). We then use the f-structure-annotated treebank resource to automatically extract grammars and lexical resources for parsing new text into f-structures. We have designed and implemented the Treebank Tool Suite (TTS) to support the linguistic work that seeds the automatic f-structure annotation algorithm (Cahill and van Genabith, 2002) and the F-Structure Annotation Tool (FSAT) to validate and visualise the results of automatic f-structure annotation. We have designed and implemented two PCFG-based probabilistic parsing architectures for parsing unseen text into f-structures: the pipeline and the integrated model. Both architectures parse raw text into basic, but possibly incomplete, predicate-argument structures (“proto f-structures”) with long-distance dependencies (LDDs) unresolved (Cahill et al., 2002c). We have designed and implemented a method for automatically resolving LDDs at the f-structure level based on a finite approximation of functional uncertainty equations (Kaplan and Zaenen, 1989) automatically acquired from the f-structure-annotated treebank resource (Cahill et al., 2004b). To date, the best result achieved by our own Penn-II induced grammars is a dependency f-score of 80.33% against the PARC 700, an improvement of 0.73% over the best hand-crafted grammar of Kaplan et al. (2004). The processing architecture developed in this thesis is highly flexible: using external, state-of-the-art parsing technologies (Charniak, 2000) in our pipeline model, we achieve a dependency f-score of 81.79% against the PARC 700, an improvement of 2.19% over the results reported in Kaplan et al. (2004). We have also ported our grammar induction methodology to German and the TIGER treebank resource (Cahill et al., 2003a). We have developed a method for treebank-based, wide-coverage, deep, constraint-based grammar acquisition. The resulting PCFG-based LFG approximations parse the Penn-II treebank with wider coverage (measured in terms of complete spanning parses) and with parsing results comparable to or better than those achieved by the best hand-crafted grammars, with, we believe, considerably less grammar development effort. We believe that our approach successfully addresses the knowledge-acquisition bottleneck (familiar from rule-based approaches to AI and NLP) in wide-coverage, constraint-based grammar development. Our approach can provide an attractive, wide-coverage, multilingual, deep, constraint-based grammar acquisition paradigm.
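
    For readers unfamiliar with f-structures, here is an illustrative proto f-structure for "John saw Mary", using plain Python dicts as a stand-in for the attribute-value matrices the annotation algorithm attaches to Penn-II trees. The attribute names follow standard LFG usage, but the encoding and the helper function are assumptions of this sketch, not the thesis's representation.

```python
# A proto f-structure as a nested attribute-value matrix.
f_structure = {
    "PRED": "see<SUBJ,OBJ>",            # semantic form with its argument list
    "TENSE": "past",
    "SUBJ": {"PRED": "John", "NUM": "sg", "PERS": 3},
    "OBJ":  {"PRED": "Mary", "NUM": "sg", "PERS": 3},
}

def dependencies(f):
    """Flatten an f-structure into (relation, head, dependent) triples,
    the kind of representation dependency f-scores are computed over."""
    head = f["PRED"].split("<")[0]
    triples = []
    for attr, value in f.items():
        if isinstance(value, dict):
            triples.append((attr.lower(), head, value["PRED"].split("<")[0]))
            triples.extend(dependencies(value))
    return triples

print(dependencies(f_structure))
# [('subj', 'see', 'John'), ('obj', 'see', 'Mary')]
```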

    Learning Multilingual Semantic Parsers for Question Answering over Linked Data. A comparison of neural and probabilistic graphical model architectures

    Hakimov S. Learning Multilingual Semantic Parsers for Question Answering over Linked Data. A comparison of neural and probabilistic graphical model architectures. Bielefeld: Universität Bielefeld; 2019. The task of answering natural language questions over structured data has received wide interest in recent years. Structured data in the form of knowledge bases has become available for public use, with coverage of multiple domains. DBpedia and Freebase are such knowledge bases, containing encyclopedic data about multiple domains. However, querying such knowledge bases requires an understanding of a query language and the underlying ontology, which requires domain expertise. Querying structured data via question answering systems that understand natural language has gained popularity as a way to bridge the gap between the data and the end user. In order to understand a natural language question, a question answering system needs to map the question into a query representation that can be evaluated against a knowledge base. An important aspect we focus on in this thesis is multilinguality. While most research has focused on building monolingual solutions, mainly for English, this thesis focuses on building multilingual question answering systems. The main challenge in processing language input is interpreting the meaning of questions in multiple languages. In this thesis, we present three different semantic parsing approaches that learn models to map questions into meaning representations, into queries in particular, in a supervised fashion. The approaches differ in the way the model is learned, the features of the model, the way of representing the meaning and how the meaning of questions is composed. The first approach learns a joint probabilistic model for syntax and semantics simultaneously from labeled data. The second learns a factorized probabilistic graphical model that builds on a dependency parse of the input question and predicts the meaning representation that is converted into a query. The last approach presents a number of different neural architectures that tackle the task of question answering in an end-to-end fashion. We evaluate each approach using publicly available datasets and compare them with state-of-the-art QA systems.
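
    As a hedged, template-based illustration of the query-construction step this abstract describes (not a reproduction of any of the thesis's three models), a factoid question can be mapped onto a single-triple SPARQL query over DBpedia. The example question, the entity/relation linking, and the helper name are assumptions made for this sketch.

```python
# Fill a one-triple SPARQL template over DBpedia; a real system must
# first link the question's words to these knowledge-base identifiers.

def build_query(entity, relation):
    return ("PREFIX dbr: <http://dbpedia.org/resource/>\n"
            "PREFIX dbo: <http://dbpedia.org/ontology/>\n"
            f"SELECT ?answer WHERE {{ dbr:{entity} dbo:{relation} ?answer . }}")

# "Who is the author of Hamlet?" -> entity 'Hamlet', relation 'author'
print(build_query("Hamlet", "author"))
```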

    Executable Attribute Grammars for Modular and Efficient Natural Language Processing

    Language processors constructed using top-down recursive-descent parsing with backtracking are highly modular, and are easy to implement and maintain. However, a widely held but inaccurate view is that top-down processors are inherently exponential for ambiguous grammars and cannot accommodate left-recursive syntax rules. It has long been known that exponential time and space complexities can be avoided by memoization and a compact graph-structured representation, and that left-recursive productions can be accommodated through a variety of techniques. However, until now, memoization, compact representation, and techniques for handling left recursion have either been presented independently, or else attempts at their integration have compromised the modularity and correctness of the resulting parses. Specifying syntax and semantics to describe formal languages using the denotational notation of attribute grammars (AGs) has been widely practised. However, very little work has shown the usefulness of declarative AGs for constructing computational models of natural language. Previous top-down approaches fall short in accommodating ambiguous and general CFGs with arbitrary semantics in one pass as executable specifications. Existing approaches fail to provide a declarative syntax-semantics interface that can take full advantage of dependencies between attributes of syntactic constituents to model linguistically motivated cases. This thesis addresses these shortcomings by proposing a new modular top-down syntactic and semantic analysis system, which is efficient and accommodates all forms of CFGs. Moreover, this system provides notation to declaratively specify semantics by establishing arbitrary dependencies between attributes of syntactic categories to perform linguistically motivated tasks such as building directly executable natural-language query processors, computing the meanings of sentences using compositional semantics, performing contextual disambiguation tasks, and modelling restrictive classes of languages.
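
    A minimal sketch of the core idea, under stated assumptions: each parser maps (tokens, position, call-depth table) to the set of positions where a parse of that category can end, and left recursion is handled by curtailment in the spirit of Frost, Hafiz and Callaghan, cutting off a nonterminal once it has been re-entered at the same position more times than there are tokens left. The full system additionally memoizes results (together with their left-recursion contexts) for polynomial behaviour; that bookkeeping is omitted here for brevity, and the combinator names are this sketch's own.

```python
# Set-of-end-positions recognizer combinators with curtailment of
# left recursion. d maps (nonterminal, position) -> re-entry count.

def term(tok):
    return lambda s, i, d: {i + 1} if i < len(s) and s[i] == tok else set()

def alt(*ps):
    return lambda s, i, d: set().union(*(p(s, i, d) for p in ps))

def seq(*ps):
    def parse(s, i, d):
        ends = {i}
        for p in ps:                     # thread end positions left to right
            ends = set().union(*(p(s, j, d) for j in ends))
        return ends
    return parse

def nonterm(name, body):
    def parse(s, i, d):
        if d.get((name, i), 0) > len(s) - i + 1:   # curtail left recursion
            return set()
        d2 = dict(d)
        d2[(name, i)] = d2.get((name, i), 0) + 1
        return body()(s, i, d2)          # body() delays the self-reference
    return parse

# Ambiguous, left-recursive grammar:  E -> E '+' E | 'a'
E = nonterm('E', lambda: alt(seq(E, term('+'), E), term('a')))

print(E("a+a+a", 0, {}))   # {1, 3, 5}: "a", "a+a", "a+a+a" all parse as E
```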

    Designing Service-Oriented Chatbot Systems Using a Construction Grammar-Driven Natural Language Generation System

    Service-oriented chatbot systems are used to inform users in a conversational manner about a particular service or product on a website. Our research shows that current systems are time-consuming to build and not very accurate or satisfying to users. We find that natural language understanding and natural language generation methods are central to creating an efficient and useful system. In this thesis we investigate current and past methods in this research area and place particular emphasis on Construction Grammar and its computational implementation. Our research shows that users have strong emotive reactions to how these systems behave, so we also investigate the human-computer interaction component. We present three systems (KIA, John and KIA2), and carry out extensive user tests on all of them, as well as comparative tests. KIA is built using existing methods, John is built with the user in mind, and KIA2 is built using the Construction Grammar method. We found that the Construction Grammar approach performs well in service-oriented chatbot systems, and that users preferred it over the other systems.

    Learning logic rules from text using statistical methods for natural language processing

    The field of Natural Language Processing (NLP) examines how computers can be made to perform beneficial tasks by understanding natural language. The foundations of NLP are diverse and include scientific fields such as electrical and electronic engineering, linguistics, and artificial intelligence. Some popular NLP applications are information extraction, machine translation, text summarization, and question answering. This dissertation proposes a new methodology using Answer Set Programming (ASP) as our main formalism to predict Interpretable Semantic Textual Similarity (iSTS) with a rule-based approach, focusing on hard-coded rules for our system, Inspire. We then propose a rule-learning methodology using Inductive Logic Programming (ILP) and modify the ILP tool eXtended Hybrid Abductive Inductive Learning (XHAIL) in order to test whether we can learn the ASP rules that were previously hard-coded for the chunking subtask of the Inspire system. Chunking is the identification of short phrases such as noun phrases, and relies mainly on Part-of-Speech (POS) tags. We evaluate our results using real datasets obtained from the SemEval2016 Task-2 iSTS competition, so as to work with a real application that can be evaluated objectively using the test sets provided by experts. The Inspire system participated in the SemEval2016 Task-2 iSTS competition in the subtasks of predicting chunk similarity alignments for gold chunks and system-generated chunks for three different datasets. The Inspire system extended the basic ideas of the SemEval2015 iSTS participant NeRoSim by realising the rules in logic programming and obtaining the result with an Answer Set solver. To prepare the input for the logic program, the PunktTokenizer, Word2Vec, and WordNet APIs of NLTK, and the Part-of-Speech (POS) and Named-Entity-Recognition (NER) taggers from Stanford CoreNLP were used. For the chunking subtask, a joint POS-tagger and dependency parser was used, based on which an Answer Set program determined the chunks. The Inspire system ranked third place overall and first place in one of the competition datasets in the gold chunk subtask. For the above-mentioned system, we decided to automate the sentence chunking process by learning the ASP rules using a statistical-logical method that combines rule-based and statistical artificial intelligence, namely ILP. ILP has been applied to a variety of NLP problems, including parsing, information extraction, and question answering. XHAIL, the ILP tool we use, aims at generating a hypothesis, which is a logic program, from given background knowledge and examples of structured knowledge, based on information provided by the POS tags. One of the main challenges was to extend the XHAIL algorithm for ILP, which is based on ASP. With respect to processing natural language, ILP can cater for the constant change in how language is used on a daily basis. At the same time, ILP does not require the huge numbers of training examples that other statistical methods do, and it produces interpretable results, that is, a set of rules, which can be analysed and tweaked if necessary. As contributions, XHAIL was extended with (i) a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, (ii) better usage of modern solver technology using recently developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results.
    These improvements were evaluated on the subtask of sentence chunking using the same three datasets obtained from the SemEval2016 Task-2 competition. Results show that these improvements allow learning on bigger datasets, with results of similar quality to state-of-the-art systems on the same task. Moreover, the hypotheses obtained from the individual datasets were compared to each other to gain insights into the structure of each dataset. Using ILP to extend our Inspire system not only automates the process of chunking the sentences but also provides us with interpretable models that give a deeper understanding of the data being used and how it can be manipulated, a feature that is absent in popular machine learning methods.
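
    Inspire encodes its chunking rules in ASP; as a hedged illustration in plain Python (not the actual ASP rules, nor the hypothesis XHAIL learns), a POS-driven rule of the same flavour might open a new chunk at a verb, preposition, or punctuation token and otherwise extend the current one. The tag set and rule below are invented for the example.

```python
# Toy POS-driven chunker illustrating the kind of rule the system
# expresses in ASP and learns with XHAIL.

def chunk(tagged):
    """tagged: list of (token, POS) pairs; returns a list of chunks."""
    chunks, current = [], []
    for tok, pos in tagged:
        if current and pos in {"VB", "VBD", "VBZ", "IN", ","}:
            chunks.append(current)        # rule fires: start a new chunk
            current = []
        current.append(tok)
    if current:
        chunks.append(current)
    return chunks

print(chunk([("The", "DT"), ("cat", "NN"), ("sat", "VBD"),
             ("on", "IN"), ("the", "DT"), ("mat", "NN")]))
# [['The', 'cat'], ['sat'], ['on', 'the', 'mat']]
```

    The point of learning such rules with ILP rather than hand-coding them is that the condition set (here, the POS tags that open a chunk) is induced from labeled examples and remains inspectable afterwards.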