
    Ontology Learning Using Formal Concept Analysis and WordNet

    Manual ontology construction requires time, resources, and domain specialists, so automating or semi-automating part of the process is desirable. This project and dissertation present a framework based on Formal Concept Analysis and WordNet for learning concept hierarchies from free text. The process proceeds in steps. First, the documents are Part-Of-Speech tagged and then parsed to produce sentence parse trees. Verb/noun dependencies are then derived from the parse trees. After lemmatizing, pruning, and filtering the word pairs, the formal context is created. Because the parser output may be erroneous and not all derived pairs are interesting, the formal context may contain erroneous and uninteresting pairs, and it may be large when constructed from a large free-text corpus. Deriving the concept lattice from the formal context may therefore take longer, depending on the size and complexity of the data. Reducing the formal context can thus eliminate erroneous and uninteresting pairs and speed up concept lattice derivation; WordNet-based and frequency-based reduction approaches are tested. Finally, the formal concept lattice is computed and transformed into a classical concept hierarchy. To evaluate the outcome, the reduced concept lattice is compared to the original. Despite several system constraints and component discrepancies that may prevent firm conclusions, the results suggest that the concept hierarchies produced in this project and dissertation are promising. First, the reduced concept lattice and the original share substantial commonalities. Second, linguistic or statistical methods can reduce the size of the formal context. Finally, the WordNet-based and frequency-based approaches reduce the formal context in different ways, and the order in which they are applied is examined to reduce the context efficiently.
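    The following is a minimal, self-contained sketch of the formal-context step described above: nouns act as objects, the verbs they depend on act as attributes, and all formal concepts are derived by closing the object intents under intersection. The toy verb/noun pairs and the brute-force closure are purely illustrative; the dissertation's actual corpus, filtering steps, and lattice algorithm are not reproduced here.

        from itertools import combinations

        # Toy formal context: objects are nouns, attributes are the verbs they
        # co-occur with in dependency pairs (pairs invented for illustration).
        context = {
            "dog": {"run", "eat", "bark"},
            "cat": {"run", "eat", "purr"},
            "car": {"run", "park"},
        }

        def derive_concepts(context):
            """Derive all formal concepts (extent, intent) by brute force:
            every concept intent is an intersection of object intents."""
            all_attrs = set().union(*context.values())
            intents = {frozenset(all_attrs)}  # intent of the bottom concept
            for r in range(1, len(context) + 1):
                for objs in combinations(context, r):
                    intents.add(frozenset(set.intersection(*(context[o] for o in objs))))
            concepts = []
            for intent in intents:
                extent = frozenset(o for o, attrs in context.items() if intent <= attrs)
                concepts.append((extent, intent))
            return concepts

        for extent, intent in sorted(derive_concepts(context), key=lambda c: len(c[1])):
            print(sorted(extent), "<->", sorted(intent))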

    Head-Driven Phrase Structure Grammar

    Head-Driven Phrase Structure Grammar (HPSG) is a constraint-based or declarative approach to linguistic knowledge, which analyses all descriptive levels (phonology, morphology, syntax, semantics, pragmatics) with feature-value pairs, structure sharing, and relational constraints. In syntax it assumes that expressions have a single relatively simple constituent structure. This volume provides a state-of-the-art introduction to the framework. Various chapters discuss basic assumptions and formal foundations, describe the evolution of the framework, and go into the details of the main syntactic phenomena. Further chapters are devoted to non-syntactic levels of description. The book also considers related fields and research areas (gesture, sign languages, computational linguistics) and includes chapters comparing HPSG with other frameworks (Lexical Functional Grammar, Categorial Grammar, Construction Grammar, Dependency Grammar, and Minimalism).
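    As a rough illustration of the feature-value machinery mentioned above, the sketch below unifies two attribute-value structures represented as nested Python dictionaries. The feature names (HEAD, SUBJ, NUM, PERS) are conventional examples, not taken from the volume, and the code ignores typing and structure sharing, which full HPSG feature structures require.

        def unify(fs1, fs2):
            """Recursively unify two feature structures given as nested dicts.
            Returns the merged structure, or None if atomic values clash."""
            if isinstance(fs1, dict) and isinstance(fs2, dict):
                result = dict(fs1)
                for feat, val in fs2.items():
                    if feat in result:
                        merged = unify(result[feat], val)
                        if merged is None:
                            return None
                        result[feat] = merged
                    else:
                        result[feat] = val
                return result
            return fs1 if fs1 == fs2 else None

        # A verb's subject constraint and a candidate subject (toy values).
        verb = {"HEAD": {"POS": "verb"}, "SUBJ": {"HEAD": {"POS": "noun", "NUM": "sg"}}}
        subject = {"SUBJ": {"HEAD": {"NUM": "sg", "PERS": "3"}}}
        print(unify(verb, subject))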

    Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)


    Towards generic relation extraction

    A vast amount of usable electronic data is in the form of unstructured text. The relation extraction task aims to identify useful information in text (e.g., PersonW works for OrganisationX, GeneY encodes ProteinZ) and recode it in a format such as a relational database that can be more effectively used for querying and automated reasoning. However, adapting conventional relation extraction systems to new domains or tasks requires significant effort from annotators and developers. Furthermore, previous adaptation approaches based on bootstrapping start from example instances of the target relations, thus requiring that the correct relation type schema be known in advance. Generic relation extraction (GRE) addresses the adaptation problem by applying generic techniques that achieve comparable accuracy when transferred, without modification of model parameters, across domains and tasks. Previous work on GRE has relied extensively on various lexical and shallow syntactic indicators. I present new state-of-the-art models for GRE that incorporate governor-dependency information. I also introduce a dimensionality reduction step into the GRE relation characterisation sub-task, which serves to capture latent semantic information and leads to significant improvements over an unreduced model. Comparison of dimensionality reduction techniques suggests that latent Dirichlet allocation (LDA) – a probabilistic generative approach – successfully incorporates a larger and more interdependent feature set than a model based on singular value decomposition (SVD) and performs as well as or better than SVD on all experimental settings. Finally, I will introduce multi-document summarisation as an extrinsic test bed for GRE and present results which demonstrate that the relative performance of GRE models is consistent across tasks and that the GRE-based representation leads to significant improvements over a standard baseline from the literature. Taken together, the experimental results 1) show that GRE can be improved using dependency parsing and dimensionality reduction, 2) demonstrate the utility of GRE for the content selection step of extractive summarisation and 3) validate the GRE claim of modification-free adaptation for the first time with respect to both domain and task. This thesis also introduces data sets derived from publicly available corpora for the purpose of rigorous intrinsic evaluation in the news and biomedical domains.
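    A hedged sketch of the dimensionality-reduction idea, using scikit-learn's LDA implementation as a stand-in: each relation mention is represented by a bag of surface and dependency-label features (the strings below are invented), reduced to a small topic mixture, and mentions are then compared in the reduced space. This only illustrates the characterisation step, not the thesis's actual feature set or models.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.metrics.pairwise import cosine_similarity

        # Each "document" is the feature string of one relation mention, e.g. the
        # words and dependency labels between the two entities (invented examples).
        mentions = [
            "works for nsubj prep_for",
            "is employed by nsubjpass agent",
            "encodes dobj",
            "is transcribed into nsubjpass prep_into",
        ]

        counts = CountVectorizer().fit_transform(mentions)

        # Reduce the sparse feature space to a handful of latent topics; mentions
        # of the same underlying relation should get similar topic mixtures.
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        topic_mix = lda.fit_transform(counts)

        print(cosine_similarity(topic_mix))  # pairwise similarity of mentions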

    Instance-based natural language generation

    In recent years, ranking approaches to Natural Language Generation have become increasingly popular. They abandon the idea of generation as a deterministic decision-making process in favour of approaches that combine overgeneration with ranking at some stage in processing. In this thesis, we investigate the use of instance-based ranking methods for surface realization in Natural Language Generation. Our approach to instance-based Natural Language Generation employs two basic components: a rule system that generates a number of realization candidates from a meaning representation and an instance-based ranker that scores the candidates according to their similarity to examples taken from a training corpus. The instance-based ranker uses information retrieval methods to rank output candidates. Our approach is corpus-based in that it uses a treebank (a subset of the Penn Treebank II containing management succession texts) in combination with manual semantic markup to automatically produce a generation grammar. Furthermore, the corpus is also used by the instance-based ranker. The semantic annotation of a test portion of the compiled subcorpus serves as input to the generator. In this thesis, we develop an efficient search technique for identifying the optimal candidate based on the A*-algorithm, detail the annotation scheme and grammar construction algorithm, and show how a Rete-based production system can be used for efficient candidate generation. Furthermore, we examine the output of the generator and discuss issues like input coverage (completeness), fluency and faithfulness that are relevant to surface generation in general.
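    A minimal sketch of the overgenerate-and-rank idea: candidates produced by a (here omitted) rule system are scored by their similarity to corpus instances, using TF-IDF cosine similarity as a simple stand-in for the information-retrieval ranker. The example sentences are invented, and the thesis's A*-based search and Rete-based candidate generation are not shown.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics.pairwise import cosine_similarity

        # Corpus examples and overgenerated candidates, invented for illustration.
        corpus_examples = [
            "John Smith was named president of the company .",
            "She will succeed Mary Jones as chief executive .",
        ]
        candidates = [
            "president of the company was named John Smith .",
            "John Smith was named president of the company .",
            "named was John Smith the company president of .",
        ]

        vectorizer = TfidfVectorizer()
        example_vecs = vectorizer.fit_transform(corpus_examples)
        candidate_vecs = vectorizer.transform(candidates)

        # Score each candidate by its closest corpus instance and rank best-first.
        scores = cosine_similarity(candidate_vecs, example_vecs).max(axis=1)
        for score, cand in sorted(zip(scores, candidates), reverse=True):
            print(f"{score:.3f}  {cand}")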

    Detecting grammatical errors with treebank-induced, probabilistic parsers

    Today's grammar checkers often use hand-crafted rule systems that define acceptable language. The development of such rule systems is labour-intensive and has to be repeated for each language. At the same time, grammars automatically induced from syntactically annotated corpora (treebanks) are successfully employed in other applications, for example text understanding and machine translation. At first glance, treebank-induced grammars seem to be unsuitable for grammar checking as they massively over-generate and fail to reject ungrammatical input due to their high robustness. We present three new methods for judging the grammaticality of a sentence with probabilistic, treebank-induced grammars, demonstrating that such grammars can be successfully applied to automatically judge the grammaticality of an input string. Our best-performing method exploits the differences between parse results for grammars trained on grammatical and ungrammatical treebanks. The second approach builds an estimator of the probability of the most likely parse using grammatical training data that has previously been parsed and annotated with parse probabilities. If the estimated probability of an input sentence (whose grammaticality is to be judged by the system) is higher by a certain amount than the actual parse probability, the sentence is flagged as ungrammatical. The third approach extracts discriminative parse tree fragments in the form of CFG rules from parsed grammatical and ungrammatical corpora and trains a binary classifier to distinguish grammatical from ungrammatical sentences. The three approaches are evaluated on a large test set of grammatical and ungrammatical sentences. The ungrammatical test set is generated automatically by inserting common grammatical errors into the British National Corpus. The results are compared to two traditional approaches, one that uses a hand-crafted, discriminative grammar, the XLE ParGram English LFG, and one based on part-of-speech n-grams. In addition, the baseline methods and the new methods are combined in a machine learning-based framework, yielding further improvements
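    The second approach above can be sketched as follows, with invented numbers and deliberately crude features: a regressor trained on grammatical sentences estimates how probable the best parse of a sentence "should" be, and the sentence is flagged when its actual parse probability falls short of that estimate by more than a margin. The real system uses a treebank-induced probabilistic parser and richer features; this only illustrates the flagging logic.

        from sklearn.linear_model import LinearRegression

        def features(sentence):
            toks = sentence.split()
            return [len(toks), sum(len(t) for t in toks) / len(toks)]

        # Grammatical training sentences paired with the log probability a parser
        # assigned to their best parse (all numbers invented for illustration).
        train_sents = ["the dog barked .", "a cat sat on the mat .", "she reads books ."]
        train_logprobs = [-18.2, -27.5, -15.9]

        estimator = LinearRegression().fit([features(s) for s in train_sents], train_logprobs)

        def flag_ungrammatical(sentence, actual_logprob, margin=5.0):
            """Flag the sentence when its actual parse probability is lower, by more
            than `margin`, than what the grammatical training data suggests."""
            expected = estimator.predict([features(sentence)])[0]
            return expected - actual_logprob > margin

        print(flag_ungrammatical("dog the barked .", actual_logprob=-30.0))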

    Monolingual Sentence Rewriting as Machine Translation: Generation and Evaluation

    In this thesis, we investigate approaches to paraphrasing entire sentences within the constraints of a given task, which we call monolingual sentence rewriting. We introduce a unified framework for monolingual sentence rewriting, and apply it to three representative tasks: sentence compression, text simplification, and grammatical error correction. We also perform a detailed analysis of the evaluation methodologies for each task, identify bias in common evaluation techniques, and propose more reliable practices. Monolingual rewriting can be thought of as translating between two types of English (such as from complex to simple), and therefore our approach is inspired by statistical machine translation. In machine translation, a large quantity of parallel data is necessary to model the transformations from input to output text. Parallel bilingual data naturally occurs between common language pairs (such as English and French), but for monolingual sentence rewriting, there is little existing parallel data and annotation is costly. We modify the statistical machine translation pipeline to harness monolingual resources and insights into task constraints in order to drastically diminish the amount of annotated data necessary to train a robust system. Our method generates more meaning-preserving and grammatical sentences than earlier approaches and requires less task-specific data. Once candidate sentences are generated, it is crucial to have reliable evaluation methods. Sentential paraphrases must fulfill a variety of requirements: preserve the meaning of the original sentence, be grammatical, and meet any stylistic or task-specific constraints. We analyze common evaluation practices and propose better methods that more accurately measure the quality of output. Often overlooked, robust automatic evaluation methodology is necessary for improving systems, and this work presents new metrics and outlines important considerations for reliably measuring the quality of the generated text
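    As a hedged illustration of task-constrained candidate scoring (not the metrics proposed in the thesis), the sketch below combines a crude meaning-preservation proxy (content-word overlap) with a task-specific constraint (a target compression ratio) to rank rewrites of a sentence. All sentences and weights are invented.

        from collections import Counter

        def meaning_overlap(source, candidate):
            """Fraction of source tokens preserved in the candidate (crude proxy
            for meaning preservation)."""
            s, c = Counter(source.lower().split()), Counter(candidate.lower().split())
            return sum((s & c).values()) / max(sum(s.values()), 1)

        def length_score(source, candidate, target_ratio=0.7):
            """Reward candidates close to a task-specific target compression ratio."""
            ratio = len(candidate.split()) / max(len(source.split()), 1)
            return 1.0 - abs(ratio - target_ratio)

        def score(source, candidate, w_meaning=0.7, w_length=0.3):
            return w_meaning * meaning_overlap(source, candidate) + w_length * length_score(source, candidate)

        source = "the committee , which met yesterday , finally approved the new budget"
        candidates = [
            "the committee approved the new budget",
            "the committee met yesterday",
        ]
        for cand in sorted(candidates, key=lambda c: score(source, c), reverse=True):
            print(f"{score(source, cand):.3f}  {cand}")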

    Finding answers to definition questions on the web

    Fundamentally, question answering systems are designed to respond automatically to queries posed by users in natural language. The first step in the answering process is query analysis, whose goal is to classify the query according to a set of pre-specified types. Traditionally, these classes include: factoid, definition, and list. Systems then choose the answering method according to the class recognised in this early phase. In short, this thesis focuses exclusively on strategies for tackling definition questions (e.g., "Who is Ben Bernanke?"). This sort of question has become especially interesting in recent years, due to the significant number of such queries submitted to search engines. Most advances in definition question answering have been made under the umbrella of the Text REtrieval Conference (TREC). This is, more precisely, a framework for testing systems operating on a collection of news articles. The objective of chapter one is therefore to describe this framework and to present additional introductory aspects of definition question answering, including: (a) how definition questions are prompted by individuals; (b) the different conceptions of definition, and thus of answers; and (c) the various metrics exploited for assessing systems. Since the inception of TREC, systems have put manifold approaches to discovering answers to the test, throwing some light onto several key aspects of this problem. On this account, chapter four goes over a selection of notable TREC systems. This selection is not aimed at completeness, but rather at highlighting the leading features of these systems. For the most part, systems benefit from knowledge bases (e.g., Wikipedia) for obtaining descriptions of the concept being defined (a.k.a. the definiendum). These descriptions are thereafter projected onto the array of candidate answers as a means of discerning the correct answer. In other words, these knowledge bases play the role of annotated resources, and most systems attempt to find, across the collection of news articles, the answer candidates that are most similar to these descriptions. The cornerstone of this thesis is the assumption that it is plausible to devise competitive, and hopefully better, systems without the need for annotated resources. Although this descriptive knowledge is helpful, it is the belief of the author that such approaches are built on two wrong premises: 1. It is arguable that the senses or contexts related to the definiendum across knowledge bases are the same senses or contexts found among the instances across the array of answer candidates. This observation also extends to the fact that not all descriptions within the group of putative answers are necessarily covered by knowledge bases, even though they might refer to the same contexts or senses. 2. Finding an efficient projection strategy does not necessarily entail a good procedure for discerning descriptive knowledge, because it shifts the goal of the task to "more like this set" instead of analysing whether or not each candidate bears the characteristics of a description. In other words, the coverage given by knowledge bases for a specific definiendum is not wide enough to learn all the characteristics that typify its descriptions, so that systems are capable of identifying all answers within the set of candidates. From another angle, a conventional projection methodology can be seen as a finder of lexical analogies.
All in all, this thesis investigates models that disregard this kind of annotated resource and projection strategy. In effect, it is the belief of the author that a robust technique of this sort can be integrated with traditional projection methodologies, thereby bringing about an enhancement in performance. The major contributions of this thesis are presented in chapters five, six and seven. There are several ways of understanding this structure. For example, chapter five presents a general framework for answering definition questions in several languages. The primary goal of this study is to design a lightweight definition question answering system operating on web snippets and two languages: English and Spanish. The idea is to utilise web snippets as a source of descriptive information in several languages, and a high degree of language independence is achieved by relying on as little linguistic knowledge as possible. More precisely, this system makes use of statistical methods and a list of stop-words, as well as a set of language-dependent definition patterns. In detail, chapter five branches into two more specific studies. The first study is essentially aimed at capitalising on redundancy for detecting answers (e.g., word frequency counts across answer candidates). Although this type of feature has been widely used by TREC systems, this study focuses on its impact on different languages, and on its benefits when applied to web snippets instead of a collection of news documents. An additional motivation behind targeting web snippets is the hope of studying systems working on more heterogeneous corpora, without the need to download full documents. For instance, on the Internet the number of distinct senses for the definiendum considerably increases, thus making it necessary to consider a sense discrimination technique. For this purpose, the system presented in this chapter takes advantage of an unsupervised approach premised on Latent Semantic Analysis. Although the outcome of this study shows that sense discrimination is hard to achieve when operating solely on web snippets, it also reveals that they are a fruitful source of descriptive knowledge, and that their extraction poses exciting challenges. The second branch extends the first study by exploiting multilingual knowledge bases (i.e., Wikipedia) for ranking putative answers. Generally speaking, it makes use of word association norms deduced from sentences that match definition patterns across Wikipedia. In order to adhere to the premise of not profiting from articles related to a specific definiendum, these sentences are anonymised by replacing the concept with a placeholder, and the word norms are learnt from all training sentences, instead of only from the Wikipedia page about the particular definiendum. The results of this study indicate that the use of these resources can also be beneficial; in particular, they reveal that word association norms are a cost-efficient solution. However, the size of the corpus decreases markedly for languages other than English, indicating that it is insufficient for designing models for those languages. Later, chapter six gets more specific and deals only with the ranking of answer candidates in English. The reason for abandoning Spanish is the sparseness observed across both the redundancy from the Internet and the training material mined from Wikipedia.
This sparseness is considerably greater than in the case of English, and it makes learning powerful statistical models more difficult. This chapter presents a novel way of modelling definitions grounded in n-gram language models inferred from the lexicalised dependency tree representation of the training material acquired in the study of chapter five. These models are contextual in the sense that they are built in relation to the semantics of the sentence. Generally, these semantics can be perceived as the distinct types of definienda (e.g., footballer, language, artist, disease, and tree). This study additionally investigates the effect of some features on these context models (i.e., named entities and part-of-speech tags). Overall, the results obtained by this approach are encouraging, in particular in terms of increasing the accuracy of the pattern matching. However, it was observed experimentally that a training corpus comprising only positive examples (descriptions) is, in all likelihood, not enough to achieve perfect accuracy, because these models cannot deduce the characteristics that typify non-descriptive content. More importantly, as future work, context models offer the chance to study how different contexts can be amalgamated (smoothed) in agreement with their semantic similarities in order to improve performance. Subsequently, chapter seven gets even more specific and searches for the set of properties that can aid in discriminating descriptions from other kinds of text. Note that this study considers all kinds of descriptions, including those that do not match definition patterns. In so doing, Maximum Entropy models are constructed on top of an automatically acquired large-scale training corpus, which encompasses descriptions from Wikipedia and non-descriptions from the Internet. Roughly speaking, different models are constructed as a means of studying the impact of assorted properties: surface features, named entities, part-of-speech tags, chunks, and, more interestingly, attributes derived from the lexicalised dependency graphs. In general, the results corroborate the efficiency of features taken from dependency graphs, especially the root node and n-gram paths. Experiments conducted on test sets with various characteristics suggest that it is also plausible to find attributes that port to other corpora. Chapters two and three are supplementary. The former examines different strategies for trawling the Web for descriptive knowledge. In essence, this chapter touches on several strategies geared towards boosting the recall of descriptive sentences across web snippets, especially sentences that match widespread definition patterns. This is a side study, but one instrumental to the core of this thesis, as systems targeted at the Internet need effective crawling techniques. Chapter three, in turn, has two goals: (a) presenting some components used by the strategies outlined in the last three chapters, thereby helping to focus on key aspects of the ranking methodologies and to clearly present the relevant aspects of the approaches laid out in these chapters; and (b) fleshing out some characteristics that make separating genuine from misleading answer candidates difficult, particularly across sentences matching definition patterns. Chapter three is helpful for understanding part of the linguistic phenomena that the later chapters deal with.
On a final note about the organisation of this thesis: since there is a myriad of techniques, chapters six and seven begin by dissecting the related work closest to each strategy. The main contribution of each chapter begins at sections 6.5 and 7.6, respectively. These two sections start with a discussion and comparison of the proposed methods and the related work presented in their corresponding preceding sections. This organisation is intended to facilitate the contextualisation of the proposed approaches, as there are many question answering systems with manifold characteristics.
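    As a hedged illustration of the chapter-seven idea of separating descriptions from non-descriptions, the sketch below trains a logistic-regression classifier (equivalent to a Maximum Entropy model for classification) on a toy corpus in which the definiendum has been replaced by a placeholder. Word n-grams stand in for the thesis's richer features (named entities, part-of-speech tags, chunks, dependency-graph paths); all sentences are invented.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Toy training data: descriptions (label 1) vs. non-descriptions (label 0).
        # The definiendum is anonymised with the placeholder "TERM", mirroring the
        # anonymisation described above.
        sentences = [
            ("TERM is the chairman of the Federal Reserve .", 1),
            ("TERM is a programming language developed at Bell Labs .", 1),
            ("TERM was founded in 1998 by two Stanford students .", 1),
            ("buy TERM online at the best price today !", 0),
            ("I met TERM at the party last night .", 0),
            ("click here for TERM reviews and ratings .", 0),
        ]
        texts, labels = zip(*sentences)

        # Word unigrams and bigrams stand in for the richer feature set.
        model = make_pipeline(
            CountVectorizer(ngram_range=(1, 2)),
            LogisticRegression(max_iter=1000),  # logistic regression == MaxEnt classifier
        )
        model.fit(texts, labels)

        test = ["TERM is a disease that affects the nervous system .",
                "get directions to TERM from your location ."]
        print(model.predict(test))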