    Efficient Non-deterministic Search in Structured Prediction: A Case Study on Syntactic Parsing

    Non-determinism occurs naturally in many search-based machine learning and natural language processing (NLP) problems. For example, the goal of parsing is to construct the syntactic tree structure of a sentence given a grammar. Agenda-based parsing is a dynamic programming approach that finds the most likely syntactic tree of a sentence according to a probabilistic grammar. A chart maintains all possible subtrees for the different spans of the sentence, and an agenda ranks all candidate constituents. The parser chooses only one constituent from the agenda per step. Non-determinism arises naturally in agenda-based parsing because a new constituent is often built by combining items produced several steps earlier. Unfortunately, as in most other NLP problems, the search space is huge and exhaustive search is infeasible, yet users expect a fast and accurate system. In this dissertation, I focus on the question "Why, when, and how shall we take advantage of non-determinism?" and demonstrate its efficacy in improving the parser's speed and/or accuracy. Existing approaches such as search-based imitation learning or reinforcement learning have different limitations when applied to a large NLP system. The solution proposed in this dissertation is that we should train the system non-deterministically and test it deterministically where possible; I also show that it is better to learn with oracles than with simple heuristics. We start by solving a generic Markov Decision Process with a non-deterministic agent, show its theoretical convergence guarantees, and verify its efficiency on maze-solving problems. We then focus on agenda-based parsing. To re-prioritize the parser, we model decoding as a Markov Decision Process with a large state/action space. We discuss the advantages and disadvantages of existing techniques and propose a hybrid reinforcement/apprenticeship learning algorithm to trade off speed and accuracy. We also propose a dynamic pruner with features that depend on the run-time status of the chart and agenda, and we analyze the importance of those features for the pruning classification. Our models show results comparable to state-of-the-art strategies.
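
    As a minimal illustration of the chart-and-agenda loop described above, the following sketch implements a best-first agenda parser for a binarized probabilistic grammar. It is a simplified, hypothetical rendering, not the dissertation's implementation; the data layouts (`lexicon`, `binary_rules`) and the one-item-per-step loop are assumptions chosen to mirror the description.

```python
import heapq

def agenda_parse(words, lexicon, binary_rules):
    """Best-first agenda parsing sketch (illustrative only).

    lexicon:      dict word -> list of (label, logprob)
    binary_rules: dict (left_label, right_label) -> list of (parent, logprob)
    Items are (span, label) constituents scored with log-probabilities;
    exactly one constituent is popped from the agenda per step.
    """
    chart = {}    # (i, j, label) -> best logprob of a finished constituent
    agenda = []   # max-heap simulated by pushing negated scores
    for i, w in enumerate(words):
        for label, lp in lexicon.get(w, []):
            heapq.heappush(agenda, (-lp, i, i + 1, label))
    while agenda:
        neg, i, j, label = heapq.heappop(agenda)
        if (i, j, label) in chart:   # already finished with a better score
            continue
        score = -neg
        chart[(i, j, label)] = score
        # Non-determinism: the partner constituent may have entered the
        # chart many steps earlier, in either adjacent position.
        for (a, b, other), s in list(chart.items()):
            if b == i:               # [a..i] + [i..j]
                for parent, lp in binary_rules.get((other, label), []):
                    heapq.heappush(agenda, (-(s + score + lp), a, j, parent))
            if a == j:               # [i..j] + [j..b]
                for parent, lp in binary_rules.get((label, other), []):
                    heapq.heappush(agenda, (-(score + s + lp), i, b, parent))
    return chart.get((0, len(words), 'S'))   # logprob of the best full parse
```

    Re-prioritization as studied in the dissertation would amount to replacing the plain log-probability priority with a learned scoring function, and pruning to discarding low-priority agenda items based on run-time features of the chart and agenda.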

    Statistical relational learning of semantic models and grammar rules for 3D building reconstruction from 3D point clouds

    Formal grammars are well suited to the estimation of models with an a priori unknown number of parameters, such as buildings, and have proven their worth for 3D modeling and reconstruction of cities. However, generating and designing the corresponding grammar rules is a laborious task that relies on expert knowledge. This thesis presents novel approaches for reducing this effort using advanced machine learning methods, resulting in automatically learned, sophisticated grammar rules. Indeed, learning a wide range of sophisticated rules that reflect the variety and complexity of buildings is a challenging task. This is especially the case if building structures and the underlying aggregation hierarchies are to be learned simultaneously with the building parameters and the constraints among them for a semantic interpretation. Thus, this thesis follows an incremental approach that separates the structure learning from the learning of the parameter distributions of building parts. Moreover, existing procedural approaches with formal grammars are mostly better suited to the generation of virtual city models than to the reconstruction of existing buildings. To this end, Inductive Logic Programming (ILP) techniques are transferred and applied for the first time to the field of 3D building modeling. This enables the automatic learning of declarative logic programs, which are equivalent in expressive power to attribute grammars and separate the representation of buildings and their parts from the reconstruction task. A stepwise bottom-up learning procedure, starting from the smallest atomic features of a building part together with the semantic, topological and geometric constraints, is the key to successfully learning a whole building part. Only a few examples are sufficient to learn from both precise and noisy observations. Learning from uncertain data is realized using probability density functions, decision trees and uncertain projective geometry, which enables the handling and modeling of uncertain topology as well as geometric reasoning under noise. The uncertainty of the models themselves is also considered: a novel method is developed for learning Weighted Attribute Context-Free Grammars (WACFG). On the one hand, the structure learning of façades (the context-free part of the grammar) is performed on annotated derivation trees using specific Support Vector Machines (SVMs), which are able to derive probabilistic models from structured data and to predict the most likely tree for given observations. On the other hand, to the best of my knowledge, Statistical Relational Learning (SRL), especially Markov Logic Networks (MLNs), is applied for the first time to learn building part parameters (shape and location) as well as the constraints among these parts. The use of SRL makes it possible to profit from elegant relational logical descriptions while benefiting from the efficiency of statistical inference methods. In order to model latent prior knowledge and exploit the architectural regularities of buildings, a novel method is developed for the automatic identification of translational as well as axial symmetries, using a supervised SVM classifier. Building on the classification results, algorithms are designed for representing symmetries using context-free grammars derived from authoritative building footprints.
    In all steps, the machine learning is performed on real-world data such as 3D point clouds and building footprints, and the handling of uncertainty and occlusions is ensured. The presented methods have been successfully applied to real data; the corresponding classification and reconstruction results are shown.
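
    To make the Weighted Attribute Context-Free Grammar (WACFG) idea concrete, the sketch below shows one plausible encoding of a weighted rule whose attributes carry geometric parameters and whose constraints relate the parts. The class layout and the floor-height constraint are assumptions for illustration, not the thesis's actual data model.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Attrs = Dict[str, Dict[str, float]]   # part label -> attribute values

@dataclass
class WeightedRule:
    lhs: str                          # e.g. 'Facade'
    rhs: List[str]                    # e.g. ['Floor', 'Facade']
    weight: float                     # learned rule weight
    constraints: List[Callable[[Attrs], bool]] = field(default_factory=list)

def satisfied(rule: WeightedRule, attrs: Attrs) -> bool:
    """Check every attribute constraint of the rule against the parts'
    attributes, e.g. geometric relations between building parts."""
    return all(c(attrs) for c in rule.constraints)

# Hypothetical example: stacked floors in a facade share the same height.
equal_heights = lambda a: abs(a['Floor']['height'] - a['Facade']['floor_height']) < 0.1
facade_rule = WeightedRule('Facade', ['Floor', 'Facade'], weight=0.7,
                           constraints=[equal_heights])
print(satisfied(facade_rule, {'Floor': {'height': 2.8},
                              'Facade': {'floor_height': 2.8}}))   # True
```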

    An FPGA-based syntactic parser for large size real-life context-free grammars

    This thesis is at the crossroads of Natural Language Processing (NLP) and digital circuit design. It aims to deliver a custom hardware coprocessor for accelerating natural language parsing. The coprocessor has to parse real-life natural language and is intended to be useful in NLP applications that are time-constrained or need to process large amounts of data. More precisely, the three goals of this thesis are: (1) to propose an efficient FPGA-based coprocessor for natural language syntactic analysis that can deal with inputs in the form of word lattices, (2) to implement the coprocessor in a hardware tool ready for integration within an ordinary desktop computer, and (3) to offer an interface (i.e., a software library) between the hardware tool and a potential natural language software application running on the desktop computer. Field Programmable Gate Array (FPGA) technology was chosen as the core of the coprocessor implementation because it can efficiently exploit all levels of parallelism available in the implemented algorithms in a cost-effective solution, and because it makes it possible to efficiently design and test such a hardware coprocessor. A final reason is that future general-purpose processors are expected to contain reconfigurable resources; in such a context, an IP core implementing an efficient context-free parser, ready to be configured within those reconfigurable resources, would support any application relying on context-free parsing and running on that processor. The context-free grammar parsing algorithms that have been implemented are the standard CYK algorithm and an enhanced version of the CYK algorithm developed at the EPFL Artificial Intelligence Laboratory. These algorithms were selected (1) for their intrinsic properties of regular data flow and data processing, which make them well suited to a hardware implementation, (2) for producing partial parse trees, which makes them suitable for further shallow parsing, and (3) for being able to parse word lattices.
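
    Since the coprocessor implements the standard CYK algorithm, the following software sketch may help show why it maps well to hardware: the triply nested loop over span lengths, start positions and split points has a regular, data-independent structure. The recognizer below assumes a grammar in Chomsky normal form; the dictionary-based grammar encoding is an illustrative choice, not the thesis's hardware data layout.

```python
from itertools import product

def cyk_recognize(words, lexicon, binary_rules):
    """Standard CYK recognizer for a CNF grammar.

    lexicon:      dict word -> set of labels
    binary_rules: dict (B, C) -> set of parents A, for rules A -> B C
    """
    n = len(words)
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexicon.get(w, set()))
    for width in range(2, n + 1):              # span length
        for i in range(n - width + 1):         # span start
            j = i + width
            for k in range(i + 1, j):          # split point
                for B, C in product(chart[i][k], chart[k][j]):
                    chart[i][j] |= binary_rules.get((B, C), set())
    return 'S' in chart[0][n]
```

    Parsing word lattices, as targeted by the coprocessor, roughly amounts to indexing chart cells by lattice nodes rather than word positions.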

    Empirical machine translation and its evaluation

    In this thesis we have exploited current Natural Language Processing technology for Empirical Machine Translation and its Evaluation. On the one side, we have studied the problem of automatic MT evaluation. We have analyzed the main deficiencies of current evaluation methods, which arise, in our opinion, from the shallow quality principles upon which they are based. Instead of relying on the lexical dimension alone, we suggest a novel path towards heterogeneous evaluations. Our approach is based on the design of a rich set of automatic metrics devoted to capturing a wide variety of translation quality aspects at different linguistic levels (lexical, syntactic and semantic). These linguistic metrics have been evaluated over different scenarios. The most notable finding is that metrics based on deeper linguistic information (syntactic/semantic) produce more reliable system rankings than metrics limited to the lexical dimension, especially when the systems under evaluation are different in nature. However, at the sentence level, the performance of some of these metrics decreases significantly, which is mainly attributable to parsing errors. In order to improve sentence-level evaluation, apart from backing off to lexical similarity in the absence of parsing, we have also studied the possibility of combining the scores conferred by metrics at different linguistic levels into a single measure of quality. Two non-parametric strategies for metric combination have been presented; their main advantage is that the relative contribution of each metric to the overall score does not have to be adjusted. As a complementary issue, we show how to use the heterogeneous set of metrics to obtain automatic and detailed linguistic error-analysis reports. On the other side, we have studied the problem of lexical selection in Statistical Machine Translation. For that purpose, we have constructed a Spanish-to-English baseline phrase-based Statistical Machine Translation system and iterated across its development cycle, analyzing how to improve its performance through the incorporation of linguistic knowledge. First, we extended the system by combining shallow-syntactic translation models based on linguistic data views, obtaining a significant improvement. The system was further enhanced with dedicated discriminative phrase-translation models, which allow for a better representation of the translation context in which phrases occur, effectively yielding improved lexical choice. However, based on the proposed heterogeneous evaluation methods and the manual evaluations conducted, we found that improvements in lexical selection do not necessarily imply an improved overall syntactic or semantic structure; the incorporation of dedicated predictions into the statistical framework therefore requires further study. As a side question, we have studied one of the main criticisms against empirical MT systems, namely their strong domain dependence, and how its negative effects may be mitigated by properly combining outer knowledge sources when porting a system to a new domain. We have successfully ported an English-to-Spanish phrase-based Statistical Machine Translation system trained on the political domain to the domain of dictionary definitions. The two parts of this thesis are tightly connected, since the hands-on development of an actual MT system has allowed us to experience first-hand the role of the evaluation methodology in the development cycle of MT systems.
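
    As one concrete (assumed) instance of a non-parametric combination scheme in the spirit described above, the sketch below min-max normalizes each metric's scores across the candidate translations and averages them uniformly, so no per-metric weights need tuning. It is not necessarily one of the two strategies presented in the thesis.

```python
def uniform_combination(scores):
    """Combine heterogeneous metric scores without tuned weights.

    scores: dict metric_name -> list of raw scores, one per candidate
    returns: list of combined scores, one per candidate
    """
    n = len(next(iter(scores.values())))
    combined = [0.0] * n
    for values in scores.values():
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0          # guard against constant metrics
        for i, v in enumerate(values):
            combined[i] += (v - lo) / span
    return [c / len(scores) for c in combined]

# e.g. uniform_combination({'BLEU': [0.31, 0.28], 'SemanticRole': [0.55, 0.61]})
```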

    The Future of Information Sciences : INFuture2009 : Digital Resources and Knowledge Sharing
