Discovery of Linguistic Relations Using Lexical Attraction
This work has been motivated by two long-term goals: to understand how humans
learn language and to build programs that can understand language. Using a
representation that makes the relevant features explicit is a prerequisite for
successful learning and understanding. Therefore, I chose to represent
relations between individual words explicitly in my model. Lexical attraction
is defined as the likelihood of such relations. I introduce a new class of
probabilistic language models named lexical attraction models which can
represent long distance relations between words and I formalize this new class
of models using information theory.
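To make the information-theoretic formulation concrete, the sketch below approximates the lexical attraction of a word pair by its pointwise mutual information over co-occurrence counts within a small window. This is an illustrative simplification: the window size, the normalization, and the estimator itself are assumptions for exposition, not the thesis's exact definitions.

```python
import math
from collections import Counter

def lexical_attraction_scores(sentences, window=5):
    """Approximate lexical attraction as pointwise mutual information (PMI)
    between word pairs co-occurring within `window` tokens of each other."""
    word_counts, pair_counts, total = Counter(), Counter(), 0
    for sent in sentences:
        words = sent.lower().split()
        word_counts.update(words)
        total += len(words)
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                pair_counts[(w, v)] += 1
    # PMI = log2( p(w, v) / (p(w) p(v)) ); counts are crudely normalized
    # by corpus size, which is enough for ranking candidate relations.
    return {(w, v): math.log2((n / total) /
                              ((word_counts[w] / total) * (word_counts[v] / total)))
            for (w, v), n in pair_counts.items()}

sentences = ["the statue of liberty stands in new york",
             "i saw the statue from the plane"]
scores = lexical_attraction_scores(sentences)
print(sorted(scores.items(), key=lambda kv: -kv[1])[:5])
```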
Within the framework of lexical attraction, I developed an unsupervised
language acquisition program that learns to identify linguistic relations in a
given sentence. The only explicitly represented linguistic knowledge in the
program is lexical attraction. There is no initial grammar or lexicon built in
and the only input is raw text. Learning and processing are interdigitated. The
processor uses the regularities detected by the learner to impose structure on
the input. This structure enables the learner to detect higher level
regularities. Using this bootstrapping procedure, the program was trained on
100 million words of Associated Press material and was able to achieve 60%
precision and 50% recall in finding relations between content-words. Using
knowledge of lexical attraction, the program can identify the correct relations
in syntactically ambiguous sentences such as "I saw the Statue of Liberty flying over New York."
Comment: dissertation, 56 pages
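The learner/processor interaction can be sketched as a short bootstrapping loop. The sketch below reuses `lexical_attraction_scores` and `sentences` from the previous snippet; the greedy linking strategy and the reinforcement step are hypothetical simplifications of the actual program, which processes raw text at scale and respects structural constraints on the links.

```python
def link_sentence(words, scores):
    """Processor: greedily link the word pairs with the highest attraction,
    letting each word participate in at most one link (a simplification)."""
    candidates = [(scores.get((w, v), 0.0), i, j)
                  for i, w in enumerate(words)
                  for j, v in enumerate(words) if i < j]
    links, used = [], set()
    for s, i, j in sorted(candidates, reverse=True):
        if s <= 0:
            break
        if i not in used and j not in used:
            links.append((words[i], words[j]))
            used.update((i, j))
    return links

def bootstrap(sentences, rounds=3):
    linked_pairs = []
    for _ in range(rounds):
        # Learner: re-estimate attraction from the raw text.
        scores = lexical_attraction_scores(sentences)
        # Reinforce regularities found by the processor in earlier rounds.
        for pair in linked_pairs:
            scores[pair] = scores.get(pair, 0.0) + 1.0
        # Processor: impose structure on the input using those scores.
        linked_pairs = [p for s in sentences
                        for p in link_sentence(s.lower().split(), scores)]
    return linked_pairs

print(bootstrap(sentences))
```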
Exploiting word embeddings for modeling bilexical relations
There has been an exponential surge of text data in recent years. As a consequence, unsupervised methods that make use of this data have been growing steadily in the field of natural language processing (NLP). Word embeddings are low-dimensional vectors obtained with unsupervised techniques from large unlabelled corpora; words from the vocabulary are mapped to vectors of real numbers. Word embeddings aim to capture syntactic and semantic properties of words.
In NLP, many tasks involve computing the compatibility between lexical items under some linguistic relation. We call this type of relation a bilexical relation. This thesis defines statistical models for bilexical relations that centrally make use of word embeddings. Our principal aim is for the word embeddings to favor generalization to words not seen during the training of the model.
The thesis is structured in four parts. In the first part of this thesis, we present a bilinear model over word embeddings that leverages a small supervised dataset for a binary linguistic relation. Our learning algorithm exploits low-rank bilinear forms and induces a low-dimensional embedding tailored for a target linguistic relation. This results in compressed task-specific embeddings.
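As an illustration of this first part, the sketch below trains a low-rank bilinear scorer, score(x, y) = x^T U V^T y, with a logistic loss on synthetic data. The embeddings, pairs, labels, and dimensions are placeholders; only the low-rank parameterization reflects the model described above.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 100, 10                      # embedding dimension, rank of the form

# Synthetic stand-ins: pre-trained embeddings and labelled word pairs.
E = rng.normal(size=(500, d))                 # 500-word vocabulary
pairs = rng.integers(0, 500, size=(200, 2))   # (left word, right word)
labels = rng.integers(0, 2, size=200).astype(float)

# Low-rank bilinear form: score(x, y) = x^T U V^T y. The projections
# U^T x and V^T y act as compressed, task-specific embeddings.
U = 0.01 * rng.normal(size=(d, k))
V = 0.01 * rng.normal(size=(d, k))

X, Y = E[pairs[:, 0]], E[pairs[:, 1]]
lr = 0.1
for epoch in range(200):
    XU, YV = X @ U, Y @ V
    p = 1.0 / (1.0 + np.exp(-np.sum(XU * YV, axis=1)))   # logistic link
    g = (p - labels)[:, None]                            # dLoss/dscore
    U -= lr * X.T @ (g * YV) / len(labels)
    V -= lr * Y.T @ (g * XU) / len(labels)

print("train log-loss:",
      float(-np.mean(labels * np.log(p) + (1 - labels) * np.log(1 - p))))
```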
In the second part of our thesis, we extend our bilinear model to a ternary
setting and propose a framework for resolving prepositional phrase attachment ambiguity using word embeddings. Our models perform competitively with the state of the art. In addition, our method obtains significant improvements on out-of-domain tests by simply using word embeddings induced from the source and target domains.
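One plausible realization of such a ternary model is sketched below, under the assumption that each preposition selects its own pair of low-rank projections and that attachment is decided by comparing the verb's and the noun's scores against the preposition's object. The dimensions, the shared right-hand projection, and all data here are illustrative assumptions, not the thesis's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n_preps = 100, 10, 20
E = rng.normal(size=(500, d))                 # placeholder embeddings
U = 0.01 * rng.normal(size=(n_preps, d, k))   # one (U_p, V_p) per preposition
V = 0.01 * rng.normal(size=(n_preps, d, k))

def attach(verb_id, noun_id, prep_id, pp_noun_id):
    """Return 'verb' or 'noun' depending on which candidate head scores
    higher against the preposition's object under that preposition's form."""
    h = E[pp_noun_id] @ V[prep_id]            # shared right-hand projection
    s_verb = E[verb_id] @ U[prep_id] @ h
    s_noun = E[noun_id] @ U[prep_id] @ h
    return "verb" if s_verb > s_noun else "noun"

print(attach(1, 2, 3, 4))
```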
In the third part of this thesis, we further extend the bilinear models to expand vocabulary in the context of statistical phrase-based machine translation. Given a word in the source language, our model produces a probabilistic list of its possible translations in the target language. We do this by projecting pre-trained embeddings into a common subspace using a log-bilinear model. We empirically observe a significant improvement on an out-of-domain test set.
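The cross-lingual projection can be sketched as follows: a single linear map takes a source word's embedding into the target space, and p(target | source) is a softmax over dot products, i.e. a log-bilinear form. The embeddings are random placeholders and the training of the map is omitted; only the scoring scheme reflects the model described above.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 50
E_src = rng.normal(size=(300, d))    # source-language embeddings (placeholder)
E_tgt = rng.normal(size=(400, d))    # target-language embeddings (placeholder)
W = 0.01 * rng.normal(size=(d, d))   # learned projection (training omitted)

def translation_distribution(src_id):
    """p(target word | source word) under the log-bilinear model."""
    logits = E_tgt @ (W @ E_src[src_id])
    exp = np.exp(logits - logits.max())      # numerically stable softmax
    return exp / exp.sum()

p = translation_distribution(7)
print(np.argsort(p)[-5:])                    # five most probable translations
```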
In the final part of our thesis, we propose a non-linear model that maps initial word embeddings to task-tuned word embeddings, in the context of a neural network dependency parser. We demonstrate its use for improved dependency parsing, especially for sentences with unseen words. We also show downstream improvements on a sentiment analysis task.
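A minimal sketch of such a non-linear mapping is a one-hidden-layer network applied to any input embedding; the architecture and sizes below are hypothetical. Because the map is defined for every vector, words unseen during parser training still receive a task-tuned embedding, which is the property exploited above.

```python
import numpy as np

rng = np.random.default_rng(3)
d, h = 50, 64
W1, b1 = 0.1 * rng.normal(size=(d, h)), np.zeros(h)   # learned with the parser
W2, b2 = 0.1 * rng.normal(size=(h, d)), np.zeros(d)

def tune(embedding):
    """Map an initial embedding to a task-tuned one. Unseen words get a
    sensible output because the map applies to any input vector."""
    return np.tanh(embedding @ W1 + b1) @ W2 + b2

print(tune(rng.normal(size=d))[:5])
```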
Parsing and Evaluation. Improving Dependency Grammars Accuracy
Because parsers are still limited in analysing specific ambiguous constructions, the research presented in this thesis mainly aims to improve parsing performance by integrating knowledge that helps parsers deal with ambiguous linguistic phenomena. More precisely, this thesis provides empirical solutions to the disambiguation of prepositional phrase attachment and to argument recognition, in order to help parsers generate a more accurate syntactic analysis. Disambiguating these two highly ambiguous linguistic phenomena through the integration of knowledge about the language necessarily relies on linguistic and statistical strategies for knowledge acquisition.
The starting point of this research is the development of a rule-based grammar for Spanish and for Catalan following the theoretical basis of Dependency Grammar (Tesnière, 1959; Mel’čuk, 1988), in order to carry out two experiments on the integration of automatically-acquired knowledge. To build two robust grammars that understand a sentence, the FreeLing pipeline (Padró et al., 2010) has been used as a framework. In addition, an eclectic repertoire of criteria about the nature of syntactic heads is proposed by reviewing the postulates of Generative Grammar (Chomsky, 1981; Bonet and Solà, 1986; Haegeman, 1991) and Dependency Grammar (Tesnière, 1959; Mel’čuk, 1988). Furthermore, a set of dependency relations is provided and mapped to Universal Dependencies (McDonald et al., 2013).
Furthermore, an empirical evaluation method has been designed to carry out both a quantitative and a qualitative analysis. In particular, the dependency trees generated by the grammars are compared to real linguistic data. The quantitative evaluation is based on the Spanish Tibidabo Treebank (Marimon et al., 2014), which is large enough to support a real analysis of the grammars' performance and which has been annotated with the same formalism as the grammars, syntactic dependencies. Since the criteria of the two resources differ, a harmonization process has been applied by developing a set of rules that automatically adapt the criteria of the corpus to those of the grammars. With regard to qualitative evaluation, there are no available resources to evaluate Spanish and Catalan dependency grammars qualitatively. For this reason, a test suite of syntactic phenomena covering structure and word order has been built. To create a representative repertoire of the languages observed, descriptive grammars (Bosque and Demonte, 1999; Solà et al., 2002) and the SenSem Corpus (Vázquez and Fernández-Montraveta, 2015) have been used to capture relevant structures and word order patterns, respectively.
Thanks to these two tools, two experiments have been carried out in order to prove that knowledge integration improves parsing accuracy. On the one hand, the automatic learning of language models has been explored by means of statistical methods in order to disambiguate PP-attachment. More precisely, a model has been learned with a supervised classifier using Weka (Witten and Frank, 2005). Furthermore, an unsupervised model based on word embeddings has been applied (Mikolov et al., 2013a,b). The results of the experiment show that the supervised method is limited in predicting solutions for unseen data, which the unsupervised method resolves, since it provides a solution for any case. However, the unsupervised method is limited if it only learns from lexical data. For this reason, the training data needs to be enriched with the lexical value of the preposition, as well as with semantic and syntactic features. In addition, the number of patterns used to learn the language models has to be extended in order to have an impact on the grammars.
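A purely lexical version of the embedding-based method can be sketched as a cosine-similarity decision rule, which also makes the limitation noted above visible: nothing in the rule sees the preposition itself or any syntactic or semantic feature. The vectors below are random placeholders standing in for real embeddings.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def pp_attach(vec_verb, vec_noun, vec_pp_obj):
    """Attach the PP to whichever candidate head is more similar, in
    embedding space, to the preposition's object (lexical data only)."""
    if cosine(vec_verb, vec_pp_obj) > cosine(vec_noun, vec_pp_obj):
        return "verb"
    return "noun"

rng = np.random.default_rng(4)
print(pp_attach(rng.normal(size=50), rng.normal(size=50), rng.normal(size=50)))
```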
On the other hand, another experiment has been carried out in order to improve argument recognition in the grammars through the acquisition of linguistic knowledge. In this experiment, knowledge is acquired automatically by extracting verb subcategorization frames from the SenSem Corpus (Vázquez and Fernández-Montraveta, 2015), which contains verb predicates and their arguments annotated syntactically. From the extracted information, subcategorization frames have been grouped into subcategorization classes according to the patterns observed in the corpus. The results of integrating the subcategorization classes into the grammars show that this information increases the accuracy of argument recognition.
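The grouping of extracted frames into classes can be sketched as follows. The observation format and the "dominant frame" criterion are hypothetical simplifications of the procedure described above, chosen only to show the shape of the computation.

```python
from collections import Counter, defaultdict

# Each observation: (verb lemma, tuple of syntactic functions of its
# annotated arguments), as might be extracted from a corpus occurrence.
observations = [
    ("give",  ("subj", "dobj", "iobj")),
    ("give",  ("subj", "dobj")),
    ("sleep", ("subj",)),
    ("give",  ("subj", "dobj", "iobj")),
]

frame_counts = defaultdict(Counter)
for verb, frame in observations:
    frame_counts[verb][frame] += 1

# Assign each verb to the subcategorization class of its dominant frame.
classes = {verb: counts.most_common(1)[0][0]
           for verb, counts in frame_counts.items()}
print(classes)   # {'give': ('subj', 'dobj', 'iobj'), 'sleep': ('subj',)}
```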
The results of the research in this thesis show that grammar rules on their own are not expressive enough to resolve complex ambiguities. However, integrating knowledge about these ambiguities into the grammars can be decisive for disambiguation. On the one hand, statistical knowledge about PP-attachment can improve the grammars' accuracy, but syntactic and semantic information, as well as new PP-attachment patterns, need to be included in the language models in order to contribute to disambiguating this phenomenon. On the other hand, linguistic knowledge about verb subcategorization acquired from annotated linguistic resources has a positive influence on the grammars' accuracy.
Maximum Entropy Models For Natural Language Ambiguity Resolution
This thesis demonstrates that several important kinds of natural language ambiguities can be resolved to state-of-the-art accuracies using a single statistical modeling technique based on the principle of maximum entropy.
We discuss the problems of sentence boundary detection, part-of-speech tagging, prepositional phrase attachment, natural language parsing, and text categorization under the maximum entropy framework. In practice, we have found that maximum entropy models offer the following advantages:
State-of-the-art Accuracy: The probability models for all of the tasks discussed perform at or near state-of-the-art accuracies, or outperform competing learning algorithms when trained and tested under similar conditions. Methods which outperform those presented here require much more supervision in the form of additional human involvement or additional supporting resources.
Knowledge-Poor Features: The facts used to model the data, or features, are linguistically very simple, or knowledge-poor, yet they succeed in approximating complex linguistic relationships.
Reusable Software Technology: The mathematics of the maximum entropy framework are essentially independent of any particular task, and a single software implementation can be used for all of the probability models in this thesis.
The experiments in this thesis suggest that experimenters can obtain state-of-the-art accuracies on a wide range of natural language tasks, with little task-specific effort, by using maximum entropy probability models.
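The task-independence claimed above can be made concrete: in the sketch below, only the feature functions are task-specific, while the model form p(y | x) ∝ exp(Σ_i λ_i f_i(x, y)) and its training loop are reusable across tasks. The toy sentence-boundary features and all names are hypothetical, and the trainer uses plain gradient ascent rather than the iterative scaling methods common in this literature.

```python
import math
from collections import defaultdict

def train_maxent(data, classes, feats, iters=100, lr=0.5):
    """data: list of (x, y) pairs; feats(x, y) yields active feature names.
    Gradient of the log-likelihood = empirical counts - model expectations."""
    w = defaultdict(float)
    for _ in range(iters):
        grad = defaultdict(float)
        for x, y in data:
            # Model expectation of each feature under p(. | x).
            scores = {c: math.exp(sum(w[f] for f in feats(x, c)))
                      for c in classes}
            z = sum(scores.values())
            for c in classes:
                for f in feats(x, c):
                    grad[f] -= scores[c] / z
            # Empirical count of each feature.
            for f in feats(x, y):
                grad[f] += 1.0
        for f, g in grad.items():
            w[f] += lr * g / len(data)
    return w

# Toy task: decide whether a period marks a sentence boundary.
def feats(x, y):
    return [f"{k}={v}&y={y}" for k, v in x.items()]

data = [({"tok": ".",   "next_cap": True},  "boundary"),
        ({"tok": ".",   "next_cap": False}, "not"),
        ({"tok": "Mr.", "next_cap": True},  "not")]
w = train_maxent(data, ["boundary", "not"], feats)
print(max(w, key=w.get))
```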