
    Efficient Multi-Template Learning for Structured Prediction

    Conditional random fields (CRFs) and structural support vector machines (structural SVMs) are two state-of-the-art methods for structured prediction, which models the interdependencies among output variables. The success of these methods is attributed to the fact that their discriminative models can account for overlapping features over the whole input observation. These features are usually generated by applying a given set of templates to labeled data, but improper templates may degrade performance. To alleviate this issue, we propose a novel multiple-template learning paradigm that learns the structured predictor and the importance of each template simultaneously, so that hundreds of arbitrary templates can be added to the model without careful manual selection. This paradigm can be formulated as a special multiple kernel learning (MKL) problem with an exponential number of constraints. We then introduce an efficient cutting-plane algorithm that solves this problem in the primal, and we prove its convergence. We evaluate the proposed learning paradigm on two widely studied structured prediction tasks, i.e., sequence labeling and dependency parsing. Extensive experimental results show that the proposed method outperforms CRFs and structural SVMs because it exploits the importance of each template. Our complexity analysis and empirical results also show that the proposed method is more efficient than OnlineMKL on very sparse and high-dimensional data. We further extend this paradigm to structured prediction using generalized p-block norm regularization with p > 1, and experiments show competitive performance when p ∈ [1, 2).
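
    The template mechanism the abstract refers to can be made concrete with a small sketch. Below is a minimal, hypothetical illustration of CRF-style feature templates for sequence labeling; the template names and helper function are illustrative, not taken from the paper.

```python
# Hypothetical sketch of CRF-style feature templates for sequence labeling.
# Template names and helpers are illustrative, not from the paper.

def apply_templates(tokens, i, templates):
    """Generate sparse features for position i by instantiating each template."""
    feats = []
    for name, offsets in templates:
        parts = []
        for off in offsets:
            j = i + off
            parts.append(tokens[j] if 0 <= j < len(tokens) else "<PAD>")
        feats.append(f"{name}={'|'.join(parts)}")
    return feats

# Each template is (name, word offsets relative to the current position).
TEMPLATES = [
    ("w0", (0,)),         # current word
    ("w-1", (-1,)),       # previous word
    ("w-1,w0", (-1, 0)),  # bigram ending at the current word
]

print(apply_templates(["The", "dog", "barks"], 1, TEMPLATES))
# ['w0=dog', 'w-1=The', 'w-1,w0=The|dog']
```

    Each instantiated string becomes one sparse binary feature; the paper's paradigm learns a weight for each whole template, so uninformative templates can be down-weighted automatically.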

    Ensemble Parsing and its Effect on Machine Translation

    Much of the focus in dependency parsing is on creating new modeling techniques and examining new feature sets for existing dependency models. Often these new models barely match current state-of-the-art results, and frequently they perform worse. These approaches target languages that are typically resource-rich, with ample training data available for dependency parsing; for this reason, accuracy scores are often quite high, which by its very nature makes it difficult to achieve a significantly large increase over the current state of the art. Research in this area is therefore often concerned with small accuracy changes or very specific localized changes, such as increasing accuracy on a particular linguistic construction. With so many modeling techniques available for languages with large resources, the question is how to exploit the current techniques, together with this plethora of data, through combination (ensemble) techniques, as sketched below. Dependency parsers are almost ubiquitously evaluated on their accuracy scores, but these scores say nothing of the complexity and usefulness of the resulting structures. The structures may be more complex because of the depth of their coordination or noun phrases. Since dependency parses are basic structures upon which other systems are built, it would seem more reasonable to judge these parsers further down the NLP pipeline. Which types of parsing errors cause significant problems in other NLP applications is currently unknown.
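
    The combination techniques the abstract alludes to can take many forms. The following is a minimal sketch of one common scheme, per-token voting over head attachments; it is an assumption for illustration, not necessarily the ensemble method studied in the thesis.

```python
from collections import Counter

# Minimal sketch of parser combination by per-token head voting; the actual
# ensemble method in the thesis may differ. Each parse is a list of head
# indices, where parse[i] = index of token i's head (0 = root).

def vote_heads(parses):
    """Pick, for every token, the head proposed by the most parsers."""
    n = len(parses[0])
    combined = []
    for i in range(n):
        votes = Counter(parse[i] for parse in parses)
        combined.append(votes.most_common(1)[0][0])
    return combined  # note: majority voting alone may not yield a valid tree

# Three parsers' head predictions for a 4-token sentence:
print(vote_heads([[2, 0, 2, 3], [2, 0, 2, 2], [3, 0, 2, 3]]))
# [2, 0, 2, 3]
```

    Because independent per-token votes can produce cycles or disconnected structures, practical ensembles often re-parse the vote-weighted graph with a maximum-spanning-tree step to guarantee a well-formed tree.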

    Robust Multilingual Part-of-Speech Tagging via Adversarial Training

    Adversarial training (AT) is a powerful regularization method for neural networks that aims to achieve robustness to input perturbations. Yet the specific effects of the robustness obtained from AT are still unclear in the context of natural language processing. In this paper, we propose and analyze a neural POS tagging model that exploits AT. In our experiments on the Penn Treebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages), we find that AT not only improves overall tagging accuracy, but also 1) effectively prevents over-fitting in low-resource languages and 2) boosts tagging accuracy for rare and unseen words. We also demonstrate that 3) the tagging improvements from AT carry over to the downstream task of dependency parsing, 4) AT helps the model learn cleaner word representations, and 5) the proposed AT model is generally effective across different sequence labeling tasks. These positive results motivate further use of AT for natural language tasks.
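
    For readers unfamiliar with AT in NLP, here is a minimal sketch of adversarial training on word embeddings with FGSM-style perturbations, in the spirit of Miyato et al.; the model interface and the epsilon value are assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

# Sketch of adversarial training on word embeddings: perturb the embeddings
# in the direction that most increases the loss, then train on the sum of the
# clean and adversarial losses. Model interface and epsilon are assumptions.

def adversarial_loss(model, embeds, labels, epsilon=0.02):
    """Return clean loss plus loss under a worst-case embedding perturbation."""
    embeds = embeds.detach().requires_grad_(True)
    loss = F.cross_entropy(model(embeds), labels)
    grad, = torch.autograd.grad(loss, embeds)
    # Normalized perturbation of size epsilon along the loss gradient.
    perturb = epsilon * grad / (grad.norm() + 1e-12)
    adv_loss = F.cross_entropy(model(embeds + perturb.detach()), labels)
    return loss + adv_loss
```

    Because the perturbation lives in embedding space rather than in the discrete token space, this form of AT acts as a smoothness regularizer on the learned representations, which is consistent with the paper's observation of cleaner word representations.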

    Viability of Sequence Labeling Encodings for Dependency Parsing

    This thesis presents new methods for recasting dependency parsing as a sequence labeling task, yielding a viable alternative to the traditional transition- and graph-based approaches. Sequence labeling parsers are shown to provide several advantages for dependency parsing: (i) a good trade-off between accuracy and parsing speed, (ii) genericity, which enables running a parser in generic sequence labeling software, and (iii) pluggability, which allows using full parse trees as features in downstream tasks. The backbone of dependency parsing as sequence labeling is formed by the encodings, which serve as linearization methods for mapping dependency trees into discrete labels such that each token in a sentence is associated with a label. We introduce three encoding families: (i) head-selection, (ii) bracketing-based, and (iii) transition-based encodings, which differ in how they represent a dependency tree as a sequence of labels (a minimal example follows this abstract). We empirically examine the viability of the encodings and provide an analysis of their facets. Furthermore, we explore the feasibility of leveraging external complementary data to enhance parsing performance. Our sequence labeling parser is endowed with two kinds of representations. First, we exploit the complementary nature of the dependency and constituency parsing paradigms and enrich the parser with representations from both syntactic abstractions. Second, we use human language processing data to guide our parser with representations derived from eye movements. Overall, the results show that recasting dependency parsing as sequence labeling is a viable approach that is fast and accurate, and that it provides a practical alternative for integrating syntax into NLP tasks. This work has been carried out thanks to funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150).
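
    To make the idea of encodings concrete, the sketch below shows the simplest member of the head-selection family, labeling each token with the relative offset of its head; the thesis's actual encodings (e.g. PoS-based offsets or bracket labels) are richer, so treat this as illustrative only.

```python
# Simplest head-selection encoding: one discrete label per token, equal to
# the relative offset of its head. A sketch, not the thesis's exact scheme.

def encode(heads):
    """heads[i] = 1-based index of token i's head (0 = artificial root)."""
    return [heads[i] - (i + 1) for i in range(len(heads))]

def decode(labels):
    """Invert the encoding back to head indices."""
    return [label + i + 1 for i, label in enumerate(labels)]

heads = [2, 3, 0]            # "The" -> "dog", "dog" -> "barks", "barks" -> root
labels = encode(heads)       # [1, 1, -3]: one label per token
assert decode(labels) == heads
```

    With trees linearized this way, any off-the-shelf sequence labeling model can be trained as a dependency parser, which is what makes the approach generic and pluggable.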

    Neural approaches to sequence labeling for information extraction

    An important aspect of artificial intelligence (AI) is the interpretation of human language expressed in textual (written) form: natural language processing (NLP) is important because textual information is useful for many applications. Yet understanding it (so-called natural language understanding, NLU) is challenging, given the unstructured form of text, whose meaning is often ambiguous and context-dependent. In this dissertation we introduce solutions to shortcomings of related work in handling fundamental tasks in natural language processing, such as named entity recognition (i.e. identifying the entities occurring in a sentence) and relation extraction (identifying relations between entities). Starting from a specific problem (namely, identifying the structure of a house from a textual listing), we incrementally build a complete (automated) solution for the aforementioned tasks, based on neural network architectures. Our solutions are generally applicable across application domains and languages. We also consider the task of identifying relevant happenings during an event (e.g. a goal during a football match) in streams of information on Twitter. More specifically, we formulate this problem as labeling word sequences (comparable to named entity recognition), where we exploit the chronological relation between consecutive tweets.
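
    As an illustration of formulating entity recognition as word-sequence labeling, the sketch below uses the common BIO tag scheme; the tag set, example sentence, and helper are assumptions for illustration, not taken from the thesis.

```python
# Illustrative BIO labeling for named entity recognition; the tag set and
# example are assumptions, not taken from the thesis.

tokens = ["Ghent", "University", "is", "in", "Belgium"]
tags   = ["B-ORG", "I-ORG",      "O",  "O",  "B-LOC"]

def extract_entities(tokens, tags):
    """Collect (entity text, type) spans from a BIO-tagged sequence."""
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

print(extract_entities(tokens, tags))
# [('Ghent University', 'ORG'), ('Belgium', 'LOC')]
```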

    Multilingual word embeddings and their utility in cross-lingual learning

    Word embeddings - dense vector representations of a word's distributional semantics - are an indispensable component of contemporary natural language processing (NLP). Bilingual embeddings in particular have attracted much attention in recent years, given their inherent applicability to cross-lingual NLP tasks such as part-of-speech tagging and dependency parsing. However, despite recent advances in bilingual embedding mapping, very little research has been dedicated to aligning embeddings multilingually, where word embeddings for a variable number of languages are mapped into a single vector space. Given a proper alignment, one potential use case for multilingual embeddings is cross-lingual transfer learning, where a machine learning model trained on resource-rich languages (e.g. Finnish and Estonian) can "transfer" its salient features to a related language for which annotated resources are scarce (e.g. North Sami). The effect of the quality of this alignment on downstream cross-lingual NLP tasks has, however, also been left largely unexplored. With this in mind, our work is motivated by two goals. First, we aim to leverage existing supervised and unsupervised methods in bilingual embedding mapping to induce high-quality multilingual embeddings. To this end, we propose three algorithms (one supervised, two unsupervised) and evaluate them against a completely supervised bilingual system and a commonly employed baseline approach. Second, we investigate the utility of multilingual embeddings in two common cross-lingual transfer learning scenarios: POS tagging and dependency parsing. To do so, we train a joint POS tagger/dependency parser on Universal Dependencies treebanks for a variety of Indo-European languages and evaluate it on other, closely related languages. Although we ultimately observe that, in most settings, multilingual word embeddings themselves do not induce a cross-lingual signal, our experimental framework and results offer many insights for future cross-lingual learning experiments.
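
    As background for embedding mapping, the sketch below shows the standard supervised baseline: solving the orthogonal Procrustes problem over a seed dictionary of translation pairs. The thesis's three proposed algorithms are not reproduced here, and the random arrays stand in for real embeddings.

```python
import numpy as np

# Standard supervised baseline for bilingual embedding mapping: find the
# orthogonal matrix W minimizing ||XW - Y||_F over a seed dictionary of
# paired embeddings. Closed-form solution via SVD (orthogonal Procrustes).

def procrustes(X, Y):
    """Return orthogonal W minimizing ||XW - Y||_F for paired rows of X, Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 300))   # source-language embeddings (seed pairs)
tgt = rng.normal(size=(1000, 300))   # target-language embeddings (seed pairs)
W = procrustes(src, tgt)             # map the source space into the target space
aligned = src @ W
```

    Restricting W to be orthogonal preserves distances and angles in the source space, which is why this baseline tends to be robust even with small seed dictionaries; multilingual alignment generalizes the idea by mapping several languages into one shared space.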