
    A Diachronic Analysis of Paradigm Shifts in NLP Research: When, How, and Why?

    Understanding the fundamental concepts and trends in a scientific field is crucial for keeping abreast of its continuous advancement. In this study, we propose a systematic framework for analyzing the evolution of research topics in a scientific field using causal discovery and inference techniques. We define three variables to encompass diverse facets of the evolution of research topics within NLP and utilize a causal discovery algorithm to unveil the causal connections among these variables using observational data. Subsequently, we leverage this structure to measure the intensity of these relationships. By conducting extensive experiments on the ACL Anthology corpus, we demonstrate that our framework effectively uncovers evolutionary trends and the underlying causes for a wide range of NLP research topics. Specifically, we show that tasks and methods are primary drivers of research in NLP, with datasets following, while metrics have minimal impact. Comment: accepted at EMNLP 202
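    The abstract does not spell out how the strength of these causal relationships is measured. Purely as a loose illustration of quantifying how strongly one facet of the field drives another over time, the sketch below uses a Granger-style lagged-regression proxy rather than the paper's causal discovery algorithm; the facet names and yearly counts are invented for illustration.

```python
# Toy proxy for "does facet X drive facet Y?": the R^2 gain from adding lagged X
# when predicting Y from its own past. Not the paper's method; purely illustrative.
import numpy as np

def lagged_influence(x, y, lag=1):
    """Return the R^2 gain from adding lagged x to an autoregressive model of y."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    y_t, y_prev, x_prev = y[lag:], y[:-lag], x[:-lag]
    base = np.column_stack([np.ones_like(y_prev), y_prev])   # y's own past only
    full = np.column_stack([base, x_prev])                    # plus lagged x

    def r2(X):
        beta, *_ = np.linalg.lstsq(X, y_t, rcond=None)
        resid = y_t - X @ beta
        return 1 - resid.var() / y_t.var()

    return r2(full) - r2(base)

# Hypothetical yearly frequency counts per facet (fabricated, for illustration only).
tasks    = [12, 15, 21, 30, 41, 55, 72]
datasets = [ 3,  5,  9, 14, 22, 33, 47]
print("tasks -> datasets:", round(lagged_influence(tasks, datasets), 3))
print("datasets -> tasks:", round(lagged_influence(datasets, tasks), 3))
```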

    Improving Syntactic Parsing of Clinical Text Using Domain Knowledge

    Syntactic parsing is one of the fundamental tasks of Natural Language Processing (NLP). However, few studies have explored syntactic parsing in the medical domain. This dissertation systematically investigated different methods to improve the performance of syntactic parsing of clinical text, including: (1) constructing two clinical treebanks of discharge summaries and progress notes by developing annotation guidelines that handle missing elements in clinical sentences; (2) retraining four state-of-the-art parsers, namely the Stanford, Berkeley, Charniak, and Bikel parsers, on the clinical treebanks and comparing their performance to identify better parsing approaches; and (3) developing new methods that use semantic information to reduce syntactic ambiguity caused by Prepositional Phrase (PP) attachment and coordination. Our evaluation showed that the clinical treebanks greatly improved the performance of existing parsers; the Berkeley parser achieved the best F-1 score of 86.39% on the MiPACQ treebank. For PP attachment, our proposed methods improved attachment accuracy by 2.35% on the MiPACQ corpus and 1.77% on the i2b2 corpus. For coordination, our method achieved precisions of 94.9% and 90.3% on the MiPACQ and i2b2 corpora, respectively. To further demonstrate the effectiveness of the improved parsing approaches, we applied the outputs of our parsers to two external NLP tasks: semantic role labeling and temporal relation extraction. The experimental results showed that the performance of both tasks improved when using parse-tree information from our optimized parsers, with gains of 3.26% in F-measure for semantic role labeling and 1.5% in F-measure for temporal relation extraction.
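    The parser comparisons above are reported as bracketing F-1 scores. As a reminder of how such scores are computed, here is a minimal PARSEVAL-style sketch; the constituent spans are invented and unrelated to the clinical treebanks.

```python
# Minimal bracketing precision/recall/F-1 over labeled constituent spans.
# A constituent is a (label, start, end) tuple; trees below are hand-made examples.
def bracketing_scores(gold, pred):
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [("NP", 0, 2), ("PP", 2, 5), ("VP", 1, 5), ("S", 0, 5)]
pred = [("NP", 0, 2), ("PP", 3, 5), ("VP", 1, 5), ("S", 0, 5)]
print(bracketing_scores(gold, pred))  # -> (0.75, 0.75, 0.75)
```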

    Leveraging structure for learning representations of words, sentences and knowledge bases

    This thesis presents work on learning representations of text and Knowledge Bases by taking into consideration their respective structures. The tasks on which the methods are developed and evaluated are: Short-text classification, Word Sense Induction and Disambiguation, Knowledge Base Completion with linked text corpora, and large-scale Knowledge Base Question Answering. An introductory chapter states the aims and scope of the thesis, followed by a chapter on technical background and definitions. In chapter 3, the impact of dependency syntax on word representation learning in the context of short-text classification is investigated. A new definition of context in dependency graphs is proposed, which generalizes and extends previous definitions used in word representation learning. The resulting word and dependency feature embeddings are used together to represent dependency graph substructures in text classifiers. In chapter 4, a probabilistic latent variable model for Word Sense Induction and Disambiguation is presented. The model estimates sense clusters using pretrained continuous feature vectors of multiple context types (syntactic, local lexical and global lexical), while the number of sense clusters is determined by the Integrated Complete Likelihood criterion. A model for Knowledge Base Completion with linked text corpora is presented in chapter 5. The proposed model represents potential facts by merging subgraphs of the knowledge base with text through linked entities. The model learns to embed the merged graphs into a lower-dimensional space and to score the plausibility of the fact with a Multilayer Perceptron. Chapter 6 presents a system for Question Answering on Knowledge Bases. The system learns to decompose questions into entity and relation mentions and to score their compatibility with queries on the knowledge base expressed as subgraphs. The model consists of several components trained jointly in order to match parts of the question with parts of a potential query by embedding their corresponding structures in lower-dimensional spaces.
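    For the dependency-based contexts of chapter 3, the sketch below shows one common way of reading (word, context) pairs off dependency arcs; the example parse and the exact context encoding are illustrative assumptions, not the thesis's own definition.

```python
# Extract dependency-based (word, context) pairs from a hand-written parse.
# Each arc contributes a context to its head and an inverse-marked context to its dependent.
from collections import defaultdict

tokens = ["ROOT", "the", "scientist", "examined", "the", "evidence"]
# (head_index, dependent_index, relation); index 0 is a dummy ROOT node.
arcs = [(3, 2, "nsubj"), (2, 1, "det"), (0, 3, "root"), (3, 5, "obj"), (5, 4, "det")]

pairs = defaultdict(list)
for head, dep, rel in arcs:
    if head == 0:  # skip the artificial root arc
        continue
    pairs[tokens[head]].append(f"{tokens[dep]}/{rel}")
    pairs[tokens[dep]].append(f"{tokens[head]}/{rel}^-1")

for word, ctxs in pairs.items():
    print(word, ctxs)
# e.g. examined -> ['scientist/nsubj', 'evidence/obj']
```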

    Image summarisation: human action description from static images

    Master's dissertation, Natural Language Processing and Language Industries, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2014. The object of this master's thesis is Image Summarisation and, more specifically, the automatic description of human actions from static images. The work has been organised into three main phases: data collection, system implementation, and system evaluation. The dataset consists of 1287 images depicting human activities belonging to four semantic categories: "walking a dog", "riding a bike", "riding a horse" and "playing the guitar". The images were manually annotated with an approach based on the idea of crowd-sourcing, and the annotation of each image takes the form of one or two simple sentences. The system is composed of two parts, a Content-based Image Retrieval part and a Natural Language Processing part. Given a query image, the first part retrieves a set of images perceived as visually similar, and the second part processes the annotations accompanying each of those images in order to extract common information, using a technique that merges the dependency graphs of the annotated sentences. An optimal path consisting of a subject-verb-complement relation is extracted and transformed into a proper sentence by applying a set of surface processing rules. The evaluation of the system was carried out in three different ways. Firstly, the Content-based Image Retrieval sub-system was evaluated in terms of precision and recall and compared to a baseline classification system based on randomness. To evaluate the Natural Language Processing sub-system, the Image Summarisation task was treated as a machine translation task and was therefore evaluated in terms of BLEU score: given images that correspond to the same semantic category as the query image, the system output was compared to the corresponding reference summaries provided during the annotation phase. Finally, the whole system was qualitatively evaluated by means of a questionnaire. The conclusion reached by the evaluation is that even if the system does not always capture the correct human action and the subjects and objects involved in it, it produces summaries that are understandable and efficient in terms of language. (Erasmus Mundus)
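    As a small illustration of the BLEU-based evaluation described above, the sketch below scores a generated description against reference annotations using NLTK's sentence-level BLEU; the sentences are invented examples rather than dataset annotations.

```python
# Score one generated description against multiple reference sentences with BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [
    "a man is walking a dog in the park".split(),
    "a person walks his dog".split(),
]
hypothesis = "a man walks a dog".split()

smooth = SmoothingFunction().method1  # avoids zero scores on very short sentences
score = sentence_bleu(references, hypothesis, weights=(0.5, 0.5),
                      smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```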

    On the Promotion of the Social Web Intelligence

    Given the ever-growing information generated through various online social outlets, analytical research on social media has intensified in the past few years across all walks of life. In particular, work on social Web intelligence fosters and benefits from the wisdom of the crowds and attempts to derive actionable information from such data. In the form of collective intelligence, crowds gather together and contribute to solving problems that may be difficult or impossible to solve by individuals and single computers. In addition, the consumer insight revealed from social footprints can be leveraged to build powerful business intelligence tools, enabling efficient and effective decision-making processes. This dissertation is broadly concerned with the intelligence that can emerge from social Web platforms. In particular, the two phenomena of social privacy and online persuasion are identified as the two pillars of social Web intelligence, the study of which is essential to the promotion and advancement of both collective and business intelligence. The first part of the dissertation is focused on the phenomenon of social privacy. This work is mainly motivated by the privacy dichotomy problem: users often face difficulties specifying privacy policies that are consistent with their actual privacy concerns and attitudes. As such, before making use of social data, it is imperative to employ multiple safeguards beyond the current privacy settings of users. As a possible solution, we utilize user social footprints to detect their privacy preferences automatically. An unsupervised collaborative filtering approach is proposed to characterize the attributes of publicly available accounts that are intended to be private. Unlike the majority of earlier studies, a variety of social data types is taken into account, including the social context, the published content, as well as the profile attributes of users. Our approach can provide support in making an informed decision on whether to exploit one's publicly available data to draw intelligence. With the aim of gaining insight into the strategies behind online persuasion, the second part of the dissertation studies written comments in online deliberations. Specifically, we explore different dimensions of the language, the temporal aspects of the communication, as well as the attributes of the participating users to understand what makes people change their beliefs. In addition, we investigate the factors that users perceive to be the reasons behind persuasion. We link our findings to traditional persuasion research, hoping to uncover when and how they apply to online persuasion. A set of rhetorical relations is known to be of importance in persuasive discourse. We further study the automatic identification and disambiguation of such rhetorical relations, aiming to take a step closer towards automatic analysis of online persuasion. Finally, a small proof-of-concept tool is presented, showing the value of our persuasion and rhetoric studies.
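    Purely as a toy illustration of inferring privacy intent from account attributes, the sketch below uses a simple nearest-neighbour comparison over feature vectors; it is not the unsupervised collaborative filtering model proposed in the dissertation, and all features, accounts, and labels are invented.

```python
# Toy nearest-neighbour sketch: guess whether a publicly visible account was
# probably meant to be private by comparing it to accounts with known intent.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rows: accounts; columns: hypothetical features from profile, content, and context.
known = np.array([[0.9, 0.1, 0.8],   # account known to prefer privacy
                  [0.2, 0.9, 0.1],   # account comfortable being public
                  [0.8, 0.2, 0.7]])  # prefers privacy
labels = ["private", "public", "private"]

target = np.array([0.85, 0.15, 0.75])  # public account of unknown intent
sims = [cosine(target, row) for row in known]
print("inferred intent:", labels[int(np.argmax(sims))])
```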

    Realising context-oriented information filtering.

    The notion of information overload is an increasing factor in modern information service environments where information is ‘pushed’ to the user. As increasing volumes of information are presented to computing users in the form of email, web sites, instant messaging and news feeds, there is a growing need to filter and prioritise the importance of this information. ‘Information management’ needs to be undertaken in a manner that not only prioritises the information we do need, but also disposes of information that is sent to us but is of no (or little) use. The development of a model to aid information filtering in a context-aware way is an objective of this thesis. A key concern in the conceptualisation of a single concept is understanding the context under which that concept exists (or can exist); an example of a concept is a concrete object, for instance a book. This contextual understanding should provide us with clear conceptual identification of a concept, including implicit situational information and detail of surrounding concepts. Existing solutions to filtering information suffer from their own unique flaws: text-based filtering suffers from problems of inaccuracy; ontology-based solutions suffer from scalability challenges; taxonomies suffer from problems with collaboration. A major objective of this thesis is to explore the use of an evolving, community-maintained knowledge-base (that of Wikipedia) in order to populate the context model with prioritised concepts that are semantically relevant to the user’s interest space. Wikipedia can be classified as a weak knowledge-base due to its simple TBox schema and implicit predicates; therefore, part of this objective is to validate the claim that a weak knowledge-base is fit for this purpose. The proposed and developed solution therefore provides the benefits of high-recall filtering with low fallout and a dependency on a scalable and collaborative knowledge-base. A simple web feed aggregator, which we call DAVe’s Rss Organisation System (DAVROS-2), has been built using the Java programming language as a testbed environment to demonstrate the specific tests used within this investigation. The motivation behind the experiments is to demonstrate that the concept framework instantiated through Wikipedia can aid in concept comparison, and can therefore be used in a news filtering scenario as an example of information overload. In order to evaluate the effectiveness of the method, well-understood measures of information retrieval are used. This thesis demonstrates that the developed contextual concept expansion framework (instantiated using Wikipedia) improved the quality of concept filtering over a baseline based on string matching. This has been demonstrated through the analysis of recall and fallout measures.
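    Recall and fallout, the retrieval measures used in this evaluation, are straightforward to compute; a minimal sketch with invented item identifiers follows.

```python
# Recall: fraction of relevant items the filter let through.
# Fallout: fraction of non-relevant items the filter also let through.
def recall_and_fallout(retrieved, relevant, collection):
    retrieved, relevant = set(retrieved), set(relevant)
    non_relevant = set(collection) - relevant
    recall = len(retrieved & relevant) / len(relevant) if relevant else 0.0
    fallout = len(retrieved & non_relevant) / len(non_relevant) if non_relevant else 0.0
    return recall, fallout

collection = [f"item{i}" for i in range(10)]
relevant  = ["item0", "item1", "item2", "item3"]
retrieved = ["item0", "item1", "item5"]          # what the filter let through
print(recall_and_fallout(retrieved, relevant, collection))  # -> (0.5, 0.1666...)
```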