
    SemEval 2017 Task 10: ScienceIE - Extracting Keyphrases and Relations from Scientific Publications

    We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding which publications describe which processes, tasks and materials. Although this was a new task, we had a total of 26 submissions across 3 evaluation scenarios. We expect the task and the findings reported in this paper to be relevant for researchers working on understanding scientific content, as well as the broader knowledge base population and information extraction communities.
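
    The task annotates typed keyphrase spans (processes, tasks, and materials) and relations between them. Purely as an illustrative sketch, the code below shows one way such annotations could be represented; the class names, offsets, and field layout are hypothetical and not taken from the task definition.

```python
from dataclasses import dataclass

# Minimal, hypothetical sketch of ScienceIE-style annotations: keyphrases are
# typed character spans (the task's types are processes, tasks, and materials),
# and relations link pairs of keyphrases. Names and labels are illustrative only.

@dataclass
class Keyphrase:
    start: int   # character offset of the first character of the span
    end: int     # character offset one past the last character
    label: str   # e.g. "Process", "Task", or "Material"
    text: str    # surface form of the span

@dataclass
class Relation:
    head: Keyphrase
    tail: Keyphrase
    label: str   # relation type between the two keyphrases (illustrative)

doc = "We anneal the alloy to reduce brittleness."
process = Keyphrase(3, 19, "Process", doc[3:19])      # "anneal the alloy"
material = Keyphrase(14, 19, "Material", doc[14:19])  # "alloy"
print(process, material, sep="\n")
```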

    Extraction of Keyphrases from Text: Evaluation of Four Algorithms

    This report presents an empirical evaluation of four algorithms for automatically extracting keywords and keyphrases from documents. The four algorithms are compared using five different collections of documents. For each document, we have a target set of keyphrases, which were generated by hand. The target keyphrases were generated for human readers; they were not tailored for any of the four keyphrase extraction algorithms. Each of the algorithms was evaluated by the degree to which the algorithm’s keyphrases matched the manually generated keyphrases. The four algorithms were (1) the AutoSummarize feature in Microsoft’s Word 97, (2) an algorithm based on Eric Brill’s part-of-speech tagger, (3) the Summarize feature in Verity’s Search 97, and (4) NRC’s Extractor algorithm. For all five document collections, NRC’s Extractor yields the best match with the manually generated keyphrases.
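
    The evaluation scores each algorithm by how well its keyphrases match the hand-generated target set. A minimal sketch of one plausible way to compute such a match (set overlap after simple normalization, reported as precision and recall) follows; the report's own matching criterion may differ, and all names here are illustrative.

```python
def normalize(phrase: str) -> str:
    """Lowercase and collapse whitespace; a stemmer could be added here."""
    return " ".join(phrase.lower().split())

def match_scores(extracted, target):
    """Precision/recall of extracted keyphrases against a hand-made target set.

    A generic illustration of 'degree of match'; the report's own criterion
    may differ (e.g. stemmed or partial matches).
    """
    ex = {normalize(p) for p in extracted}
    gold = {normalize(p) for p in target}
    hits = len(ex & gold)
    precision = hits / len(ex) if ex else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall

# Example: compare one algorithm's output to the author-supplied keyphrases.
p, r = match_scores(["neural networks", "Keyphrase Extraction"],
                    ["keyphrase extraction", "text mining"])
print(f"precision={p:.2f} recall={r:.2f}")   # precision=0.50 recall=0.50
```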

    Learning to Extract Keyphrases from Text

    Many academic journals ask their authors to provide a list of about five to fifteen key words, to appear on the first page of each article. Since these key words are often phrases of two or more words, we prefer to call them keyphrases. There is a surprisingly wide variety of tasks for which keyphrases are useful, as we discuss in this paper. Recent commercial software, such as Microsoft’s Word 97 and Verity’s Search 97, includes algorithms that automatically extract keyphrases from documents. In this paper, we approach the problem of automatically extracting keyphrases from text as a supervised learning task. We treat a document as a set of phrases, which the learning algorithm must learn to classify as positive or negative examples of keyphrases. Our first set of experiments applies the C4.5 decision tree induction algorithm to this learning task. The second set of experiments applies the GenEx algorithm to the task. We developed the GenEx algorithm specifically for this task. The third set of experiments examines the performance of GenEx on the task of metadata generation, relative to the performance of Microsoft’s Word 97. The fourth and final set of experiments investigates the performance of GenEx on the task of highlighting, relative to Verity’s Search 97. The experimental results support the claim that a specialized learning algorithm (GenEx) can generate better keyphrases than a general-purpose learning algorithm (C4.5) and the non-learning algorithms that are used in commercial software (Word 97 and Search 97).
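
    The core idea is to treat keyphrase extraction as supervised classification: each candidate phrase in a document is a positive or negative example. A minimal sketch of that framing follows, using scikit-learn's decision tree as a stand-in for C4.5 and toy features in place of the paper's actual feature set; none of this reproduces GenEx.

```python
from sklearn.tree import DecisionTreeClassifier

def phrase_features(phrase: str, document: str):
    """Toy features for a candidate phrase: frequency in the document,
    relative position of first occurrence, and length in words. These stand
    in for the richer features used by the paper's C4.5/GenEx setup."""
    doc = document.lower()
    p = phrase.lower()
    freq = doc.count(p)
    first = doc.find(p)
    rel_pos = first / max(len(doc), 1) if first >= 0 else 1.0
    return [freq, rel_pos, len(p.split())]

# Training data: candidate phrases labelled 1 if they are author keyphrases
# (labels here are made up for illustration).
doc = "Keyphrase extraction assigns keyphrases to documents. Extraction is hard."
candidates = ["keyphrase extraction", "documents", "extraction", "hard"]
labels = [1, 0, 1, 0]

X = [phrase_features(c, doc) for c in candidates]
clf = DecisionTreeClassifier().fit(X, labels)

# Classify a new candidate phrase from the same document.
print(clf.predict([phrase_features("keyphrases", doc)]))
```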

    Building Knowledge Graphs from Textual Documents for Scientific Literature Analysis

    Advisor: Julio Cesar dos Reis. M.Sc. dissertation, Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: The number of publications a researcher must absorb has been increasing over the last years. Consequently, among so many options, it is hard for researchers to identify relevant documents related to their studies. Researchers usually search for review articles to understand how a scientific field is organized and to study its state of the art, but such articles can be unavailable or outdated depending on the studied area, so researchers often have to carry out this laborious background research manually. Recent research has developed mechanisms to assist researchers in understanding the structure of scientific fields. However, those mechanisms focus on recommending relevant articles or on describing how a scientific field is organized at the level of the documents that belong to it. These methods limit the understanding of the field, not allowing researchers to study the underlying concepts and relations that compose a scientific field and its sub-areas. This M.Sc. thesis proposes a framework to structure, analyze, and track the evolution of a scientific field at the concept level. Given a set of textual documents such as research papers, it first structures a scientific field as a knowledge graph using the detected concepts as vertices. Then, it automatically identifies the field's main sub-areas, extracts their keyphrases, and studies their relations. The framework represents the scientific field in distinct time periods, allowing these representations to be compared and revealing how the field's sub-areas changed over time. We evaluate each step of the framework by representing and analyzing scientific data from distinct fields of knowledge in case studies. Our findings indicate success in detecting the sub-areas based on the graph generated from natural-language documents, and we observe similar outcomes across the different case studies, indicating that the approach is applicable to distinct domains. This research also contributes a web-based software tool that allows researchers to use the proposed framework graphically. Using the application, researchers can obtain an overview of how a scientific field is structured and how it has evolved. (M.Sc. in Computer Science. Funding: FAPESP grants 2013/08293-7 and 2017/02325-5; CAPES.)
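
    The framework first structures a field as a knowledge graph with detected concepts as vertices and then identifies its main sub-areas. A minimal sketch of that step, assuming networkx and greedy modularity community detection as a stand-in for the dissertation's own sub-area detection, with made-up concept co-occurrences:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Build a small concept graph: vertices are concepts detected in papers,
# edges link concepts that co-occur in the same document (toy data).
G = nx.Graph()
cooccurrences = [
    ("keyphrase extraction", "tf-idf"),
    ("keyphrase extraction", "sequence labelling"),
    ("knowledge graph", "entity linking"),
    ("knowledge graph", "relation extraction"),
    ("entity linking", "relation extraction"),
]
G.add_edges_from(cooccurrences)

# Detect sub-areas as communities of densely connected concepts. Greedy
# modularity is a generic stand-in; the dissertation's actual method may differ.
for i, community in enumerate(greedy_modularity_communities(G)):
    print(f"sub-area {i}: {sorted(community)}")
```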

    A Context Centric Model for building a Knowledge advantage Machine Based on Personal Ontology Patterns

    Throughout the industrial era, societal advancement could be attributed in large part to the introduction of a plethora of electromechanical machines, all of which exploited a key concept known as Mechanical Advantage. In the post-industrial era, the exploitation of knowledge is emerging as the key enabler of societal advancement. With the advent of the Internet and the Web, there is no dearth of knowledge; what is lacking is an efficient and practical mechanism for organizing knowledge and presenting it in a comprehensible form appropriate for every context. This is the fundamental problem addressed by my dissertation. We begin by proposing a novel architecture for creating a Knowledge Advantage Machine (KaM), one which enables a knowledge worker to bring to bear a larger amount of knowledge to solve a problem in a shorter time. This is analogous to an electromechanical machine that enables an industrial worker to bring to bear a large amount of power to perform a task, thus improving worker productivity. This work is based on the premise that while a universal KaM is beyond the realm of possibility, a KaM specific to a particular type of knowledge worker is realizable because of the limited scope of his or her personal ontology used to organize all relevant knowledge objects. The proposed architecture is based on a society of intelligent agents which collaboratively discover, mark up, and organize relevant knowledge objects into a semantic knowledge network on a continuing basis. This network is, in turn, exploited by another agent, the Context Agent, which determines the current context of the knowledge worker and makes the relevant portion of the semantic network available in a suitable form. In this dissertation we demonstrate the viability and extensibility of this architecture by building a prototype KaM for one type of knowledge worker, a professor.
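
    The abstract describes a semantic knowledge network maintained by agents, from which a Context Agent surfaces the portion relevant to the worker's current context. Purely as an illustrative sketch, assuming the network is a graph whose nodes carry topic labels and that "context" is a set of such labels (class and method names are hypothetical):

```python
import networkx as nx

class ContextAgent:
    """Toy stand-in for the Context Agent: given the worker's current context
    (a set of topic labels), return the neighbourhood of matching nodes in
    the semantic knowledge network. Names and logic are illustrative only."""

    def __init__(self, knowledge_network: nx.Graph):
        self.net = knowledge_network

    def relevant_subnetwork(self, context_topics):
        seeds = [n for n, data in self.net.nodes(data=True)
                 if data.get("topic") in context_topics]
        nodes = set(seeds)
        for s in seeds:
            nodes.update(self.net.neighbors(s))
        return self.net.subgraph(nodes)

# Build a tiny semantic knowledge network for one knowledge worker.
net = nx.Graph()
net.add_node("grant proposal", topic="funding")
net.add_node("NSF deadline", topic="funding")
net.add_node("lecture slides", topic="teaching")
net.add_edge("grant proposal", "NSF deadline")

agent = ContextAgent(net)
print(list(agent.relevant_subnetwork({"funding"}).nodes))
```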

    ChatGPT vs State-of-the-Art Models: A Benchmarking Study in Keyphrase Generation Task

    Transformer-based language models, including ChatGPT, have demonstrated exceptional performance in various natural language generation tasks. However, there has been limited research evaluating ChatGPT's keyphrase generation ability, which involves identifying informative phrases that accurately reflect a document's content. This study seeks to address this gap by comparing ChatGPT's keyphrase generation performance with state-of-the-art models, while also testing its potential as a solution for two significant challenges in the field: domain adaptation and keyphrase generation from long documents. We conducted experiments on six publicly available datasets from scientific articles and news domains, analyzing performance on both short and long documents. Our results show that ChatGPT outperforms current state-of-the-art models in all tested datasets and environments, generating high-quality keyphrases that adapt well to diverse domains and document lengths.
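
    The study compares model-generated keyphrases against gold keyphrases across datasets. A minimal sketch of such a comparison step follows, using exact-match F1 after lowercasing; the paper's exact protocol may differ, and the generation hook is left as a hypothetical placeholder rather than a real API call.

```python
def f1_at_exact_match(predicted, gold):
    """Exact-match F1 between predicted and gold keyphrase sets after simple
    lowercasing. The paper's own evaluation may stem phrases or use F1@k;
    this is only an illustrative comparison step."""
    pred = {p.lower().strip() for p in predicted}
    ref = {g.lower().strip() for g in gold}
    tp = len(pred & ref)
    if not pred or not ref or tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(ref)
    return 2 * precision * recall / (precision + recall)

def generate_keyphrases(document: str):
    """Hypothetical stand-in for prompting ChatGPT (or any baseline model)
    to produce a list of keyphrases for `document`."""
    raise NotImplementedError

# Usage sketch: score one document once a model hook is plugged in.
# score = f1_at_exact_match(generate_keyphrases(doc_text), gold_keyphrases)
```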

    Using Machine Learning and Graph Mining Approaches to Improve Software Requirements Quality: An Empirical Investigation

    Software development is prone to software faults due to the involvement of multiple stakeholders, especially during the fuzzy phases (requirements and design). Software inspections are commonly used in industry to detect and fix problems in requirements and design artifacts, thereby mitigating the propagation of faults to later phases where the same faults are harder to find and fix. The output of an inspection process is a list of faults present in the software requirements specification (SRS) document. The artifact author must manually read through the reviews and differentiate between true faults and false positives before fixing the faults. The first goal of this research is to automate the detection of useful vs. non-useful reviews. Next, post-inspection, the requirements author has to manually extract key problematic topics from useful reviews and map them to individual requirements in the SRS to identify fault-prone requirements. The second goal of this research is to automate this mapping by employing keyphrase extraction (KPE) algorithms and semantic analysis (SA) approaches to identify fault-prone requirements. During fault fixation, the author has to manually verify which requirements could have been impacted by a fix. The third goal of this research is to assist authors post-inspection with change impact analysis (CIA) during fault fixation, using natural language processing with semantic analysis and graph-mining solutions. The selection of skilled inspectors is also pertinent to carrying out post-inspection tasks accurately, so the fourth goal of this research is to identify skilled inspectors using various classification and feature selection approaches. The dissertation has led to the development of an automated solution that can identify useful reviews, help identify skilled inspectors, extract the most prominent topics and keyphrases from fault logs, and help the requirements author during fault fixation post-inspection.
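
    One step above maps keyphrases extracted from useful reviews onto individual requirements to flag fault-prone ones. As an illustration only, the sketch below uses TF-IDF cosine similarity as a generic stand-in for the semantic-analysis step; the threshold, data, and names are made up and do not reflect the dissertation's actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy data: keyphrases mined from inspection reviews and SRS requirements.
review_keyphrases = ["ambiguous login timeout", "missing error message"]
requirements = [
    "REQ-1: The system shall log the user out after a configurable timeout.",
    "REQ-2: The system shall display an error message on failed login.",
]

# TF-IDF cosine similarity is a generic stand-in for the semantic-analysis
# step; the dissertation's actual approach may differ.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(review_keyphrases + requirements)
sims = cosine_similarity(matrix[:len(review_keyphrases)],
                         matrix[len(review_keyphrases):])

THRESHOLD = 0.1   # illustrative cut-off for flagging a requirement
for i, phrase in enumerate(review_keyphrases):
    for j, req in enumerate(requirements):
        if sims[i, j] >= THRESHOLD:
            print(f"'{phrase}' -> {req.split(':')[0]} (sim={sims[i, j]:.2f})")
```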