
    Identification of Informativeness in Text using Natural Language Stylometry

    In this age of information overload, one experiences a rapidly growing over-abundance of written text. To assist with handling this bounty, this plethora of texts is now widely used to develop and optimize statistical natural language processing (NLP) systems. Surprisingly, the use of more fragments of text to train these statistical NLP systems does not necessarily lead to improved performance. We hypothesize that the fragments that help most with training are those that contain the desired information. Therefore, determining informativeness in text has become a central issue in our view of NLP. Recent developments in this field have spawned a number of solutions to identify informativeness in text. Nevertheless, a shortfall of most of these solutions is their dependency on the genre and domain of the text. In addition, most of them are not effective across natural language processing problem areas. Therefore, we attempt to provide a more general solution to this NLP problem. This thesis takes a different approach to this problem by considering the underlying theme of a linguistic theory known as the Code Quantity Principle. This theory suggests that humans codify information in text so that readers can retrieve this information more efficiently. During the codification process, humans usually change elements of their writing ranging from characters to sentences. Examples of such elements are the use of simple words, complex words, function words, content words, syllables, and so on. The theory suggests that these elements have reasonable discriminating strength and can play a key role in distinguishing informativeness in natural language text. In another vein, Stylometry is a modern method to analyze literary style and deals largely with the aforementioned elements of writing. With this as background, we model text using a set of stylometric attributes to characterize the variations in writing style present in it, and we explore their effectiveness in determining informativeness in text. To the best of our knowledge, this is the first use of stylometric attributes to determine informativeness in statistical NLP. In doing so, we use texts of different genres, viz., scientific papers, technical reports, emails and newspaper articles, selected from assorted domains such as agriculture, physics, and biomedical science. The variety of NLP systems that have benefitted from incorporating these stylometric attributes somewhere in their computational realm when dealing with this set of multifarious texts suggests that these attributes can be regarded as an effective solution to identify informativeness in text. In addition to the variety of text genres and domains, the potential of stylometric attributes is also explored in some NLP application areas---including biomedical relation mining, automatic keyphrase indexing, spam classification, and text summarization---where performance improvement is both important and challenging. The success of the attributes in all these areas further highlights their usefulness.
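
The Code Quantity Principle motivates measuring exactly these surface elements of writing. As a rough illustration of the kind of stylometric attributes involved (function-word ratio, complex-word ratio, syllable counts), here is a minimal Python sketch; the attribute set, the function-word list and the syllable heuristic are illustrative assumptions, not the thesis's actual feature inventory.

```python
# Minimal sketch of sentence-level stylometric attributes of the kind the
# thesis describes (function vs. content words, word length, syllable counts).
# The attribute set, word list and thresholds are illustrative assumptions,
# not the author's published feature list.
import re

FUNCTION_WORDS = {
    "the", "a", "an", "of", "in", "on", "and", "or", "to", "is", "are",
    "was", "were", "it", "that", "this", "with", "for", "as", "by",
}

def count_syllables(word: str) -> int:
    """Rough vowel-group heuristic for English syllable counting."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def stylometric_features(sentence: str) -> dict:
    tokens = re.findall(r"[A-Za-z]+", sentence)
    if not tokens:
        return {}
    function = [t for t in tokens if t.lower() in FUNCTION_WORDS]
    syllables = [count_syllables(t) for t in tokens]
    complex_words = [t for t, s in zip(tokens, syllables) if s >= 3]
    return {
        "n_tokens": len(tokens),
        "function_word_ratio": len(function) / len(tokens),
        "content_word_ratio": 1 - len(function) / len(tokens),
        "complex_word_ratio": len(complex_words) / len(tokens),
        "mean_word_length": sum(len(t) for t in tokens) / len(tokens),
        "mean_syllables_per_word": sum(syllables) / len(tokens),
    }

if __name__ == "__main__":
    print(stylometric_features(
        "The Code Quantity Principle suggests that informative content "
        "is codified with longer, more complex words."))
```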

    Techniques for text classification: Literature review and current trends

    Automated classification of text into predefined categories has long been considered a vital method for managing and processing the vast number of documents in digital form that are widespread and continuously increasing. This web-based information, popularly known as digital or electronic information, takes the form of documents, conference material, publications, journals, editorials, web pages, e-mail, etc. People largely access information from these online sources rather than being limited to traditional paper sources such as books, magazines and newspapers. The main problem is that this enormous body of information lacks organization, which makes it difficult to manage. Text classification is recognized as one of the key techniques for organizing such digital data. In this paper we study the existing work in the area of text classification, which allows a fair evaluation of the progress made in this field to date. We have investigated the papers to the best of our knowledge and have tried to summarize all existing information in a comprehensive and succinct manner. The studies are summarized in tabular form by publication year, considering numerous key perspectives. The main emphasis is on the various steps involved in the text classification process, viz. document representation methods, feature selection methods, data mining methods, and the evaluation technique used by each study to report results on a particular dataset.
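
The steps the review emphasizes (document representation, feature selection, a data mining method, evaluation) form a standard pipeline. A minimal scikit-learn sketch of that pipeline follows; the toy corpus, the TF-IDF/chi-squared/linear-SVM choices and the parameter values are placeholders for illustration, not recommendations drawn from the survey.

```python
# A minimal sketch of the canonical text-classification pipeline the review surveys:
# document representation (TF-IDF), feature selection (chi-squared), a data mining /
# learning method (linear SVM), and prediction. Corpus, labels and parameters are
# toy placeholders; the survey compares many alternatives for each step.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC

docs = [
    "stock markets rallied after the quarterly earnings report",
    "the striker scored twice in the final match",
    "the central bank raised interest rates again",
    "the home team clinched the championship title",
]
labels = ["finance", "sport", "finance", "sport"]

pipeline = Pipeline([
    ("represent", TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True)),  # document representation
    ("select", SelectKBest(chi2, k=20)),                                    # feature selection
    ("classify", LinearSVC()),                                              # learning algorithm
])

pipeline.fit(docs, labels)
print(pipeline.predict([
    "interest rates and the bond market",
    "a late goal decided the cup final",
]))
```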

    Biomedical semantic question and answering system

    Master's thesis, Informatics, Universidade de Lisboa, Faculdade de Ciências, 2017. Question Answering systems are excellent tools for obtaining simple answers, in several formats, in an equally simple way; they are of great use in the area of Information Retrieval, for answering questions from online communities, and also for research or information-mining purposes. The healthcare field has benefited greatly from these advances, aided by the progress of technology and of the tools derived from it, which can be applied in this field and have led to its steady computerization. These systems have great potential, since they access large sets of structured and unstructured data, such as the Web or large information repositories derived from it, in order to obtain their answers, and, in the case of community question answering, online question-and-answer forums organized in threads by topic. Unstructured data pose a greater challenge, although structured data to some extent limit the range of transformations that can be applied to them. Making such data sets publicly available in digital form gives greater freedom to the public, and more specifically to researchers in the fields those data concern, allowing easy sharing among the various interested parties. In general, such systems are not available for public reuse: they are confined to research settings, built to prove the concept of specific algorithms, hard for a wider audience to reuse, or hard to maintain, since they can quickly become outdated, especially with respect to the technologies they use, which may lose support. The goal of this thesis is to develop a system that addresses some of these shortcomings by promoting modularity, a balance between implementation effort and ease of use, good sub-module performance, and as few prerequisites as possible, with the end result being a baseline QA system adapted to a knowledge domain. Such a system is built from individually proven subsystems. This thesis describes several types of systems, such as information-mining and knowledge-based ones, focusing on two specific systems in this area, YodaQA and OAQA. It also presents several useful tools employed by many of these systems, which rely on Text Classification techniques ranging from natural language processing, tokenization and part-of-speech tagging to machine learning with supervised and unsupervised algorithms, textual similarity (pattern matching) and semantic similarity. In general, these techniques make it possible to obtain additional information about supplied passages of text. Several tools that apply the described techniques are also covered, including annotation tools, semantic-similarity tools, and tools for organizing, ranking and searching large amounts of information in a scalable way, all of which are useful in this type of application. Some of the main data sets are also described. 
The framework developed resulted in two systems with a modular pipeline architecture, composed of distinct modules according to the task at hand. Each module has well-defined input parameters and outputs. The first system took as input a set of threads of questions with answers given as comments, and returned each set of ten comments on a question ranked, each with a score reflecting how useful that comment was as an answer. This system was named MoRS and served as the modular proof of concept for the final system to be developed. The second system takes as input assorted biomedical questions, restricted to four question types, and returns the corresponding answers together with the metadata used in analysing each question. Several variants of this system were built, in order to assess whether the development choices were sound, always using the same framework (MoQA) and culminating in the system named MoQABio. The main modules that make up these systems include, in order of use, a module for entity recognition (including biomedical entities), using one of the tools surveyed in the related-work chapter. There is also a module named Combiner, in which each document retrieved from the output of the previous module is assigned the results of several metrics; in the next module these serve to train machine learning algorithms and produce a recognition model based on these cases. Once this model is trained, a classifier of good and bad articles can be used. The models were mostly generated with a Support Vector Machine, with a Multi-layer Perceptron available as an alternative. Metadata are then extracted from the approved articles in order to build the rest of the answer, which includes the concepts, the document references, and the main sentences of those documents. In the final system's Combiner module, the evaluations range from the aforementioned pattern matching, with measures such as the number of entities shared between the question and the article, to semantic similarity using metrics provided by the authors of the Sematch library, including similarity between DBpedia concepts and entities as well as standard semantic-similarity measures such as Resnik or Wu-Palmer. Other metrics include the article length, a similarity measure between two sentences, and the article's time in milliseconds. Although two systems were developed, it is the variants built on MoQA that require, as prerequisites, data sets from several sources, among them the question training and test files and the PubMed repository, which holds countless biomedical scientific articles from which all the information used in the answers is drawn. Besides these local sources there is OPENphacts, an external source that provides information about the various biomedical expressions detected in the first module. Once the systems descended from MoQA are ready, users can interact with them through a web application: after entering the type of answer they want and the question to be answered, the question is run through the system and the answer, along with its metadata, is returned to the web application. By inspecting the metadata, the original information can be accessed. 
WS4A, developed by the ULisboa team, took part in BioASQ 2016; MoRS, developed by the author, took part in SemEval 2017 Task 3; and finally MoQA, by the same author as MoRS, had its performance evaluated on the same data and metrics as WS4A. While BioASQ assessed the performance of a Question Answering system in the biomedical domain, SemEval assessed a system for ranking comments with respect to a given question, with the submitted systems officially evaluated using measures such as precision, recall and F-measure. To compare the impact of the features and tools used in each of the machine learning models that were built, the models were compared with one another, as was the percentage improvement between the systems developed over time. Besides the official evaluations, there were also local evaluations that allowed the progression of the systems over time to be explored further, including the three systems built on MoQA. This work presents a system that, despite using state-of-the-art techniques with some adaptations, achieved a relevant performance improvement over its predecessor and results comparable to the best of the competition year whose data it used, and therefore has great potential to reach better results. Some of its contributions date back to February 2016 with WS4A [86], which participated in BioASQ 2016, followed by MoRS [85], which in turn participated in SemEval 2017, and ending with MoQA, greatly improved and publicly available at https://github.com/lasigeBioTM/MoQA. As future work, suggestions include improving the robustness of the system, further exploiting the metadata to better direct the search for answers, adding and exploring new features in the model, and constantly renewing the tools used. The incorporation of new metrics provided by Sematch and better formulation of the queries made to the system are also measures to keep in mind, given that performance must be weighed against the time taken to answer a question. Question Answering systems have been of great use and interest in our times. They are great tools for acquiring simple answers in a simple way, being of great utility in the area of information retrieval and in community question answering. Such systems have great potential, since they access large sets of data, for example from the Web, to acquire their answers, and, in the case of community question answering, forums. Such systems are often not available for public reuse because they are limited to research purposes or are proof-of-concept systems for specific algorithms, with researchers re-implementing the same or very similar modules over and over again, thus not providing the wider public with a tool that could serve their purposes. When such systems are made available, they are cumbersome to install or configure, which involves reading the documentation and depends on the researchers' programming ability. In this thesis, the two best systems available in these circumstances, YodaQA and OAQA, are described. A description of the main modules is given, along with some sub-problems and hypothetical solutions. Several other systems and algorithms (e.g. for learning and ranking) are also described. 
This work presents a modular system, MoQA (available at https://github.com/lasigeBioTM/MoQA), that addresses some of these problems by providing a framework that comes with a baseline QA system for general-purpose local inquiry, but which is highly modular, built with individually proven subsystems and using known tools such as Sematch, machine learning algorithms and Stanford Named Entity Recognition. It is a descendant of WS4A [86] and MoRS [85], which took part in BioASQ 2016 (with recognition) and SemEval 2017, respectively. Its purpose is to reach performance as high as possible while keeping the prerequisites minimal and preserving the ability to edit and swap modules according to users' wishes and research purposes, while providing an easy platform through which the end user may use the framework. MoQA had three variants, which were compared with each other; MoQABio achieved the best results among them by using different tools from the other systems and focusing on biomedical domain knowledge.
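
As a rough illustration of the Combiner stage described above, the sketch below builds a small feature vector per question-article pair (entity overlap, article length, a lexical similarity score) and trains an SVM to separate useful from non-useful articles. The feature set, the toy data, and the cosine-similarity stand-in for Sematch's semantic measures are assumptions for illustration, not MoQA's actual implementation.

```python
# Hedged sketch of the "Combiner" idea: each candidate article is turned into a
# small feature vector (entity overlap with the question, article length, a
# text-similarity score) and an SVM separates useful from non-useful articles.
# Feature names, measures and toy data are illustrative assumptions; MoQA also
# uses Sematch-based semantic similarity, which is omitted here.
from dataclasses import dataclass
from sklearn.svm import SVC
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


@dataclass
class Candidate:
    question: str
    article: str
    question_entities: set
    article_entities: set


def combiner_features(c: Candidate, vectorizer: TfidfVectorizer) -> list:
    vecs = vectorizer.transform([c.question, c.article])
    return [
        len(c.question_entities & c.article_entities),    # entity overlap
        len(c.article.split()),                            # article length
        float(cosine_similarity(vecs[0], vecs[1])[0, 0]),  # lexical similarity
    ]


# Toy training pairs: 1 = useful article, 0 = not useful.
train = [
    (Candidate("What does aspirin inhibit?",
               "Aspirin irreversibly inhibits cyclooxygenase enzymes.",
               {"aspirin"}, {"aspirin", "cyclooxygenase"}), 1),
    (Candidate("What does aspirin inhibit?",
               "The weather in Lisbon was sunny throughout the conference.",
               {"aspirin"}, {"lisbon"}), 0),
]

vectorizer = TfidfVectorizer().fit(
    [c.question for c, _ in train] + [c.article for c, _ in train])
X = [combiner_features(c, vectorizer) for c, _ in train]
y = [label for _, label in train]

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict(X))
```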

    Structuring the Unstructured: Unlocking pharmacokinetic data from journals with Natural Language Processing

    The development of a new drug is an increasingly expensive and inefficient process. Many drug candidates are discarded due to pharmacokinetic (PK) complications detected at clinical phases. It is critical to accurately estimate the PK parameters of new drugs before being tested in humans since they will determine their efficacy and safety outcomes. Preclinical predictions of PK parameters are largely based on prior knowledge from other compounds, but much of this potentially valuable data is currently locked in the format of scientific papers. With an ever-increasing amount of scientific literature, automated systems are essential to exploit this resource efficiently. Developing text mining systems that can structure PK literature is critical to improving the drug development pipeline. This thesis studied the development and application of text mining resources to accelerate the curation of PK databases. Specifically, the development of novel corpora and suitable natural language processing architectures in the PK domain were addressed. The work presented focused on machine learning approaches that can model the high diversity of PK studies, parameter mentions, numerical measurements, units, and contextual information reported across the literature. Additionally, architectures and training approaches that could efficiently deal with the scarcity of annotated examples were explored. The chapters of this thesis tackle the development of suitable models and corpora to (1) retrieve PK documents, (2) recognise PK parameter mentions, (3) link PK entities to a knowledge base and (4) extract relations between parameter mentions, estimated measurements, units and other contextual information. Finally, the last chapter of this thesis studied the feasibility of the whole extraction pipeline to accelerate tasks in drug development research. The results from this thesis exhibited the potential of text mining approaches to automatically generate PK databases that can aid researchers in the field and ultimately accelerate the drug development pipeline. Additionally, the thesis presented contributions to biomedical natural language processing by developing suitable architectures and corpora for multiple tasks, tackling novel entities and relations within the PK domain
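
To make the extraction targets concrete, here is a deliberately simple, rule-based sketch of linking pharmacokinetic parameter mentions to nearby values and units. The parameter lexicon, unit list and nearest-number linking rule are illustrative assumptions; the thesis itself develops machine-learned NER, entity-linking and relation-extraction models rather than patterns like these.

```python
# Rule-based sketch of the extraction targets: spotting PK parameter mentions
# (e.g. clearance, half-life, AUC) and tying them to nearby numerical values
# and units. Lexicon, units and the "nearest value" rule are assumptions for
# illustration only.
import re

PARAMETERS = r"(clearance|half-life|volume of distribution|AUC|Cmax|Tmax)"
VALUE_UNIT = r"(\d+(?:\.\d+)?)\s*(mL/min|L/h|h|mg/L|ng\W?h/mL|L/kg)"

def extract_pk_mentions(sentence: str):
    """Return (parameter, value, unit) triples found in one sentence."""
    params = [(m.group(1), m.start()) for m in re.finditer(PARAMETERS, sentence, re.I)]
    values = [(m.group(1), m.group(2), m.start()) for m in re.finditer(VALUE_UNIT, sentence)]
    triples = []
    for p_text, p_pos in params:
        if not values:
            continue
        # naive linking: attach the closest value/unit pair in the sentence
        v_text, u_text, _ = min(values, key=lambda v: abs(v[2] - p_pos))
        triples.append((p_text, float(v_text), u_text))
    return triples

if __name__ == "__main__":
    print(extract_pk_mentions(
        "Oral clearance was 12.4 L/h and the terminal half-life was 6 h."))
```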

    Entity-Oriented Search

    This open access book covers all facets of entity-oriented search—where “search” can be interpreted in the broadest sense of information access—from a unified point of view, and provides a coherent and comprehensive overview of the state of the art. It represents the first synthesis of research in this broad and rapidly developing area. Selected topics are discussed in-depth, the goal being to establish fundamental techniques and methods as a basis for future research and development. Additional topics are treated at a survey level only, containing numerous pointers to the relevant literature. A roadmap for future research, based on open issues and challenges identified along the way, rounds out the book. The book is divided into three main parts, sandwiched between introductory and concluding chapters. The first two chapters introduce readers to the basic concepts, provide an overview of entity-oriented search tasks, and present the various types and sources of data that will be used throughout the book. Part I deals with the core task of entity ranking: given a textual query, possibly enriched with additional elements or structural hints, return a ranked list of entities. This core task is examined in a number of different variants, using both structured and unstructured data collections, and numerous query formulations. In turn, Part II is devoted to the role of entities in bridging unstructured and structured data. Part III explores how entities can enable search engines to understand the concepts, meaning, and intent behind the query that the user enters into the search box, and how they can provide rich and focused responses (as opposed to merely a list of documents)—a process known as semantic search. The final chapter concludes the book by discussing the limitations of current approaches, and suggesting directions for future research. Researchers and graduate students are the primary target audience of this book. A general background in information retrieval is sufficient to follow the material, including an understanding of basic probability and statistics concepts as well as a basic knowledge of machine learning concepts and supervised learning algorithms
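
The core entity-ranking task of Part I can be illustrated with a toy example: score each entity's textual description against a keyword query and return a ranked list. The catalogue and the TF-IDF/cosine scoring below are simple stand-ins for the retrieval models (e.g. BM25, language models, learning to rank) that the book actually covers.

```python
# Toy sketch of entity ranking: given a keyword query, score each entity by
# matching the query against a textual "entity description" and return a
# ranked list. The catalogue and TF-IDF/cosine scoring are illustrative
# stand-ins, not the book's retrieval models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

entity_descriptions = {
    "Ada Lovelace": "mathematician wrote the first algorithm for the analytical engine",
    "Alan Turing": "computer scientist cryptanalysis turing machine artificial intelligence",
    "Marie Curie": "physicist chemist research on radioactivity nobel prize",
}

def rank_entities(query: str, descriptions: dict) -> list:
    names = list(descriptions)
    vectorizer = TfidfVectorizer().fit(descriptions.values())
    doc_vectors = vectorizer.transform(descriptions.values())
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return sorted(zip(names, scores), key=lambda pair: -pair[1])

print(rank_entities("who wrote the first computer algorithm", entity_descriptions))
```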

    Document analysis by means of data mining techniques

    The huge amount of textual data produced every day by scientists, journalists and Web users allows many different aspects of the information stored in published documents to be investigated. Data mining and information retrieval techniques are exploited to manage and extract information from huge amounts of unstructured textual data. Text mining, also known as text data mining, is the process of extracting high-quality information (in terms of relevance, novelty and interestingness) from text by identifying patterns. Text mining typically involves structuring the input text by means of parsing and other linguistic analysis, or sometimes by removing extraneous data, and then finding patterns in the structured data. The patterns are finally evaluated and the output interpreted to accomplish the desired task. Recently, text mining has gained attention in several fields, such as security (analysis of Internet news), commerce (search and indexing) and academia (query answering). Beyond retrieving the documents containing the words given in a user query, text mining may provide direct answers to the user via semantic, content-based analysis (the meaning of the content and its context). It can also act as an intelligence analyst and is used in some e-mail spam filters to screen out unwanted material. Text mining usually includes tasks such as clustering, categorization, sentiment analysis, entity recognition, entity relation modeling and document summarization. In particular, summarization approaches are suitable for identifying relevant sentences that describe the main concepts presented in a document dataset. Furthermore, the knowledge contained in the most informative sentences can be employed to improve the understanding of user and/or community interests. Different approaches have been proposed to extract summaries from unstructured text documents. Some of them are based on the statistical analysis of linguistic features by means of supervised machine learning or data mining methods, such as Hidden Markov models, neural networks and Naive Bayes methods. An appealing research field is the extraction of summaries tailored to the major user interests. In this context, the problem of extracting useful information according to domain knowledge related to the user interests is a challenging task. The main topics have been the study and design of novel data representations and data mining algorithms useful for managing and extracting knowledge from unstructured documents. This thesis describes an effort to investigate the application of data mining approaches, firmly established for transactional data (e.g., frequent itemset mining), to textual documents. Frequent itemset mining is a widely used exploratory technique for discovering hidden correlations that frequently occur in the source data. Although its application to transactional data is well established, the usage of frequent itemsets in textual document summarization had not been investigated so far. This work exploits frequent itemsets for the purpose of multi-document summarization: a novel multi-document summarizer, namely ItemSum (Itemset-based Summarizer), is presented, which is based on an itemset-based model, i.e., a framework comprising frequent itemsets extracted from the document collection. 
Highly representative and non-redundant sentences are selected for the summary by jointly considering sentence coverage of a concise, highly informative itemset-based model and a sentence relevance score based on tf-idf statistics. To evaluate ItemSum's performance, a suite of experiments on a collection of news articles has been performed. The results show that ItemSum significantly outperforms widely used previous summarizers in terms of precision, recall, and F-measure. We also validated our approach against a large number of approaches on the DUC’04 document collection. Performance comparisons, in terms of precision, recall, and F-measure, have been performed by means of the ROUGE toolkit. In most cases, ItemSum significantly outperforms the considered competitors. Furthermore, the impact of both the main algorithm parameters and the adopted model coverage strategy on the summarization performance is investigated as well. In some cases, the soundness and readability of the generated summaries are unsatisfactory, because the summaries do not cover all the semantically relevant data facets in an effective way. A step beyond, towards the generation of more accurate summaries, has been made by semantics-based summarizers. Such approaches combine general-purpose summarization strategies with ad-hoc linguistic analysis. The key idea is to also consider the semantics behind the document content, to overcome the limitations of general-purpose strategies in differentiating between sentences based on their actual meaning and context. Most of the previously proposed approaches perform the semantics-based analysis as a preprocessing step that precedes the main summarization process. Therefore, the generated summaries could not entirely reflect the actual meaning and context of the key document sentences. In contrast, we aim at tightly integrating the ontology-based document analysis into the summarization process, in order to take the semantic meaning of the document content into account during the sentence evaluation and selection processes. With this in mind, we propose a new multi-document summarizer, namely Yago-based Summarizer, that integrates an established ontology-based entity recognition and disambiguation step. Named Entity Recognition based on the Yago ontology is used for the text summarization task. The Named Entity Recognition (NER) task is concerned with marking occurrences of specific objects being mentioned. These mentions are then classified into a set of predefined categories. Standard categories include “person”, “location”, “geo-political organization”, “facility”, “organization”, and “time”. The use of NER in text summarization improved the summarization process by increasing the rank of informative sentences. To demonstrate the effectiveness of the proposed approach, we compared its performance on the DUC’04 benchmark document collections with that of a large number of state-of-the-art summarizers. Furthermore, we also performed a qualitative evaluation of the soundness and readability of the generated summaries and a comparison with the results that were produced by the most effective summarizers. A parallel effort has been devoted to integrating semantics-based models and the knowledge acquired from social networks into a document summarization model named SociONewSum. 
The effort addresses the sentence-based generic multi-document summarization problem, which can be formulated as follows: given a collection of news articles about the same topic, the goal is to extract a concise yet informative summary consisting of the most salient document sentences. An established ontological model has been used to improve summarization performance by integrating a textual entity recognition and disambiguation step. Furthermore, the analysis of user-generated content from Twitter has been exploited to discover current social trends and improve the appeal of the generated summaries. An experimental evaluation of SociONewSum's performance was conducted on real English-written news article collections and Twitter posts. The achieved results demonstrate the effectiveness of the proposed summarizer, in terms of different ROUGE scores, compared to state-of-the-art open-source summarizers as well as to a baseline version of SociONewSum that does not perform any UGC analysis. Furthermore, the readability of the generated summaries has also been analyzed.
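
As an illustration of the itemset-based idea behind ItemSum, the sketch below treats each sentence as a transaction of content terms, mines frequent term itemsets across the collection, and greedily selects non-redundant sentences that cover the highest-support itemsets. The tokenization, support threshold, itemset size limit and greedy objective are simplifying assumptions; the actual summarizer also incorporates a tf-idf sentence relevance score and was evaluated with ROUGE.

```python
# Hedged sketch of itemset-based multi-document summarization: sentences are
# transactions of content terms, frequent term itemsets are mined across the
# collection, and sentences covering the highest-support itemsets are picked
# greedily. Thresholds and scoring are simplifying assumptions.
from itertools import combinations
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "was", "for"}

def terms(sentence):
    return {w for w in re.findall(r"[a-z]+", sentence.lower()) if w not in STOPWORDS}

def frequent_itemsets(sentences, min_support=2, max_size=2):
    """Count term itemsets (size 1..max_size) appearing in >= min_support sentences."""
    counts = Counter()
    for s in sentences:
        t = sorted(terms(s))
        for size in range(1, max_size + 1):
            counts.update(combinations(t, size))
    return {itemset: c for itemset, c in counts.items() if c >= min_support}

def summarize(sentences, n_sentences=2):
    itemsets = frequent_itemsets(sentences)
    covered, summary = set(), []
    for _ in range(n_sentences):
        def gain(s):
            t = terms(s)
            return sum(c for iset, c in itemsets.items()
                       if iset not in covered and set(iset) <= t)
        best = max((s for s in sentences if s not in summary), key=gain)
        summary.append(best)
        covered.update(iset for iset in itemsets if set(iset) <= terms(best))
    return summary

docs = [
    "The new summarizer relies on frequent itemsets mined from the documents.",
    "Frequent itemsets capture correlations among terms in the documents.",
    "The weather report announced rain for the weekend.",
    "Mined itemsets guide the selection of representative sentences.",
]
print(summarize(docs))
```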