8 research outputs found

    Inter-Coder Agreement for Computational Linguistics

    This article is a survey of methods for measuring agreement among corpus annotators. It exposes the mathematics and underlying assumptions of agreement coefficients, covering Krippendorff's alpha as well as Scott's pi and Cohen's kappa; discusses the use of coefficients in several annotation tasks; and argues that weighted, alpha-like coefficients, traditionally less used than kappa-like measures in computational linguistics, may be more appropriate for many corpus annotation tasks, but that their use makes the interpretation of the value of the coefficient even harder.
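
    To make the shared structure of these coefficients concrete, here is a minimal sketch (not taken from the article) of Cohen's kappa for two coders: kappa-like measures all have the form (observed agreement - expected agreement) / (1 - expected agreement) and differ mainly in how expected (chance) agreement is estimated. The toy labels below are illustrative.

        from collections import Counter

        def cohens_kappa(coder1, coder2):
            """Cohen's kappa for two coders labelling the same items (nominal categories)."""
            n = len(coder1)
            observed = sum(a == b for a, b in zip(coder1, coder2)) / n
            dist1, dist2 = Counter(coder1), Counter(coder2)
            # Cohen's kappa keeps an individual label distribution per coder;
            # Scott's pi would instead pool both coders into a single distribution.
            expected = sum((dist1[c] / n) * (dist2[c] / n) for c in dist1)
            return (observed - expected) / (1 - expected)

        # Two coders annotate four items with categories "a" / "b".
        print(cohens_kappa(["a", "b", "a", "a"], ["a", "b", "b", "a"]))  # 0.5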

    Annotation of Pharmaceutical Assay Data Using Text Mining Techniques

    Legacy data stores of experimental assay data in a pharmaceutical R&D organization are poorly structured and annotated, which hinders the integration of these data with data from more recent research programs and from other publicly available clinical, biological and chemical data sources. Being able to integrate and analyze these data in aggregate would maximize their value and help inform, and potentially improve, the drug discovery process. In this study, text mining and information extraction tools and techniques were applied to improve the annotation of a subset of these data in an accurate and automated fashion. Experimental results show promise for classifying some features of the available assay data: initial classification results using a Naïve Bayes classifier achieved high accuracy (up to 93%). This indicates that the methods described in this study can be extended to larger datasets to extract more annotations from the available data.
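
    As a rough illustration of the classification step reported above, the following sketch trains a Naïve Bayes text classifier with scikit-learn; the assay descriptions, category labels and feature settings are hypothetical placeholders, not the study's actual data or pipeline.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB

        # Hypothetical assay free-text descriptions and annotation categories;
        # the study's legacy records and label set are not given in the abstract.
        texts = [
            "IC50 determined in a CYP3A4 inhibition assay",
            "solubility measured at pH 7.4 in phosphate buffer",
            "Ki measured against CYP2D6 in human liver microsomes",
            "kinetic solubility assessed by nephelometry",
        ]
        labels = ["enzyme_inhibition", "solubility", "enzyme_inhibition", "solubility"]

        model = make_pipeline(CountVectorizer(lowercase=True, ngram_range=(1, 2)),
                              MultinomialNB())
        model.fit(texts, labels)
        print(model.predict(["CYP inhibition IC50 assay in microsomes"]))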

    A machine learning approach to identify clinical trials involving nanodrugs and nanodevices from ClinicalTrials.gov

    BACKGROUND: Clinical Trials (CTs) are essential for bridging the gap between experimental research on new drugs and their clinical application. Just as CTs for traditional drugs and biologics have helped accelerate the translation of biomedical findings into medical practice, CTs for nanodrugs and nanodevices could advance novel nanomaterials as agents for diagnosis and therapy. Although there is publicly available information about nanomedicine-related CTs, the online archiving of this information is carried out without adhering to criteria that discriminate between studies involving nanomaterials or nanotechnology-based processes (nano) and CTs that do not involve nanotechnology (non-nano). Finding out whether nanodrugs and nanodevices were involved in a study from CT summaries alone is a challenging task. At the time of writing, CTs archived in the well-known online registry ClinicalTrials.gov cannot easily be told apart as nano or non-nano CTs, even by domain experts, due to the lack of both a common definition of nanotechnology and standards for reporting nanomedical experiments and results. METHODS: We propose a supervised learning approach for classifying CT summaries from ClinicalTrials.gov according to whether they fall into the nano or the non-nano category. Our method involves several stages: i) extraction and manual annotation of CTs as nano vs. non-nano, ii) pre-processing and automatic classification, and iii) performance evaluation using several state-of-the-art classifiers under different transformations of the original dataset. RESULTS AND CONCLUSIONS: The performance of the best automated classifier closely matches that of experts (AUC over 0.95), suggesting that it is feasible to automatically detect the presence of nanotechnology products in CT summaries with a high degree of accuracy. This can significantly speed up the process of finding out whether reports on ClinicalTrials.gov might be relevant to a particular nanoparticle or nanodevice, which is essential to discover any precedents for nanotoxicity events or advantages for targeted drug therapy.
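
    A hedged sketch of stages (ii) and (iii) above: vectorize CT summaries and train a supervised classifier, then score it with ROC AUC. The classifier choice (a linear SVM), the toy summaries and the labels are assumptions for illustration, not the authors' exact setup.

        from sklearn.pipeline import make_pipeline
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.svm import LinearSVC
        from sklearn.metrics import roc_auc_score

        # Toy stand-ins for CT summaries and their manual nano / non-nano labels (stage i).
        summaries = [
            "liposomal doxorubicin nanoparticle formulation for solid tumors",
            "albumin-bound paclitaxel nanoparticles in metastatic breast cancer",
            "gold nanoshell mediated photothermal ablation of refractory tumors",
            "oral metformin versus placebo in type 2 diabetes",
            "behavioral therapy program for smoking cessation",
            "standard chemotherapy regimen comparison in lymphoma",
        ]
        labels = [1, 1, 1, 0, 0, 0]  # 1 = nano, 0 = non-nano

        clf = make_pipeline(TfidfVectorizer(stop_words="english"), LinearSVC())
        clf.fit(summaries, labels)
        # The study estimated AUC on held-out data; scoring the training set here
        # only illustrates the API.
        print(roc_auc_score(labels, clf.decision_function(summaries)))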

    Plataforma colaborativa de anotação de literatura biomédica (Collaborative platform for biomedical literature annotation)

    With the overwhelming amount of biomedical textual information being produced, several manual curation efforts have been set up to extract concepts and their relationships and store them in structured resources. Since manual annotation is a very demanding and expensive task, computerized solutions were developed to perform such tasks automatically. Nevertheless, high-end information extraction techniques are still not widely used by biomedical research communities, mainly due to the lack of standards and to limitations in usability. Interactive annotation tools intend to fill this gap, taking advantage of automatic techniques and existing knowledge bases to assist expert curators in their daily tasks. This thesis presents Egas, a web-based platform for biomedical text mining and assisted curation with highly usable interfaces for manual and automatic inline annotation of concepts and relations. Furthermore, a comprehensive set of knowledge bases is integrated and indexed to provide straightforward concept normalization features. Additionally, curators can rely on real-time collaboration and conversation functionalities, allowing them to discuss details of the annotation task and providing instant feedback on each other's interactions. Egas also provides interfaces for on-demand management of the annotation task settings and guidelines, and supports standard formats and literature services to import and export documents. Taking advantage of Egas, we participated in the BioCreative IV interactive annotation task, targeting the assisted identification of protein-protein interactions described in PubMed abstracts related to neuropathological disorders. When evaluated by expert curators, Egas obtained very positive scores in terms of usability, reliability and performance. These results, together with the provided innovative features, place Egas as a state-of-the-art solution for fast and accurate curation of information, facilitating the task of creating and updating knowledge bases in a more consistent way.

    With the ever-growing amount of biomedical literature produced every day, several efforts have been made to extract the concepts and relations it contains and store them in a structured form. Since manual concept extraction is an extremely demanding and exhausting task, automatic annotation solutions have emerged. However, even the most complete annotation systems have not been well received by research teams, largely due to shortcomings in usability and interface standards. To fill this gap, interactive annotation tools are needed that take advantage of automatic annotation systems and existing databases to help annotators in their day-to-day tasks. This dissertation presents a usability-oriented platform for annotating biomedical literature that supports both manual and automatic annotation. Several databases are integrated into the system to facilitate the normalization of annotated concepts. Users can also count on collaborative features throughout the application, stimulating interaction between annotators and, in this way, the production of better results. The system also provides features for importing and exporting files, project management and annotation guidelines. With this platform, Egas, we participated in the BioCreative IV interactive annotation task (IAT), namely in the identification of protein-protein interactions. After evaluation by a group of annotators, Egas obtained the best results among the participating systems with respect to usability, reliability and performance.
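
    As a minimal illustration (not Egas's actual data model), an inline annotation with concept normalization can be thought of as a text span plus the knowledge-base identifier it resolves to, with relations linking two such annotations; all names and identifiers below are hypothetical examples.

        from dataclasses import dataclass

        @dataclass
        class Annotation:
            doc_id: str        # e.g. a PubMed identifier
            start: int         # character offsets of the annotated span
            end: int
            text: str
            concept_type: str  # e.g. "protein" or "disorder"
            kb_id: str         # normalized identifier in an integrated knowledge base

        @dataclass
        class Relation:
            subject: Annotation
            object: Annotation
            rel_type: str      # e.g. "protein-protein interaction"

        # A hypothetical normalized annotation on a PubMed abstract.
        a = Annotation("PMID:0000000", 10, 14, "SNCA", "protein", "UniProt:P37840")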

    Genomics data integration for knowledge discovery using genome annotations from molecular databases and scientific literature

    One of the major global challenges of today is to meet the food demands of an ever-increasing population (food demand is expected to increase by 50% by 2030). One approach to address this challenge is to breed new crop varieties that yield more even under unfavorable conditions, e.g. varieties with improved tolerance to drought and/or resistance to pathogens. However, designing a breeding program is a laborious and time-consuming effort that often lacks the capacity to generate new cultivars quickly in response to the required traits. Recent advances in biotechnology and genomics data science have the potential to greatly accelerate breeding programs and make them more precise. As large-scale genomic data sets for crop species are spread across multiple independent data sources and the scientific literature, this thesis provides innovative technologies that use natural language processing (NLP) and semantic web technologies to address the challenges of integrating genomic data for improving plant breeding. Firstly, we developed a supervised NLP model, with the help of IBM Watson, to extract knowledge networks containing genotype-phenotype associations for potato tuber flesh color from the scientific literature. Secondly, a table mining tool called QTLTableMiner++ (QTM) was developed, which enables the discovery of novel genomic regions (such as QTL regions) that positively or negatively affect traits of interest. The objective of both of the above-mentioned NLP techniques was to extract information that is only implicitly described in the literature and is not available in structured resources such as databases. Thirdly, with the help of semantic web technology, the Solanaceae linked data platform (pbg-ld) was developed to semantically integrate genotypic and phenotypic data of Solanaceae species. This platform combines unstructured data from the scientific literature and structured data from publicly available biological databases using the Linked Data approach. Lastly, analysis workflows for prioritizing candidate genes within QTL regions were tested using pbg-ld. Hence, this research provides in-silico knowledge discovery tools and genomic data infrastructure that aid researchers and breeders in the design of precise and improved breeding programs.
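
    To illustrate the Linked Data idea behind pbg-ld (without reproducing its actual schema), the sketch below uses the rdflib library to place a literature-mined genotype-phenotype association and a database record about the same, hypothetical, gene into one RDF graph and query them together; all URIs, predicates and values are illustrative placeholders.

        from rdflib import Graph, Literal, Namespace

        EX = Namespace("http://example.org/pbg/")  # placeholder namespace
        g = Graph()

        # A genotype-phenotype association mined from the literature (NLP/QTM step).
        g.add((EX["gene/geneX"], EX.associatedWithTrait, EX["trait/tuber_flesh_color"]))
        # The same (hypothetical) gene as annotated in a structured genome database.
        g.add((EX["gene/geneX"], EX.locatedOnChromosome, Literal("chr05")))

        # One query now spans both sources.
        q = """
        SELECT ?gene ?chrom WHERE {
          ?gene <http://example.org/pbg/associatedWithTrait> ?trait .
          ?gene <http://example.org/pbg/locatedOnChromosome> ?chrom .
        }"""
        for row in g.query(q):
            print(row.gene, row.chrom)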