4,978 research outputs found

    On the Generation of Medical Question-Answer Pairs

    Question answering (QA) has achieved promising progress recently. However, answering a question in real-world scenarios such as the medical domain is still challenging, due to the requirement of external knowledge and the insufficient quantity of high-quality training data. In light of these challenges, we study the task of generating medical QA pairs. With the insight that each medical question can be considered a sample from the latent distribution of questions given answers, we propose an automated medical QA pair generation framework, consisting of an unsupervised key phrase detector that explores unstructured material for validity, and a generator with a multi-pass decoder that integrates structural knowledge for diversity. A series of experiments was conducted on a real-world dataset collected from the National Medical Licensing Examination of China. Both automatic evaluation and human annotation demonstrate the effectiveness of the proposed method. Further investigation shows that incorporating the generated QA pairs into training yields a significant improvement in accuracy for the examination QA system. Comment: AAAI 202
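As an illustration of the key-phrase-detection step described above, the sketch below scores candidate terms of an answer passage with TF-IDF against a small reference corpus. This is a toy stand-in: the function name, the corpus, and the scoring choice are all illustrative, not the paper's actual unsupervised detector.

```python
import math
from collections import Counter

def key_phrases(doc_tokens, corpus, top_k=3):
    """Score the terms of one tokenised document by TF-IDF against a
    small corpus and return the top-k candidates (a toy stand-in for
    an unsupervised key phrase detector)."""
    tf = Counter(doc_tokens)
    n_docs = len(corpus)
    scores = {}
    for term, count in tf.items():
        # Document frequency: how many corpus documents contain the term.
        df = sum(1 for d in corpus if term in d)
        idf = math.log((1 + n_docs) / (1 + df)) + 1.0
        scores[term] = (count / len(doc_tokens)) * idf
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]
```

Terms that appear in many documents (like common clinical words) are down-weighted, so the surviving phrases are the ones most specific to the answer passage.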

    Utilização de dados estruturados na resposta a perguntas relacionadas com saúde (Use of structured data in answering health-related questions)

    The current standard way of searching for information is through some kind of search engine. Even though there has been progress, search is still mainly based on retrieving a list of documents in which the words you searched for appear. Since the user's goal is to find an answer to a question, having to look through multiple documents hoping that one of them has the information they are looking for is not very efficient. The aim of this thesis is to improve that process of searching for information, in this case medical knowledge, in two different ways. The first is to replace the keywords usually given to search engines with something more natural to humans: a question in its natural form. The second is to use the additional information present in a question to provide the user with an answer to that question, instead of a list of documents where the keywords appear. Since social media are the place where people replace search-engine queries with questions that are usually answered by humans, they seem the natural place to look for the questions we aim to answer automatically. The first step in answering those questions is to classify them, in order to determine what kind of information should be present in the answer. The second step is to identify the keywords that would be present if the search were done in the currently standard way. With the keywords identified and the kind of information the question aims to retrieve known, it is then possible to map the question into a query format and retrieve the information needed to provide an answer.
    Currently, the most common way to search for information is through a search engine. Despite progress, its results are still mostly based on returning a list of documents containing the words used in the search, after which the user must go through a set of the presented documents hoping to find the information they seek. Besides being a less natural way of searching for information, it is also less efficient. The goal of this thesis is to improve that information-seeking process, in this case focusing on the health domain. These improvements take two forms: the first is replacing the query normally used in search engines with something more natural to us, a question. The second is to exploit the additional information available only in question form to provide the data needed for its answer, instead of a list of documents containing a set of keywords. Social media are where information seeking happens through questions rather than the queries typical of a search engine, since answers on these platforms are usually given by humans rather than machines; they therefore seem the natural place to collect the questions for which we aim to provide a tool that obtains an answer automatically. The first step towards providing that answer is classifying the questions into different types, making it possible to identify what information is sought. The second step is identifying and categorising the biomedical terms present in the supplied text, those that would be used if the search were being done with conventional tools. With the keywords identified and the type of information that should be present in the answer known, it is then possible to map this information into a format known to computers (a query) and obtain the desired information. Master's in Informatics Engineering
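A minimal sketch of the two steps described above: classify a health question by type, then extract the biomedical keywords into a structured query. The question types, trigger patterns, and lexicon below are invented for illustration; they are not the thesis's actual taxonomy or entity categories.

```python
import re

# Hypothetical question types and trigger patterns (illustrative only).
PATTERNS = {
    "treatment": re.compile(r"\b(treat|cure|medication)\w*", re.I),
    "cause":     re.compile(r"\b(cause|why)\w*", re.I),
    "symptom":   re.compile(r"\b(symptom|sign)\w*", re.I),
}
# Toy biomedical lexicon standing in for a real keyword recogniser.
KEYWORDS = {"diabetes", "flu", "asthma", "fever"}

def question_to_query(question):
    """Classify a health question and extract biomedical keywords,
    producing a structured query instead of a bag of search terms."""
    qtype = next((t for t, p in PATTERNS.items() if p.search(question)), "other")
    terms = [w for w in re.findall(r"\w+", question.lower()) if w in KEYWORDS]
    return {"type": qtype, "keywords": terms}
```

For example, `question_to_query("How do you treat diabetes?")` yields a query whose type is `"treatment"` and whose keyword list is `["diabetes"]`, which can then be mapped onto a structured data source.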

    Normalization of Disease Mentions with Convolutional Neural Networks

    Normalization of disease mentions has an important role in biomedical natural language processing (BioNLP) applications, such as the construction of biomedical databases. Various disease mention normalization systems have been developed, though state-of-the-art systems either rely on candidate concept generation, or do not generalize to new concepts not seen during training. This thesis explores the possibility of building a disease mention normalization system that both generalizes to unseen concepts and does not rely on candidate generation. To this end, it is hypothesized that modern neural networks are sophisticated enough to solve this problem. This hypothesis is tested by building a normalization system using deep learning approaches, and evaluating the accuracy of this system on the NCBI disease corpus. The system leverages semantic information in the biomedical literature by using continuous vector space representations for strings of disease mentions and concepts. A neural encoder is trained to encode vector representations of strings of disease mentions and concepts. This encoder theoretically enables the model to generalize to unseen concepts during training. The encoded strings are used to compare the similarity between concepts and a given mention. Viewing normalization as a ranking problem, the concept with the highest similarity estimated is selected as the predicted concept for the mention. For the development of the system, synthetic data is used for pre-training to facilitate the learning of the model. In addition, various architectures are explored. While the model succeeds in prediction without candidate concept generation, its performance is not comparable to those of the state-of-the-art systems. Normalization of disease mentions without candidate generation while including the possibility for the system to generalize to unseen concepts is not trivial. 
    Further efforts could focus on, for example, testing more neural architectures and using more sophisticated word representations.
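The ranking view of normalization described above can be sketched as follows, assuming mention and concept strings have already been encoded into vectors by some encoder (the encoder itself is omitted here):

```python
import numpy as np

def normalize_mention(mention_vec, concept_vecs):
    """Rank candidate concepts by cosine similarity to the encoded
    mention and return the index of the best-scoring concept."""
    m = mention_vec / np.linalg.norm(mention_vec)
    C = concept_vecs / np.linalg.norm(concept_vecs, axis=1, keepdims=True)
    # With unit-normalised rows, the dot product is cosine similarity.
    return int(np.argmax(C @ m))
```

Because the score is computed against every concept vector directly, no candidate-generation step is needed, and a concept unseen during training is still reachable as long as its string can be encoded.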

    Automated extraction of genes associated with antibiotic resistance from the biomedical literature

    The detection of bacterial antibiotic resistance phenotypes is important when making clinical decisions for patient treatment. Conventional phenotypic testing involves culturing bacteria, which requires a significant amount of time and work. Whole-genome sequencing is emerging as a fast alternative for resistance prediction, by considering the presence or absence of certain genes. A lot of research has focused on determining which bacterial genes cause antibiotic resistance, and efforts are being made to consolidate these facts in knowledge bases (KBs). KBs are usually manually curated by domain experts to be of the highest quality. However, this limits the pace at which new facts are added. Automated extraction of gene-antibiotic resistance relations from the biomedical literature is one solution that can simplify the curation process. This paper reports on the development of a text mining pipeline that takes in English biomedical abstracts and outputs genes that are predicted to cause resistance to antibiotics. To test the generalisability of this pipeline, it was then applied to predict genes associated with Helicobacter pylori antibiotic resistance that are not present in common antibiotic resistance KBs or in publications studying H. pylori. These genes would be candidates for further lab-based antibiotic research and for inclusion in these KBs. For relation extraction, state-of-the-art deep learning models were used. These models were trained on a newly developed silver corpus, generated by distant supervision of abstracts using the facts obtained from KBs. The top-performing model was superior to a co-occurrence model, achieving a recall of 95%, a precision of 60% and an F1-score of 74% on a manually annotated holdout dataset. To our knowledge, this project was the first attempt at developing a complete text mining pipeline that incorporates deep learning models to extract gene-antibiotic resistance relations from the literature.
Additional related data can be found at https://github.com/AndreBrincat/Gene-Antibiotic-Resistance-Relation-Extractio
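The distant-supervision step that produces a silver corpus like the one above can be sketched as follows: a sentence mentioning a gene and an antibiotic is labelled positive exactly when the pair is a known resistance fact in the KB. The entity names and KB facts below are illustrative, not drawn from the actual corpus.

```python
def silver_label(sentences, kb_pairs):
    """Distant supervision: label each (sentence, gene, antibiotic)
    triple positive iff the (gene, antibiotic) pair is a known
    resistance fact in the knowledge base."""
    labelled = []
    for text, gene, antibiotic in sentences:
        label = 1 if (gene, antibiotic) in kb_pairs else 0
        labelled.append((text, gene, antibiotic, label))
    return labelled
```

The resulting labels are noisy (a sentence can mention a known pair without asserting the relation), which is why the paper's models are ultimately evaluated against a manually annotated holdout set rather than the silver data.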

    On the Use of Parsing for Named Entity Recognition

    [Abstract] Parsing is a core natural language processing technique that can be used to obtain the structure underlying sentences in human languages. Named entity recognition (NER) is the task of identifying the entities that appear in a text. NER is a challenging natural language processing task that is essential to extract knowledge from texts in multiple domains, ranging from financial to medical. It is intuitive that the structure of a text can be helpful to determine whether or not a certain portion of it is an entity and, if so, to establish its concrete limits. However, parsing has been a relatively little-used technique in NER systems, since most of them have chosen to consider shallow approaches to deal with text. In this work, we study the characteristics of NER, a task that is far from being solved despite its long history; we analyze the latest advances in parsing that make its use advisable in NER settings; we review the different approaches to NER that make use of syntactic information; and we propose a new way of using parsing in NER based on casting parsing itself as a sequence labeling task.

    Funding: Xunta de Galicia; ED431C 2020/11. Xunta de Galicia; ED431G 2019/01. This work has been funded by MINECO, AEI and FEDER of UE through the ANSWER-ASAP project (TIN2017-85160-C2-1-R); and by Xunta de Galicia through a Competitive Reference Group grant (ED431C 2020/11). CITIC, as Research Center of the Galician University System, is funded by the Consellería de Educación, Universidade e Formación Profesional of the Xunta de Galicia through the European Regional Development Fund (ERDF/FEDER) with 80%, the Galicia ERDF 2014-20 Operational Programme, and the remaining 20% from the Secretaría Xeral de Universidades (Ref. ED431G 2019/01). Carlos Gómez-Rodríguez has also received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, Grant No. 714150).
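The sequence-labeling view of NER mentioned above is conventionally realised with BIO tags, where each token is marked as Beginning, Inside, or Outside an entity span. A minimal sketch (the entity types here are illustrative):

```python
def to_bio(tokens, entities):
    """Encode entity spans as BIO tags: the sequence-labeling view of
    NER. Each entity is (start, end, type) with `end` exclusive."""
    tags = ["O"] * len(tokens)
    for start, end, etype in entities:
        tags[start] = f"B-{etype}"
        for i in range(start + 1, end):
            tags[i] = f"I-{etype}"
    return tags
```

Once NER is framed this way, any sequence labeler applies, and the paper's proposal of casting parsing itself as sequence labeling lets both tasks share the same machinery.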

    NERO: a biomedical named-entity (recognition) ontology with a large, annotated corpus reveals meaningful associations through text embedding.

    Machine reading (MR) is essential for unlocking valuable knowledge contained in millions of existing biomedical documents. Over the last two decades [1,2], the most dramatic advances in MR have followed in the wake of critical corpus development [3]. Large, well-annotated corpora have been associated with punctuated advances in MR methodology and automated knowledge extraction systems, in the same way that ImageNet [4] was fundamental for developing machine vision techniques. This study contributes six components to an advanced, named entity analysis tool for biomedicine: (a) a new Named Entity Recognition Ontology (NERO), developed specifically for describing textual entities in biomedical texts, which accounts for diverse levels of ambiguity, bridging the scientific sublanguages of molecular biology, genetics, biochemistry, and medicine; (b) detailed guidelines for human experts annotating hundreds of named entity classes; (c) pictographs for all named entities, to simplify the burden of annotation for curators; (d) an original, annotated corpus comprising 35,865 sentences, which encapsulate 190,679 named entities and 43,438 events connecting two or more entities; (e) validated, off-the-shelf, named entity recognition (NER) automated extraction; and (f) embedding models that demonstrate the promise of biomedical associations embedded within this corpus.

    MGCN: Medical Relation Extraction Based on GCN

    With the progress of society and the improvement of living standards, people pay increasing attention to personal health, and WITMED (Wise Information Technology of med) has come to occupy an important position. Relation prediction in the medical field places high demands on the interpretability of a method, but the relationships between medical entities are complex, and existing methods struggle to meet these requirements. This paper proposes a novel medical relation extraction method, MGCN, which combines contextual information to provide global interpretability for relation prediction between medical entities. The method uses a co-occurrence graph and a Graph Convolutional Network to build up a network of relations between entities, uses the open-world assumption to construct potential relations between associated entities, and applies a knowledge-aware attention mechanism to predict the relation for the entity pair of interest. Experiments on the public medical dataset CTF show that MGCN achieves a score of 0.831, demonstrating its effectiveness in medical relation extraction.
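A single graph-convolution step over an entity co-occurrence graph can be sketched as below. This is the standard symmetrically normalised propagation rule used by GCNs in general, not MGCN's exact architecture; the matrices are illustrative.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution step over a co-occurrence graph:
    add self-loops, symmetrically normalise the adjacency matrix,
    then propagate node features X through weights W with ReLU."""
    A_hat = A + np.eye(A.shape[0])            # self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^{-1/2}
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)
```

Stacking such layers lets each entity's representation absorb information from entities it co-occurs with, which is what gives graph-based relation predictions their contextual grounding.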

    Automated Georeferencing of Antarctic Species

    Many text documents in the biological domain contain references to the locations of specific phenomena (e.g. species sightings) in natural language form: "In Garwood Valley summer activity was 0.2% for Umbilicaria aprina and 1.7% for Caloplaca sp. ..." While methods have been developed to extract place names from documents, and attention has been given to the interpretation of spatial prepositions, the ability to connect toponym mentions in text with the phenomena to which they refer (in this case species) has received limited attention, yet would be of considerable benefit for mapping specific phenomena mentioned in text documents. As part of work to create a pipeline that automates georeferencing of species within legacy documents, this paper proposes a method to: (1) recognise species and toponyms within text and (2) match each species mention to the relevant toponym mention. Our methods show significant promise for a bespoke rules- and dictionary-based approach to recognising species within text (F1 scores up to 0.87 including partial matches) but less success, as yet, in recognising toponyms using multiple gazetteers combined with an off-the-shelf natural language processing tool (F1 up to 0.62). Most importantly, we offer a contribution to the relatively nascent area of matching toponym references to the objects they locate (in our case species), including cases in which the toponym and species are in different sentences. We use tree-based models to achieve precision as high as 0.88, or an F1 score up to 0.68, depending on the downsampling rate. Initial results outperform previous research on detecting entity relationships that may cross sentence boundaries within biomedical text, and differ from previous work in specifically addressing species mapping.