
    Clinical Text Prediction with Numerically Grounded Conditional Language Models

    Assisted text input techniques can save time and effort and improve text quality. In this paper, we investigate how grounded and conditional extensions to standard neural language models can improve word prediction and completion. These extensions incorporate a structured knowledge base and numerical values from the text into the context used to predict the next word. Our automated evaluation on a clinical dataset shows that the extended models significantly outperform standard models. Our best system uses both conditioning and grounding, as their benefits are orthogonal. For word prediction with a list of 5 suggestions, it improves recall from 25.03% to 71.28%, and for word completion it improves keystroke savings from 34.35% to 44.81%, where the theoretical bound for this dataset is 58.78%. We also perform a qualitative investigation of how models with lower perplexity occasionally fare better at the tasks. We find that at test time numbers have more influence at the document level than on individual word probabilities.
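As an illustration of the keystroke-savings metric reported above, the following sketch scores a toy word completer: a completion is accepted as soon as the target word appears in the suggestion list, and the remaining characters count as saved. The vocabulary, the `completer` function, and the acceptance policy are assumptions for illustration, not the paper's implementation.

```python
def keystroke_savings(words, completer, n_suggestions=5):
    """Fraction of keystrokes saved when a completion is accepted
    as soon as the target word appears in the suggestion list."""
    total = saved = 0
    for word in words:
        total += len(word)
        for typed in range(len(word)):  # characters typed so far
            if word in completer(word[:typed], n_suggestions):
                saved += len(word) - typed  # remaining characters skipped
                break
    return saved / total if total else 0.0

# Toy completer: suggests words from a fixed vocabulary by prefix match.
vocab = ["patient", "pressure", "pulse", "pain", "plan", "procedure", "protocol"]
def completer(prefix, n):
    return [w for w in vocab if w.startswith(prefix)][:n]

print(round(keystroke_savings(["patient", "protocol"], completer), 2))  # 0.87
```

With this policy, "patient" is completed before any typing (it is in the first five suggestions), while "protocol" needs two keystrokes before it enters the top five.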

    Efficient index structures for and applications of the CompleteSearch engine

    Traditional search engines, such as Google, offer response times well under one second, even for a corpus with more than a billion documents. They achieve this by making use of a (parallelized) inverted index. However, the inverted index is primarily designed to efficiently process simple keyword queries, which is why search engines rarely offer support for queries that cannot be (re-)formulated in this manner, possibly using "special keywords". We have devised data structures for the CompleteSearch engine, a search engine developed at the Max-Planck Institute for Computer Science, which supports a far greater set of query types without sacrificing efficiency. It is built on top of a context-sensitive prefix search and completion mechanism. This mechanism is, on the one hand, simple enough to be efficiently realized by appropriate algorithms, and, on the other hand, powerful enough to support additional query types. We present two new data structures, which can be used to solve the underlying prefix search and completion problem. The first, called AutoTree, has the theoretically desirable property that, for non-degenerate corpora and queries, its running time is proportional to the sum of the sizes of the input and output. The second, called HYB, focuses on compressibility of the data and is optimized for scenarios where the index does not fit in main memory but resides on disk. Both beat the baseline algorithm, which uses an inverted index, by a factor of 4-10 in terms of average processing time. A direct head-to-head comparison shows that, in a general setting, HYB outperforms AutoTree. Thanks to the HYB data structure, the CompleteSearch engine efficiently supports features such as faceted search for categorical information, completion to synonyms, basic database-style queries on relational tables, and efficient search of ontologies.
For each of these features, we demonstrate the viability of our approach through experiments. Finally, we also prove the practical relevance of our work through a small user study with employees of the helpdesk of our institute.
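The prefix search and completion problem that AutoTree and HYB solve can be illustrated with a deliberately naive baseline: a sorted vocabulary queried by binary search. This sketch only mirrors the query semantics (return all words matching a prefix together with their posting lists, optionally restricted to a context set of documents); class and method names are ours, and the real data structures achieve far better time and space bounds.

```python
import bisect

class PrefixIndex:
    """Naive baseline for prefix search and completion: the vocabulary is
    kept sorted, so all words sharing a prefix form a contiguous range
    that two binary searches can locate."""
    def __init__(self, postings):  # postings: {word: [doc ids]}
        self.words = sorted(postings)
        self.postings = postings

    def complete(self, prefix):
        lo = bisect.bisect_left(self.words, prefix)
        hi = bisect.bisect_left(self.words, prefix + "\uffff")
        return {w: self.postings[w] for w in self.words[lo:hi]}

    def prefix_search(self, prefix, context_docs=None):
        """Context-sensitive variant: keep only hits occurring in the
        documents that matched the preceding query parts."""
        hits = self.complete(prefix)
        if context_docs is None:
            return hits
        return {w: [d for d in ds if d in context_docs]
                for w, ds in hits.items() if set(ds) & set(context_docs)}

idx = PrefixIndex({"search": [1, 2], "searching": [2], "seat": [3]})
print(idx.complete("sear"))                       # both completions of "sear"
print(idx.prefix_search("sear", context_docs={2}))  # restricted to document 2
```

The context set is what makes the mechanism composable: the documents returned by one prefix query become the context for the next, which is the basis for the faceted-search and database-style features mentioned above.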

    Normalizing Spontaneous Reports into MedDRA: some Experiments with MagiCoder

    Text normalization into medical dictionaries is useful to support clinical tasks. A typical setting is Pharmacovigilance (PV). The manual detection of suspected adverse drug reactions (ADRs) in narrative reports is time consuming, and Natural Language Processing (NLP) provides concrete help to PV experts. In this paper we carry out experiments to test the performance of MagiCoder, an NLP application designed to extract MedDRA terms from narrative clinical text. Given a narrative description, MagiCoder proposes an automatic encoding. The pharmacologist reviews, (possibly) corrects, and then validates the solution. This drastically reduces the time needed to validate reports with respect to a completely manual encoding. In previous work we mainly tested MagiCoder's performance on spontaneous reports written in Italian. In this paper, we add some new features, change the experiment design, and run further tests on MagiCoder. Moreover, we change language, moving to English documents. In particular, we test MagiCoder on the CADEC dataset, a corpus of manually annotated posts about ADRs collected from social media.
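To make the dictionary-encoding task concrete, here is a minimal, hypothetical sketch of term extraction as greedy longest-match lookup of dictionary phrases over a token stream. The term codes are invented placeholders (not real MedDRA codes), and MagiCoder's actual matching strategy is considerably more sophisticated than this.

```python
def extract_terms(text, dictionary, max_len=4):
    """Greedy longest-match lookup of dictionary terms in free text.
    `dictionary` maps lowercase phrases to codes; at each position we
    try the longest candidate phrase first and skip past any match."""
    tokens = text.lower().split()
    hits, i = [], 0
    while i < len(tokens):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + n])
            if phrase in dictionary:
                hits.append((phrase, dictionary[phrase]))
                i += n
                break
        else:
            i += 1  # no dictionary phrase starts here
    return hits

# Invented placeholder codes standing in for a MedDRA-like dictionary.
meddra_like = {"nausea": "PT001", "abdominal pain": "PT002"}
print(extract_terms("Patient reports abdominal pain and nausea", meddra_like))
# [('abdominal pain', 'PT002'), ('nausea', 'PT001')]
```

Longest-match-first matters here: without it, a dictionary containing both "pain" and "abdominal pain" would encode only the shorter, less specific term.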

    Information retrieval and text mining technologies for chemistry

    Efficient access to chemical information contained in scientific literature, patents, technical reports, or the web is a pressing need shared by researchers and patent attorneys from different chemical disciplines. Retrieval of important chemical information in most cases starts with finding relevant documents for a particular chemical compound or family. Targeted retrieval of chemical documents is closely connected to the automatic recognition of chemical entities in the text, which commonly involves the extraction of the entire list of chemicals mentioned in a document, including any associated information. In this Review, we provide a comprehensive and in-depth description of fundamental concepts, technical implementations, and current technologies for meeting these information demands. A strong focus is placed on community challenges addressing system performance, in particular the CHEMDNER and CHEMDNER patents tasks of BioCreative IV and V, respectively. Considering the growing interest in the construction of automatically annotated chemical knowledge bases that integrate chemical information and biological data, we also present cheminformatics approaches for mapping extracted chemical names to chemical structures and their subsequent annotation, together with text mining applications for linking chemistry with biological information. Finally, future trends and current challenges are highlighted as a roadmap proposal for research in this emerging field.

A.V. and M.K. acknowledge funding from the European Community's Horizon 2020 Program (project reference: 654021 - OpenMinted). M.K. additionally acknowledges the Encomienda MINETAD-CNIO as part of the Plan for the Advancement of Language Technology. O.R. and J.O. thank the Foundation for Applied Medical Research (FIMA), University of Navarra (Pamplona, Spain). This work was partially funded by Consellería de Cultura, Educación e Ordenación Universitaria (Xunta de Galicia), FEDER (European Union), and the Portuguese Foundation for Science and Technology (FCT) under the scope of the strategic funding of the UID/BIO/04469/2013 unit and COMPETE 2020 (POCI-01-0145-FEDER-006684). We thank Iñigo García-Yoldi for useful feedback and discussions during the preparation of the manuscript.
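As a rough illustration of the chemical entity recognition task discussed above, the sketch below flags candidate mentions with a naive suffix heuristic. This is an invented toy: real CHEMDNER-style systems use statistical or neural sequence taggers, and the suffix list here is far from complete or precise (it misses "aspirin" and would fire on many non-chemical words).

```python
import re

# Toy list of suffixes common in systematic chemical names; purely
# illustrative, not a real chemical nomenclature rule set.
CHEM_SUFFIXES = ("ol", "ine", "ide", "ate", "ane", "ene", "yne", "acid")

def chemical_candidates(text):
    """Return (token, start, end) for tokens that look chemical-like
    according to the crude suffix heuristic above."""
    cands = []
    for m in re.finditer(r"[A-Za-z][A-Za-z0-9\-]+", text):
        tok = m.group(0)
        if tok.lower().endswith(CHEM_SUFFIXES):
            cands.append((tok, m.start(), m.end()))
    return cands

print(chemical_candidates(
    "Aspirin inhibits cyclooxygenase; ethanol and acetate were controls."))
```

Character offsets are returned alongside each candidate because downstream steps (such as mapping names to structures) need to locate the mention in the original document.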

    Robust Mobile Visual Recognition System: From Bag of Visual Words to Deep Learning

    With billions of images captured by mobile users every day, automatically recognizing the contents of such images has become a particularly important feature for various mobile apps, including augmented reality, product search, and visual-based authentication. Traditionally, a client-server architecture is adopted in which the mobile client sends captured images/video frames to a cloud server, which runs a set of task-specific computer vision algorithms and sends back the recognition results. However, such a scheme may cause problems related to user privacy, network stability/availability, and device energy.

In this dissertation, we investigate the problem of building a robust mobile visual recognition system that achieves high accuracy, low latency, low energy cost, and privacy protection. Generally, we study two broad types of recognition methods: bag of visual words (BOVW) based retrieval methods, which search for the nearest-neighbor image to a query image, and state-of-the-art deep learning based methods, which recognize a given image using a trained deep neural network. The challenges of deploying BOVW based retrieval methods include the size of the indexed image database, query latency, feature extraction efficiency, and re-ranking performance. To address these challenges, we first propose EMOD, which enables efficient on-device image retrieval on a downloaded, context-dependent partial image database. The efficiency is achieved by analyzing the BOVW processing pipeline and optimizing each module with algorithmic improvements.

Recent deep learning based recognition approaches have been shown to greatly exceed the performance of traditional approaches. We identify several challenges of applying deep learning based recognition methods in mobile scenarios, namely energy efficiency and privacy protection for real-time visual processing, and mobile visual domain biases. Thus, we propose two techniques to address them: (i) efficiently splitting the workload across heterogeneous computing resources, i.e., mobile devices and the cloud, using our Moca framework, and (ii) using mobile visual domain adaptation as proposed in our collaborative edge-mediated platform DeepCham. Our extensive experiments on large-scale benchmark datasets and off-the-shelf mobile devices show that our solutions provide better results than the state-of-the-art solutions.
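A minimal sketch of the BOVW retrieval idea, using hand-made toy data: local descriptors are quantized against a codebook of "visual words", each image is summarized as a normalized word histogram, and the most similar database image is retrieved. The histogram-intersection score and all names are illustrative choices, not EMOD's actual pipeline, which adds an inverted file, weighting, and re-ranking.

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Quantize each local descriptor to its nearest codebook centroid and
    return a normalized visual-word histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = dists.argmin(axis=1)                      # visual-word id per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def nearest_image(query_hist, db_hists):
    """Index of the most similar database histogram, scored by histogram
    intersection (sum of element-wise minima)."""
    sims = [np.minimum(query_hist, h).sum() for h in db_hists]
    return int(np.argmax(sims))

# Toy 2-D "descriptors" and a 3-word codebook, chosen by hand.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
img_a = np.array([[0.1, 0.0], [0.0, 0.1], [1.1, 0.9]])   # mostly word 0
img_b = np.array([[2.1, 1.9], [1.9, 2.0], [1.0, 1.1]])   # mostly word 2
db = [bovw_histogram(img_a, codebook), bovw_histogram(img_b, codebook)]
query = bovw_histogram(np.array([[0.2, 0.1], [0.9, 1.0], [0.0, 0.0]]), codebook)
print(nearest_image(query, db))  # 0: the query quantizes like img_a
```

Because matching happens on compact histograms rather than raw descriptors, a context-dependent partial database of this form is small enough to download and search on-device, which is the setting EMOD targets.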