
    Using natural language processing techniques to inform research on nanotechnology

    The literature in the field of nanotechnology is growing exponentially as more and more engineered nanomaterials are created, characterized, and tested for performance and safety. With this deluge of published data, there is a need for natural language processing approaches to semi-automate the cataloguing of engineered nanomaterials and their associated physico-chemical properties, performance, exposure scenarios, and biological effects. In this paper, we review the different informatics methods that have been applied to patent mining, nanomaterial/device characterization, nanomedicine, and environmental risk assessment. Nine natural language processing (NLP)-based tools were identified: NanoPort, NanoMapper, TechPerceptor, a Text Mining Framework, a Nanodevice Analyzer, a Clinical Trial Document Classifier, Nanotoxicity Searcher, NanoSifter, and NEIMiner. We conclude with recommendations for sharing NLP-related tools through online repositories to broaden participation in nanoinformatics.
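    None of the nine tools is shown here, but as a rough illustration of the kind of semi-automated cataloguing the review calls for, the following minimal sketch pairs nanomaterial mentions with particle-size measurements. The material lexicon and patterns are hypothetical, not drawn from any of the tools above.

    # Minimal sketch of rule-based extraction of nanomaterial/property pairs.
    # The material list and size pattern are illustrative placeholders.
    import re

    MATERIALS = ["TiO2", "ZnO", "silver nanoparticle", "carbon nanotube"]
    SIZE_RE = re.compile(r"(\d+(?:\.\d+)?)\s*nm")  # particle size in nanometres

    def extract(text):
        """Return (material, size_nm) pairs found in one abstract."""
        sizes = [float(m.group(1)) for m in SIZE_RE.finditer(text)]
        found = [m for m in MATERIALS if m.lower() in text.lower()]
        return [(mat, s) for mat in found for s in sizes]

    print(extract("ZnO nanoparticles of 30 nm were tested for cytotoxicity."))
    # -> [('ZnO', 30.0)]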

    From the digital data revolution to digital health and digital economy toward a digital society: Pervasiveness of Artificial Intelligence

    Technological progress has led to powerful computers and communication technologies that now penetrate all areas of science, industry, and our private lives. As a consequence, all of these areas generate digital traces of data that amount to big-data resources. This opens unprecedented opportunities but also poses challenges for the analysis, management, interpretation, and utilization of these data. Fortunately, recent breakthroughs in deep learning algorithms now complement machine learning and statistical methods for the efficient analysis of such data. Furthermore, advances in text mining and natural language processing, e.g., word-embedding methods, also enable the processing of large amounts of text data from diverse sources such as governmental reports, blog entries in social media, or clinical health records of patients. In this paper, we present a perspective on the role of artificial intelligence in these developments and also discuss potential problems we face in a digital society.
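    As a toy illustration of the word-embedding methods mentioned above, the sketch below trains a small Word2Vec model with gensim (4.x API). The corpus is an invented stand-in for the large text collections the paper has in mind.

    # Toy word-embedding demo with gensim's Word2Vec (gensim >= 4 API).
    from gensim.models import Word2Vec

    corpus = [
        ["patient", "record", "shows", "diabetes"],
        ["blog", "entry", "mentions", "diabetes", "treatment"],
        ["governmental", "report", "on", "digital", "health"],
    ]
    model = Word2Vec(corpus, vector_size=32, window=2, min_count=1, epochs=50)
    print(model.wv.most_similar("diabetes", topn=3))  # nearest words in embedding space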

    Characterizing environmental and phenotypic associations using information theory and electronic health records

    The availability of up-to-date, executable, evidence-based medical knowledge is essential for many clinical applications, such as pharmacovigilance, but executable knowledge is costly to obtain and update. Automated acquisition of environmental and phenotypic associations from biomedical and clinical documents using text mining has shown some success. The usefulness of this association knowledge is limited, however, because the specific relationships between clinical entities remain unknown. In particular, some associations are indirect relations arising from interdependencies in the data. In this work, we develop methods using mutual information (MI) and one of its properties, the data processing inequality (DPI), to help characterize associations that were generated by applying natural language processing to encode clinical information in narrative patient records, followed by statistical methods. Evaluation based on a random sample of two drugs and two diseases indicates an overall precision of 81%. This preliminary study demonstrates that the proposed method is effective in helping to characterize phenotypic and environmental associations obtained from clinical reports.
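    The DPI idea can be made concrete with a small sketch: for a Markov chain X -> Y -> Z, the inequality I(X;Z) <= min(I(X;Y), I(Y;Z)) holds, so the weakest edge in a triplet is a candidate indirect association. The data below are synthetic, not from the study's patient records.

    # Flag the X-Z edge as indirect when its MI is the weakest in the triplet.
    from itertools import combinations
    import numpy as np
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    x = rng.integers(0, 2, 5000)                       # e.g. drug exposure
    y = (x ^ (rng.random(5000) < 0.1)).astype(int)     # mediator, noisy copy of x
    z = (y ^ (rng.random(5000) < 0.1)).astype(int)     # outcome, noisy copy of y

    data = {"X": x, "Y": y, "Z": z}
    mi = {frozenset(p): mutual_info_score(data[p[0]], data[p[1]])
          for p in combinations(data, 2)}

    a, b, c = "X", "Z", "Y"                            # test edge X-Z against mediator Y
    direct = mi[frozenset((a, b))]
    if direct <= min(mi[frozenset((a, c))], mi[frozenset((c, b))]):
        print(f"{a}-{b} looks indirect (MI={direct:.3f}); likely mediated by {c}")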

    Text Mining From Drug Surveillance Report Narratives

    Analysis of postmarket drug surveillance reports is imperative to ensure drug safety and effectiveness. FAERS (the FDA Adverse Event Reporting System) is a surveillance system that monitors adverse events (AEs) from drugs and biologic products. AEs are reported through MedWatch voluntary reports (initiated by patients and healthcare providers) and mandatory reports (initiated by manufacturers). Much of the information in the voluntary AE reports consists of narrative, unstructured text. The increasing volume of individual reports, estimated at more than one million per year, makes it challenging for staff to review such large volumes of narrative during drug clinical review. We are developing a computational approach that uses natural language processing and the UMLS MetaMap biomedical software to parse the narratives, recognize named entities in the text, and extract consumer/patient information along with related drug indications and adverse drug reactions. The goal is a text mining tool that automatically extracts relevant information from report narratives so it can be stored in predefined data fields in the FAERS database for efficient searching and querying during the clinical review process.
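    The concept-recognition step can be approximated with a few lines of dictionary matching. The sketch below is only a stand-in, since the approach described above relies on UMLS MetaMap; the term lists here are hypothetical.

    # Dictionary-based stand-in for the narrative concept-recognition step.
    import spacy
    from spacy.matcher import PhraseMatcher

    nlp = spacy.blank("en")
    matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
    matcher.add("DRUG", [nlp.make_doc(t) for t in ["ibuprofen", "warfarin"]])
    matcher.add("REACTION", [nlp.make_doc(t) for t in ["nausea", "rash", "bleeding"]])

    doc = nlp("Patient started warfarin and developed bleeding after two weeks.")
    for match_id, start, end in matcher(doc):
        print(nlp.vocab.strings[match_id], "->", doc[start:end].text)
    # DRUG -> warfarin
    # REACTION -> bleeding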

    Preprocessing messages posted by dentists to an Internet mailing list: a report of methods developed for a study of clinical content

    Objectives: Mining social media artifacts requires substantial processing before content analysis. In this report, we describe our procedures for preprocessing 14,576 e-mail messages sent to a mailing list of several hundred dental professionals. Our goal was to transform the messages into a format useful for natural language processing (NLP) to enable subsequent discovery of clinical topics expressed in the corpus. Methods: Preprocessing involved message capture, database creation and import, extraction of Multipurpose Internet Mail Extensions (MIME) parts, decoding of encoded text, de-identification, and cleaning. We also developed a Web-based tool to identify signals for noisy strings and sections, and to verify the effectiveness of customized noise filters. We tailored our cleaning strategies to delete text and images that would impede NLP and in-depth content analyses. Before applying the full set of filters to each message, we determined an effective filter order. Results: Preprocessing messages improved the effectiveness of NLP by 38%. Sources of noise included personal information in the salutation, the farewell, and the signature block; names and places mentioned in the body of the text; threads with quoted text; advertisements; embedded or attached images; spam- and virus-scanning notifications; auto-text parts; e-mail addresses; and Web links. We identified 53 patterns of noise and delivered a set of de-identified and cleaned messages to the NLP analyst. Conclusion: Preprocessing electronic messages can markedly improve subsequent NLP to enable discovery of clinical topics. Keywords: Electronic mail; data processing; natural language processing; dental informatics.
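    Two of the preprocessing steps, MIME-part extraction and ordered noise filtering, can be sketched with the Python standard library. The filter patterns below are illustrative, not the 53 patterns identified in the study.

    # Sketch: extract the text/plain MIME part, then apply ordered noise filters.
    import re
    from email import message_from_string

    def plain_text_part(raw_message: str) -> str:
        msg = message_from_string(raw_message)
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                payload = part.get_payload(decode=True)
                return payload.decode(part.get_content_charset() or "utf-8", "replace")
        return ""

    # Filter order matters: strip quoted threads before signature blocks, etc.
    NOISE_FILTERS = [
        re.compile(r"^>.*$", re.M),               # quoted thread lines
        re.compile(r"(?s)--\s*\n.*\Z"),           # signature block
        re.compile(r"https?://\S+"),              # Web links
        re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),   # e-mail addresses
    ]

    def clean(text: str) -> str:
        for pat in NOISE_FILTERS:
            text = pat.sub("", text)
        return text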

    Structuring the Unstructured: Unlocking pharmacokinetic data from journals with Natural Language Processing

    The development of a new drug is an increasingly expensive and inefficient process. Many drug candidates are discarded due to pharmacokinetic (PK) complications detected at clinical phases. It is critical to accurately estimate the PK parameters of new drugs before they are tested in humans, since these parameters determine efficacy and safety outcomes. Preclinical predictions of PK parameters are largely based on prior knowledge from other compounds, but much of this potentially valuable data is currently locked in the format of scientific papers. With an ever-increasing amount of scientific literature, automated systems are essential to exploit this resource efficiently, and developing text mining systems that can structure the PK literature is critical to improving the drug development pipeline. This thesis studied the development and application of text mining resources to accelerate the curation of PK databases. Specifically, it addressed the development of novel corpora and suitable natural language processing architectures in the PK domain. The work focused on machine learning approaches that can model the high diversity of PK studies, parameter mentions, numerical measurements, units, and contextual information reported across the literature. Additionally, architectures and training approaches that can efficiently deal with the scarcity of annotated examples were explored. The chapters of this thesis tackle the development of suitable models and corpora to (1) retrieve PK documents, (2) recognise PK parameter mentions, (3) link PK entities to a knowledge base, and (4) extract relations between parameter mentions, estimated measurements, units, and other contextual information. Finally, the last chapter studied the feasibility of the whole extraction pipeline for accelerating tasks in drug development research. The results of this thesis demonstrate the potential of text mining approaches to automatically generate PK databases that can aid researchers in the field and ultimately accelerate the drug development pipeline. The thesis also contributes to biomedical natural language processing by developing suitable architectures and corpora for multiple tasks, tackling novel entities and relations within the PK domain.
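    As a simplified illustration of step (4), the sketch below pairs PK parameter mentions with numeric estimates and units using a hand-written pattern. The thesis itself uses machine-learned models; the lexicon and pattern here are hypothetical stand-ins.

    # Pair PK parameter mentions with nearby numeric values and units.
    import re

    PARAM = r"(clearance|half-life|AUC|Cmax|volume of distribution)"
    VALUE = r"(\d+(?:\.\d+)?)\s*(mL/min|h|ng\W?h/mL|ng/mL|L)"
    REL = re.compile(PARAM + r"\D{0,40}?" + VALUE, re.I)  # value must follow closely

    sentence = "The mean clearance was 12.5 mL/min and the half-life was 3.2 h."
    for param, value, unit in REL.findall(sentence):
        print(f"{param}: {value} {unit}")
    # clearance: 12.5 mL/min
    # half-life: 3.2 h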

    A general-purpose text classifier for Finnish patient record texts (Yleiskäyttöinen tekstinluokittelija suomenkielisille potilaskertomusteksteille)

    Medical texts are an underused source of data in clinical analytics. Extracting the relevant information from unstructured text is difficult, and while some tools are available, they are mostly targeted at English texts; the situation is worse for smaller languages, such as Finnish. In this work, we reviewed the text mining and natural language processing literature relevant to analyzing medical texts. Based on the results of this review, we created an algorithm for extracting information from patient record texts. The outcome of this thesis work is a text mining tool that operates through text classification and can detect medical conditions solely from medical texts. Use of the algorithm is limited by the need for a fairly large amount of training data.
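    A minimal sketch of classification-based condition detection, using TF-IDF features and a linear model: the training notes below are invented, and, as the abstract notes, a real deployment needs a much larger annotated set.

    # Text classification stand-in: TF-IDF features + logistic regression.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    notes = [
        "verensokeri koholla, aloitettu metformiini",    # condition present
        "potilaalla todettu tyypin 2 diabetes",          # condition present
        "polvikipu kaatumisen jälkeen",                  # condition absent
        "flunssan oireita, ei kuumetta",                 # condition absent
    ]
    labels = [1, 1, 0, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(notes, labels)
    print(clf.predict(["kontrollikäynti, diabetes hyvässä hoitotasapainossa"]))
    # expected [1], since "diabetes" appears in a positive training example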