61 research outputs found

    Our Binary World

    A poem about counting in binary, life, and love

    EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.

    Effective clinical text processing requires accurate extraction and representation of temporal expressions. Although multiple temporal information extraction models have been developed for clinical text, a comparable capability for extracting temporal expressions from eligibility criteria (e.g., for eligibility determination) is still needed. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation of temporal expressions in eligibility criteria, reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME on an additional random sample of 20 eligibility criteria with temporal expressions that had no overlap with the training data, yielding 92.7% (76/82) inter-coder agreement on sentence chunking and 72% (72/100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of temporal expressions in eligibility criteria.
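    A minimal sketch of what a frame-based temporal annotation might look like; the frame and slot names below are illustrative assumptions, not the published EliXR-TIME classes.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical frame: slot names are illustrative, not the published EliXR-TIME schema.
@dataclass
class TemporalFrame:
    event: str                              # clinical event the criterion constrains
    relation: str                           # temporal relation, e.g., "within", "before"
    anchor: str                             # reference time point, e.g., "enrollment"
    duration_value: Optional[int] = None    # numeric part of the time window
    duration_unit: Optional[str] = None     # unit of the time window

    def slots(self):
        """Return the filled slots as a dict, dropping empty ones."""
        return {k: v for k, v in self.__dict__.items() if v is not None}

# Example criterion: "myocardial infarction within 6 months prior to enrollment"
frame = TemporalFrame(event="myocardial infarction", relation="within",
                      anchor="enrollment", duration_value=6, duration_unit="months")
print(frame.slots())
```

    An annotator (human or automated) fills one such frame per temporal expression, so downstream eligibility-determination logic can query the slots rather than raw text.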

    Analysis of Eligibility Criteria Complexity in Clinical Trials

    Formal, computer-interpretable representations of eligibility criteria would allow computers to better support key clinical research and care use cases such as eligibility determination. To inform the development of such formal representations for eligibility criteria, we conducted this study to characterize and quantify the complexity present in 1000 eligibility criteria randomly selected from studies in ClinicalTrials.gov. We classified the criteria by their complexity, semantic patterns, clinical content, and data sources. Our analyses revealed significant semantic and clinical content variability. We found that 93% of criteria were comprehensible, with 85% of these criteria having significant semantic complexity, including 40% relying on temporal data. We also identified several domains of clinical content. Using the findings of the study as requirements for computer-interpretable representations of eligibility criteria, we discuss the challenges of creating such representations for use in clinical research and practice.

    ExaCT: automatic extraction of clinical trial characteristics from journal publications

    Background: Clinical trials are one of the most important sources of evidence for guiding evidence-based practice and the design of new trials. However, most of this information is available only in free text, e.g., in journal publications, which is labour-intensive to process for systematic reviews, meta-analyses, and other evidence synthesis studies. This paper presents an automatic information extraction system, called ExaCT, that assists users with locating and extracting key trial characteristics (e.g., eligibility criteria, sample size, drug dosage, primary outcomes) from full-text journal articles reporting on randomized controlled trials (RCTs).
    Methods: ExaCT consists of two parts: an information extraction (IE) engine that searches the article for the text fragments that best describe the trial characteristics, and a web browser-based user interface that allows human reviewers to assess and modify the suggested selections. The IE engine uses a statistical text classifier to locate those sentences that have the highest probability of describing a trial characteristic. The IE engine's second stage then applies simple rules to these sentences to extract text fragments containing the target answer. The same approach is used for all 21 trial characteristics selected for this study.
    Results: We evaluated ExaCT using 50 previously unseen articles describing RCTs. The text classifier (first stage) was able to recover 88% of relevant sentences among its top five candidates (top-5 recall), with the topmost candidate being relevant in 80% of cases (top-1 precision). Precision and recall of the extraction rules (second stage) were 93% and 91%, respectively. Together, the two stages of the extraction engine were able to provide (partially) correct solutions in 992 out of 1050 test tasks (94%), with a majority of these (696) representing fully correct and complete answers.
    Conclusions: Our experiments confirmed the applicability and efficacy of ExaCT. Furthermore, they demonstrated that combining a statistical method with 'weak' extraction rules can identify a variety of study characteristics. The system is flexible and can be extended to handle other characteristics and document types (e.g., study protocols).
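    The two-stage design can be sketched as follows; the toy keyword scorer stands in for the paper's statistical text classifier, and the regex is an invented example of a "weak" extraction rule, not ExaCT's actual rule set.

```python
import re

# Stage 1 (sketch): rank sentences by how likely they describe a characteristic.
# A toy keyword score stands in for ExaCT's statistical text classifier.
def rank_sentences(sentences, keywords, top_k=5):
    scored = sorted(sentences, key=lambda s: -sum(kw in s.lower() for kw in keywords))
    return scored[:top_k]

# Stage 2 (sketch): apply a simple "weak" rule to the top-ranked sentences
# to pull out the target fragment, here a sample size.
def extract_sample_size(sentences):
    for s in sentences:
        m = re.search(r"(\d[\d,]*)\s+(?:patients|participants)\s+were\s+randomi[sz]ed",
                      s, re.I)
        if m:
            return int(m.group(1).replace(",", ""))
    return None

article = [
    "The study protocol was approved by the ethics board.",
    "A total of 1,024 patients were randomized to treatment or placebo.",
    "Follow-up continued for 24 months.",
]
candidates = rank_sentences(article, keywords=["randomized", "patients"])
print(extract_sample_size(candidates))  # 1024
```

    Separating sentence ranking from fragment extraction is what lets the same pipeline cover all 21 characteristics: only the rule in stage 2 changes per characteristic.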

    The Human Studies Database Project: Federating Human Studies Design Data Using the Ontology of Clinical Research

    Human studies, encompassing interventional and observational studies, are the most important source of evidence for advancing our understanding of health, disease, and treatment options. To promote discovery, the design and results of these studies should be made machine-readable for large-scale data mining, synthesis, and re-analysis. The Human Studies Database Project aims to define and implement an informatics infrastructure for institutions to share the design of their human studies. We have developed the Ontology of Clinical Research (OCRe) to model study features such as design type, interventions, and outcomes to support scientific query and analysis. We are using OCRe as the reference semantics for federated data sharing of human studies over caGrid, and are piloting this implementation with several Clinical and Translational Science Award (CTSA) institutions.

    Ontology Mapping and Data Discovery for the Translational Investigator

    An integrated data repository (IDR) containing aggregations of clinical, biomedical, economic, administrative, and public health data is a key component of an overall translational research infrastructure. However, most available data repositories are designed using standard data warehouse architecture that employs arbitrary data encoding standards, making queries across disparate repositories difficult. In response to these shortcomings, we have designed a Health Ontology Mapper (HOM) that translates terminologies into formal data encoding standards without altering the underlying source data. We believe the HOM system promotes inter-institutional data sharing and research collaboration, and will ultimately lower the barrier to developing and using an IDR.
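    The core idea of translating terminologies at query time without touching source data can be sketched as below; the mapping table, site names, and LOINC code are made-up examples, not the actual HOM rules.

```python
# Sketch of ontology-mapper-style translation (mapping table is a made-up
# example, not the actual Health Ontology Mapper rules): local codes are
# rewritten to a standard terminology on the way out, leaving source records
# untouched.
LOCAL_TO_STANDARD = {
    ("site_a", "GLU-HI"): "LOINC:2345-7",        # serum glucose at site A
    ("site_b", "glucose_serum"): "LOINC:2345-7", # same concept, different local code
}

def translate(site, record):
    """Return a copy of the record with its local code mapped to the standard one."""
    mapped = dict(record)
    mapped["code"] = LOCAL_TO_STANDARD.get((site, record["code"]), record["code"])
    return mapped

site_a_row = {"code": "GLU-HI", "value": 182}
site_b_row = {"code": "glucose_serum", "value": 95}
rows = [translate("site_a", site_a_row), translate("site_b", site_b_row)]
assert site_a_row["code"] == "GLU-HI"   # underlying source data is not altered
print({r["code"] for r in rows})        # both rows now share one standard code
```

    Because the mapping happens in the query path rather than in the warehouse, each institution keeps its native encodings while cross-repository queries see one vocabulary.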

    The balance between IL-17 and IL-22 produced by liver-infiltrating T-helper cells critically controls NASH development in mice

    The mechanisms responsible for the evolution of steatosis towards NASH (non-alcoholic steatohepatitis) and fibrosis are not completely defined. In the present study we evaluated the role of CD4+ T-helper (Th) cells in this process. We analysed the infiltration of different subsets of CD4+ Th cells in C57BL/6 mice fed on a MCD (methionine choline-deficient) diet, which is a model reproducing all phases of human NASH progression. There was an increase in Th17 cells at the beginning of NASH development and at the NASH-fibrosis transition, whereas levels of Th22 cells peaked between the first and the second expansion of Th17 cells. An increase in the production of IL (interleukin)-6, TNFα (tumour necrosis factor α), TGFβ (transforming growth factor β) and CCL20 (CC chemokine ligand 20) accompanied the changes in Th17/Th22 cells. Livers of IL-17−/− mice were protected from NASH development and characterized by an extensive infiltration of Th22 cells. In vitro, IL-17 exacerbated the JNK (c-Jun N-terminal kinase)-dependent mouse hepatocyte lipotoxicity induced by palmitate. IL-22 prevented lipotoxicity through PI3K (phosphoinositide 3-kinase)-mediated inhibition of JNK, but did not play a protective role in the presence of IL-17, which up-regulated the PI3K/Akt inhibitor PTEN (phosphatase and tensin homologue deleted on chromosome 10). Consistently, livers of IL-17−/− mice fed on the MCD diet displayed decreased activation of JNK, reduced expression of PTEN and increased phosphorylation of Akt compared with livers of wild-type mice. Hepatic infiltration of Th17 cells is critical for NASH initiation and development of fibrosis in mice, and reflects an infiltration of Th22 cells. Th22 cells are protective in NASH, but only in the absence of IL-17. These data strongly support the potential for clinical application of IL-17 inhibitors, which could prevent NASH both by abolishing the lipotoxic action of IL-17 and by allowing IL-22-mediated protection.

    A randomized trial provided new evidence on the accuracy and efficiency of traditional vs. electronically annotated abstraction approaches in systematic reviews

    Objectives: Data Abstraction Assistant (DAA) is software for linking items abstracted into a data collection form for a systematic review to their locations in a study report. We conducted a randomized cross-over trial that compared DAA-facilitated single data abstraction plus verification ("DAA verification"), single data abstraction plus verification ("regular verification"), and independent dual data abstraction plus adjudication ("independent abstraction").
    Study Design and Setting: This study was an online randomized cross-over trial with 26 pairs of data abstractors. Each pair abstracted data from six articles, two per approach. Outcomes were the proportion of errors and the time taken.
    Results: The overall proportion of errors was 17% for DAA verification, 16% for regular verification, and 15% for independent abstraction. DAA verification was associated with higher odds of errors when compared with regular verification (adjusted odds ratio [OR] = 1.08; 95% confidence interval [CI]: 0.99–1.17) or independent abstraction (adjusted OR = 1.12; 95% CI: 1.03–1.22). For each article, DAA verification took 20 minutes (95% CI: 1–40) longer than regular verification, but 46 minutes (95% CI: 26–66) shorter than independent abstraction.
    Conclusion: Independent abstraction may be necessary only for complex data items. DAA provides an audit trail that is crucial for reproducible research.
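    As a back-of-envelope sanity check on the reported effect size (the paper reports model-adjusted ORs; this unadjusted calculation is only an approximation), an odds ratio computed directly from the error proportions comes out close to the adjusted value:

```python
# Unadjusted odds ratio from two proportions: OR = [p1/(1-p1)] / [p2/(1-p2)].
# The trial reports *adjusted* ORs; this is only a rough consistency check.
def odds_ratio(p1, p2):
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# DAA verification (17% errors) vs. regular verification (16% errors)
or_daa_vs_regular = odds_ratio(0.17, 0.16)
print(round(or_daa_vs_regular, 2))  # 1.08, matching the adjusted OR to two decimals
```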

    Multifrequency variability of the blazar AO 0235+164: the WEBT campaign in 2004-2005 and long-term SED analysis

    A huge multiwavelength campaign targeting the blazar AO 0235+164 was organized by the Whole Earth Blazar Telescope (WEBT) in 2003-2005 to study the variability properties of the source. Monitoring observations were carried out at cm and mm wavelengths, and in the near-IR and optical bands, while three pointings by the XMM-Newton satellite provided information on the X-ray and UV emission. We present the data acquired during the second observing season, 2004-2005, by 27 radio-to-optical telescopes. They reveal increased near-IR and optical activity with respect to the previous season. Increased variability is also found at the higher radio frequencies, down to 15 GHz, but not at the lower ones. The radio (and optical) outburst predicted to peak around February-March 2004, on the basis of the previously observed 5-6 yr quasi-periodicity, did not occur. The analysis of the optical light curves now reveals a longer characteristic time scale of 8 yr, which is also present in the radio data. The spectral energy distributions corresponding to the XMM-Newton observations performed during the WEBT campaign are compared with those pertaining to previous pointings of X-ray satellites. Bright, soft X-ray spectra can be described in terms of an extra component, which also appears when the source is faint, through a hard UV spectrum and a curvature of the X-ray spectrum. Finally, there might be a correlation between the X-ray and optical bright states, with a long time delay of about 5 yr, which would require a geometrical interpretation.