
    Feedback-Driven Radiology Exam Report Retrieval with Semantics

    Clinical documents are vital resources that help radiologists better understand patient history. They can complement the often brief reasons for exams provided by physicians, enabling more informed diagnoses. Given the large number of study exams radiologists must perform daily, it is too time-consuming for them to sift through each patient's clinical documents. It is therefore important to provide a capability that presents contextually relevant clinical documents while satisfying the diverse information needs of radiologists from different specialties. In this work, we propose a knowledge-based semantic similarity approach that uses domain-specific relationships such as part-of along with taxonomic relationships such as is-a to identify relevant radiology exam records. Our approach also incorporates explicit relevance feedback to personalize results to each radiologist's information needs. We evaluated our approach on a corpus of 6,265 radiology exam reports through study sessions with radiologists and demonstrated that its retrieval performance improves on the baseline by 5%. We further computed intra-class and inter-class similarities on a subset of 2,384 reports spanning 10 exam codes. Our results show that intra-class similarities are always higher than inter-class similarities, and our approach obtained a 6% improvement in intra-class similarity over the baseline. These results suggest that combining domain-specific relationships with relevance feedback provides significant value in improving the accuracy of radiology exam report retrieval.
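
    The abstract gives no implementation details, but the following minimal sketch (all concepts, edge costs, and weights are hypothetical assumptions, not the authors' system) illustrates the two ideas it names: measuring concept similarity over a small ontology that mixes is-a and part-of relationships, and re-weighting a concept-based query from explicit relevance feedback in Rocchio style.

    import heapq
    from collections import defaultdict

    # Hypothetical mini-ontology: adjacency list of (neighbor, relation, edge cost).
    ONTOLOGY = {
        "thorax":     [("chest_xray", "is-a", 1.0), ("lung", "part-of", 0.5)],
        "chest_xray": [("thorax", "is-a", 1.0)],
        "lung":       [("thorax", "part-of", 0.5), ("pneumonia", "is-a", 1.0)],
        "pneumonia":  [("lung", "is-a", 1.0)],
    }

    def concept_distance(a, b):
        """Cheapest path cost between two concepts over is-a and part-of edges."""
        dist, heap = {a: 0.0}, [(0.0, a)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == b:
                return d
            for nbr, _rel, cost in ONTOLOGY.get(node, []):
                nd = d + cost
                if nd < dist.get(nbr, float("inf")):
                    dist[nbr] = nd
                    heapq.heappush(heap, (nd, nbr))
        return float("inf")

    def concept_similarity(a, b):
        """Map a path cost into a similarity score in (0, 1]."""
        return 1.0 / (1.0 + concept_distance(a, b))

    def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
        """Rocchio-style update of a concept-weight query vector from explicit feedback."""
        updated = defaultdict(float)
        for c, w in query.items():
            updated[c] += alpha * w
        for doc in relevant:
            for c, w in doc.items():
                updated[c] += beta * w / max(len(relevant), 1)
        for doc in nonrelevant:
            for c, w in doc.items():
                updated[c] -= gamma * w / max(len(nonrelevant), 1)
        return {c: w for c, w in updated.items() if w > 0}

    # Example: similarity between "chest_xray" and "pneumonia" via a part-of + is-a path.
    print(concept_similarity("chest_xray", "pneumonia"))

    In a full system the ontology would come from a medical terminology (for example RadLex or SNOMED CT), and a report's score would aggregate many pairwise concept similarities; both choices are outside what the abstract specifies.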

    Semantic Annotation of Documents: A Comparative Study

    Semantic annotation, one of the applied aspects of the semantic web, has been adopted by researchers from different communities as a key means of improving information search and retrieval by enriching document content. However, researchers face challenges concerning both the quality and the relevance of the semantic annotations attached to a document with respect to its content and semantics, as well as challenges in automating the process so as to ensure an effective system for information indexing and retrieval. In this article, we introduce the concept of semantic annotation by presenting a state of the art that includes definitions, features, and a classification of annotation systems. We cite systems and approaches proposed in the field and study some existing annotation tools. This study also pinpoints various problems and limitations related to annotation in order to inform solutions in our future work.

    Pragmatic Radiology Report Generation

    When pneumonia is not found on a chest X-ray, should the report describe this negative observation or omit it? We argue that this question cannot be answered from the X-ray alone and requires a pragmatic perspective, which captures the communicative goal that radiology reports serve between radiologists and patients. However, the standard image-to-text formulation for radiology report generation fails to incorporate such pragmatic intents. Following this pragmatic perspective, we demonstrate that the indication, which describes why a patient comes for an X-ray, drives the mentions of negative observations, and we introduce indications as additional input to report generation. With respect to the output, we develop a framework to identify information uninferable from the image as a source of model hallucinations, and limit it by cleaning ground-truth reports. Finally, we use indications and cleaned ground-truth reports to develop pragmatic models, and show that they outperform existing methods not only on new pragmatics-inspired metrics (+4.3 Negative F1) but also on standard metrics (+6.3 Positive F1 and +11.0 BLEU-2). Comment: 18 pages, 1 figure, 18 tables. Code at https://github.com/ChicagoHAI/llm_radiolog
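
    As a rough illustration of the pragmatics-inspired metric mentioned above, the sketch below computes an F1 score over findings explicitly reported as negative, given per-finding labels; the paper's exact labeling scheme and metric definition may differ, and the toy labels are invented.

    def negative_f1(pred_labels, gold_labels):
        """F1 over findings explicitly reported as negative (e.g. 'no pneumonia')."""
        tp = sum(1 for f, g in gold_labels.items()
                 if g == "negative" and pred_labels.get(f) == "negative")
        pred_neg = sum(1 for v in pred_labels.values() if v == "negative")
        gold_neg = sum(1 for v in gold_labels.values() if v == "negative")
        precision = tp / pred_neg if pred_neg else 0.0
        recall = tp / gold_neg if gold_neg else 0.0
        denom = precision + recall
        return 2 * precision * recall / denom if denom else 0.0

    # Toy example: the generated report says "no pneumonia" and "no effusion";
    # the reference only marks pneumonia as an explicit negative observation.
    pred = {"pneumonia": "negative", "effusion": "negative", "edema": "absent"}
    gold = {"pneumonia": "negative", "effusion": "absent", "edema": "absent"}
    print(negative_f1(pred, gold))  # precision 0.5, recall 1.0 -> F1 ~ 0.67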

    Presurgical language fMRI: Mapping of six critical regions.

    Language mapping is a key goal in neurosurgical planning. fMRI mapping typically proceeds with a focus on Broca's and Wernicke's areas, although multiple other language-critical areas are now well known. We evaluated whether clinicians could use a novel approach, including clinician-driven individualized thresholding, to reliably identify six language regions: Broca's Area, Wernicke's Area (inferior, superior), Exner's Area, Supplementary Speech Area, Angular Gyrus, and Basal Temporal Language Area. We studied 22 epilepsy and tumor patients who received Wada and fMRI (age 36.4 [12.5]; Wada language left/right/mixed in 18/3/1). fMRI tasks (two × three tasks) were analyzed by two clinical neuropsychologists, who flexibly thresholded and combined them to identify the six regions. The resulting maps were compared to fixed-threshold maps. Clinicians generated maps that overlapped significantly, and were highly consistent, when at least one task came from the same set. Cases diverged when clinicians prioritized different language regions or addressed noise differently. Language laterality closely mirrored Wada data (85% accuracy). Activation consistent with all six language regions was consistently identified. In blind review, three external, independent clinicians rated the individualized fMRI language maps as superior to fixed-threshold maps, identified the majority of regions significantly more frequently, and judged language laterality to mirror Wada lateralization more often. These data provide initial validation of a novel, clinician-based approach to localizing language cortex. They also demonstrate that clinical fMRI is superior when analyzed by an experienced clinician, and that when fMRI data are of low quality, judgments of laterality are unreliable and should be withheld. Hum Brain Mapp 38:4239-4255, 2017. © 2017 Wiley Periodicals, Inc.
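
    The study's clinician-driven thresholding cannot be reduced to a formula, but the laterality judgment it compares against Wada testing is commonly summarized with a laterality index LI = (L - R) / (L + R) over suprathreshold voxels. The sketch below is a generic illustration of that index using an assumed z-threshold, an assumed cut-off, and toy data, not the authors' analysis pipeline.

    import numpy as np

    def laterality_index(z_left, z_right, z_thresh=3.1):
        """LI in [-1, 1]: positive = left-lateralized, negative = right-lateralized."""
        left = int(np.sum(z_left > z_thresh))
        right = int(np.sum(z_right > z_thresh))
        if left + right == 0:
            return None  # low-quality data: withhold a laterality judgment
        return (left - right) / (left + right)

    def classify(li, cutoff=0.2):
        if li is None:
            return "indeterminate"
        return "left" if li > cutoff else "right" if li < -cutoff else "mixed"

    # Toy hemispheric z-values standing in for real activation maps.
    rng = np.random.default_rng(0)
    li = laterality_index(rng.normal(1.5, 1, 10_000), rng.normal(0.5, 1, 10_000))
    print(li, classify(li))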

    A SOA-Based Platform to Support Clinical Data Sharing

    The eSource Data Interchange Group, part of the Clinical Data Interchange Standards Consortium, proposed five scenarios to guide stakeholders in the development of solutions for the capture of eSource data. The fifth scenario was subdivided into four tiers to adapt the functionality of electronic health records to support clinical research. In order to develop a system belonging to the "Interoperable" Tier, the authors adopted the service-oriented architecture paradigm to support technical interoperability, Health Level Seven Version 3 messages combined with the LOINC (Logical Observation Identifiers Names and Codes) vocabulary to ensure semantic interoperability, and Healthcare Services Specification Project standards to provide process interoperability. The developed architecture enhances the integration between patient-care practice and medical research, allowing clinical data sharing between two hospital information systems and four clinical data management systems/clinical registries. The core is a set of standardized cloud services connected through standardized interfaces to client applications. The system was approved by medical staff, since it reduces the workload for the management of clinical trials. Although this architecture can realize the "Interoperable" Tier, the current solution actually covers the "Connected" Tier, due to local hospital policy restrictions.
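
    Real HL7 Version 3 messages are XML documents with far richer structure, so the sketch below only hints at the semantic-interoperability layer: a simplified, LOINC-coded observation payload of the kind a standardized cloud service might accept. The patient identifier, field names, and the submission step are hypothetical.

    import json

    observation = {
        "patient_id": "example-0001",  # hypothetical identifier
        "code": {
            "system": "LOINC",
            "code": "718-7",  # Hemoglobin [Mass/volume] in Blood
            "display": "Hemoglobin [Mass/volume] in Blood",
        },
        "value": 13.2,
        "unit": "g/dL",
        "effective_time": "2017-05-04T10:30:00Z",
    }

    # A client application would serialize this and submit it through one of the
    # standardized service interfaces (e.g. an observation-submission operation).
    print(json.dumps(observation, indent=2))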

    Medical image retrieval for augmenting diagnostic radiology

    Even though the use of medical imaging to diagnose patients is ubiquitous in clinical settings, interpreting those images remains challenging for radiologists. Many factors make this interpretation task difficult; one is that medical images sometimes present subtle clues that are nevertheless crucial for diagnosis. Worse, similar clues can indicate multiple diseases, making it challenging to reach a definitive diagnosis. To help radiologists quickly and accurately interpret medical images, there is a need for a tool that can augment their diagnostic procedures and increase efficiency in their daily workflow. A general-purpose medical image retrieval system can be such a tool, as it allows them to search and retrieve similar, already diagnosed cases and make comparative analyses that complement their diagnostic decisions. In this thesis, we contribute to developing such a system by proposing approaches to be integrated as modules of a single system, enabling it to handle various information needs of radiologists and thus augment their diagnostic processes during the interpretation of medical images. We mainly studied the following retrieval approaches to handle radiologists' different information needs: i) Retrieval Based on Contents; ii) Retrieval Based on Contents, Patients' Demographics, and Disease Predictions; and iii) Retrieval Based on Contents and Radiologists' Text Descriptions. For the first study, we aimed to find an effective feature representation method to distinguish medical images by their semantics and modalities. To do so, we experimented with different representation techniques based on handcrafted methods (mainly texture features) and deep learning (deep features). Based on the experimental results, we propose an effective feature representation approach and deep learning architectures for learning and extracting medical image contents. For the second study, we present a multi-faceted method that complements image contents with patients' demographics and deep learning-based disease predictions, enabling it to accurately identify similar cases that fit the clinical context radiologists seek. For the last study, we propose a guided search method that integrates an image with a radiologist's text description to guide the retrieval process. This method ensures that the retrieved images are suitable for the comparative analysis used to confirm or rule out initial diagnoses (the differential diagnosis procedure). Furthermore, our method is based on a deep metric learning technique and outperforms traditional content-based approaches that rely only on image features and thus sometimes retrieve irrelevant images.
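
    The multi-faceted retrieval idea from the second study can be sketched as a weighted combination of facet scores. The snippet below assumes pre-computed deep image embeddings and disease-prediction vectors, and the weights, feature sizes, and demographic scoring are invented for illustration; it is not the thesis' actual models.

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def demographic_match(q, d):
        """Toy score: reward same sex and small age difference."""
        sex = 1.0 if q["sex"] == d["sex"] else 0.0
        age = np.exp(-abs(q["age"] - d["age"]) / 20.0)
        return 0.5 * sex + 0.5 * age

    def retrieve(query, corpus, w_img=0.6, w_demo=0.2, w_pred=0.2, k=5):
        """Rank corpus cases by a weighted sum of image, demographic, and prediction similarity."""
        scored = []
        for case in corpus:
            score = (w_img * cosine(query["emb"], case["emb"])
                     + w_demo * demographic_match(query["demo"], case["demo"])
                     + w_pred * cosine(query["pred"], case["pred"]))
            scored.append((score, case["id"]))
        return sorted(scored, reverse=True)[:k]

    # Toy corpus: random vectors stand in for CNN embeddings and disease predictions.
    rng = np.random.default_rng(1)
    corpus = [{"id": i, "emb": rng.normal(size=128),
               "pred": rng.random(14), "demo": {"sex": "F", "age": 40 + i}}
              for i in range(20)]
    query = {"emb": rng.normal(size=128), "pred": rng.random(14),
             "demo": {"sex": "F", "age": 52}}
    print(retrieve(query, corpus, k=3))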

    Extracting a stroke phenotype risk factor from Veteran Health Administration clinical reports: an information content analysis

    In the United States, 795,000 people suffer strokes each year; 10-15 % of these strokes can be attributed to stenosis caused by plaque in the carotid artery, a major stroke phenotype risk factor. Studies comparing treatments for the management of asymptom

    Speech to Chart: Speech Recognition and Natural Language Processing for Dental Charting

    Typically, when using practice management systems (PMSs), dentists perform data entry by utilizing an assistant as a transcriptionist. This prevents dentists from interacting directly with the PMSs. Speech recognition interfaces can provide a solution to this problem. However, the existing speech interfaces of PMSs are cumbersome and poorly designed, and in dentistry there is a desire and need for a usable natural language interface for clinical data entry. Objectives: (1) evaluate the efficiency, effectiveness, and user satisfaction of the speech interfaces of four dental PMSs; (2) develop and evaluate a speech-to-chart prototype for charting naturally spoken dental exams. Methods: We evaluated the speech interfaces of four leading PMSs. We manually reviewed the capabilities of each system and then had 18 dental students chart 18 findings via speech in each of the systems, measuring time, errors, and user satisfaction. Next, we developed and evaluated a speech-to-chart prototype containing the following components: a speech recognizer; a post-processor for error correction; an NLP application (ONYX); and a graphical chart generator. We evaluated the accuracy of the speech recognizer and the post-processor and then performed a summative evaluation of the entire system. Our prototype charted 12 hard tissue exams, which we compared to reference standard exams charted by two dentists. Results: Of the four systems, only two allowed both hard tissue and periodontal charting via speech. All interfaces required using specific commands directly comparable to using a mouse. The average time to chart the nine hard tissue findings was 2:48 and the nine periodontal findings 2:06, with an average of 7.5 errors per exam. We created a speech-to-chart prototype that supports natural dictation with no structured commands. On manually transcribed exams, the system performed with an average 80% accuracy. The average time to chart a single hard tissue finding with the prototype was 7.3 seconds. An improved discourse processor would greatly enhance the prototype's accuracy. Conclusions: The speech interfaces of existing PMSs are cumbersome, require specific speech commands, and produce several errors per exam. We successfully created a speech-to-chart prototype that charts hard tissue findings from naturally spoken dental exams.
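
    One stage of the prototype pipeline described above, the post-processor for error correction, can be illustrated with a small rule-based substitution table applied to recognizer output before NLP. The confusions, the sample utterance, and the crude accuracy measure below are invented for illustration; the ONYX-based NLP stage is not reproduced.

    import re

    # Hypothetical recognizer confusions one might observe in dictated dental exams.
    SUBSTITUTIONS = {
        r"\btooth number (\w+)\b": r"tooth #\1",
        r"\bmedial\b": "mesial",      # plausible mis-recognition of a tooth surface
        r"\bo cluesal\b": "occlusal",
        r"\bamalgum\b": "amalgam",
    }

    def post_process(utterance: str) -> str:
        """Apply the substitution rules to raw recognizer output."""
        text = utterance.lower()
        for pattern, repl in SUBSTITUTIONS.items():
            text = re.sub(pattern, repl, text)
        return text

    def token_accuracy(hypothesis: str, reference: str) -> float:
        """Crude position-wise token accuracy against a reference-standard charting."""
        h, r = hypothesis.split(), reference.split()
        matches = sum(1 for a, b in zip(h, r) if a == b)
        return matches / max(len(r), 1)

    raw = "tooth number fourteen has an o cluesal amalgum restoration on the medial surface"
    fixed = post_process(raw)
    print(fixed)
    print(token_accuracy(fixed, "tooth #fourteen has an occlusal amalgam restoration on the mesial surface"))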