83,027 research outputs found

    Automatic Segmentation of Subfigure Image Panels for Multimodal Biomedical Document Retrieval

    Biomedical images are often referenced for clinical decision support (CDS), educational purposes, and research. The task of automatically finding the images in a scientific article that are most useful for determining relevance to a clinical situation is traditionally done using text and is quite challenging. We propose to improve this by associating image features from the entire image, and from relevant regions of interest, with biomedical concepts described in the figure caption or in the discussion in the article. However, images used in scientific article figures are often composed of multiple panels, where each sub-figure (panel) is referenced in the caption using alphanumeric labels, e.g. Figure 1(a), 2(c), etc. Separating individual panels from a multi-panel figure is therefore a necessary first step toward automatic annotation of images. In this work we present methods that extend and make more robust our previously reported efforts. Specifically, we address the limitation in segmenting figures that do not exhibit explicit inter-panel boundaries, e.g. illustrations, graphs, and charts. We present a novel hybrid clustering algorithm based on particle swarm optimization (PSO) with a fuzzy logic controller (FLC) to locate related figure components in such images. Results from our evaluation are very promising, with 93.64% panel detection accuracy for regular (non-illustration) figure images and 92.1% accuracy for illustration images. A computational complexity analysis also shows that PSO is an optimal approach with relatively low computation time. The accuracy of separating these two types of images is 98.11%, achieved using a decision tree.
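
    The abstract above gives no implementation details; the following is a minimal sketch of PSO-based clustering of figure-component centroids, with fixed inertia and acceleration coefficients in place of the paper's fuzzy logic controller. The pso_cluster helper, its parameter values, and the toy data are illustrative assumptions, not the authors' code.

        import numpy as np

        def pso_cluster(points, k=2, n_particles=20, n_iters=100,
                        w=0.72, c1=1.49, c2=1.49, seed=0):
            """Cluster 2-D component centroids into k panels with particle swarm optimization.

            Each particle encodes k candidate cluster centres; fitness is the summed
            squared distance of every point to its nearest centre (lower is better).
            """
            rng = np.random.default_rng(seed)
            lo, hi = points.min(), points.max()
            dim = k * points.shape[1]
            pos = rng.uniform(lo, hi, size=(n_particles, dim))
            vel = np.zeros_like(pos)

            def fitness(flat):
                centres = flat.reshape(k, -1)
                d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=2)
                return (d.min(axis=1) ** 2).sum()

            pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
            gbest, gbest_fit = pbest[pbest_fit.argmin()].copy(), pbest_fit.min()

            for _ in range(n_iters):
                r1, r2 = rng.random((2, n_particles, dim))
                vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
                pos = pos + vel
                fit = np.array([fitness(p) for p in pos])
                better = fit < pbest_fit
                pbest[better], pbest_fit[better] = pos[better], fit[better]
                if fit.min() < gbest_fit:
                    gbest, gbest_fit = pos[fit.argmin()].copy(), fit.min()

            centres = gbest.reshape(k, -1)
            labels = np.linalg.norm(points[:, None, :] - centres[None, :, :],
                                    axis=2).argmin(axis=1)
            return centres, labels

        # toy example: centroids of connected components from two well-separated panels
        pts = np.vstack([np.random.randn(30, 2), np.random.randn(30, 2) + 8.0])
        centres, labels = pso_cluster(pts, k=2)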

    External validation of a convolutional neural network for the automatic segmentation of intraprostatic tumor lesions on 68Ga-PSMA PET images

    Introduction: State-of-the-art artificial intelligence (AI) models have the potential to become a "one-stop shop" to improve diagnosis and prognosis in several oncological settings. The external validation of AI models on independent cohorts is essential to evaluate their generalization ability, and hence their potential utility in clinical practice. In this study we tested, on a large, separate cohort, a recently proposed state-of-the-art convolutional neural network for the automatic segmentation of intraprostatic cancer lesions on PSMA PET images. Methods: Eighty-five biopsy-proven prostate cancer patients who underwent Ga-68 PSMA PET for staging purposes were enrolled in this study. Images were acquired with either fully hybrid PET/MRI (N = 46) or PET/CT (N = 39); all participants showed at least one intraprostatic pathological finding on PET images, which was independently segmented by two Nuclear Medicine physicians. The trained model was available from the reference work, and data processing was done in agreement with it. Results: When compared to the manual contouring, the AI model yielded a median Dice score of 0.74, showing a moderately good performance. Results were robust to the modality used to acquire the images (PET/CT or PET/MRI) and to the ground-truth labels (no significant difference between the model's performance when compared to reader 1 or reader 2 manual contouring). Discussion: In conclusion, this AI model could be used to automatically segment intraprostatic cancer lesions for research purposes, for instance to define the volume of interest for radiomics or deep learning analysis. However, more robust performance is needed before AI-based decision support technologies built on it can be proposed for clinical practice.
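
    The agreement metric reported above is the Dice similarity coefficient between the predicted and manually contoured lesion masks; a minimal sketch of how it is typically computed on binary masks is shown below (illustrative code only, not the validated model or its pipeline).

        import numpy as np

        def dice_score(pred, truth, eps=1e-8):
            """Dice similarity coefficient between two binary segmentation masks."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            intersection = np.logical_and(pred, truth).sum()
            return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

        # toy example: two overlapping cubes standing in for voxelized PET lesions
        a = np.zeros((16, 16, 16), dtype=bool); a[4:10, 4:10, 4:10] = True
        b = np.zeros((16, 16, 16), dtype=bool); b[5:11, 5:11, 5:11] = True
        print(round(dice_score(a, b), 3))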

    PadChest: A large chest x-ray image dataset with multi-label annotated reports

    We present a labeled, large-scale, high-resolution chest x-ray dataset for the automated exploration of medical images along with their associated reports. This dataset includes more than 160,000 images obtained from 67,000 patients that were interpreted and reported by radiologists at San Juan Hospital (Spain) from 2009 to 2017, covering six different position views and additional information on image acquisition and patient demography. The reports were labeled with 174 different radiographic findings, 19 differential diagnoses and 104 anatomic locations, organized as a hierarchical taxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of these reports, 27% were manually annotated by trained physicians and the remaining set was labeled using a supervised method based on a recurrent neural network with attention mechanisms. The generated labels were then validated on an independent test set, achieving a 0.93 Micro-F1 score. To the best of our knowledge, this is one of the largest public chest x-ray databases suitable for training supervised models on radiographs, and the first to contain radiographic reports in Spanish. The PadChest dataset can be downloaded from http://bimcv.cipf.es/bimcv-projects/padchest/
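
    The 0.93 Micro-F1 reported for the automatically generated labels pools true and false positives over all labels before computing the score; a small sketch of micro-averaged F1 on multi-label indicator matrices is shown below (the matrices are illustrative values, not PadChest data).

        import numpy as np
        from sklearn.metrics import f1_score

        # toy multi-label annotations: rows = reports, columns = radiographic labels
        y_true = np.array([[1, 0, 1, 0],
                           [0, 1, 0, 0],
                           [1, 1, 0, 1]])
        y_pred = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0],
                           [1, 1, 0, 1]])

        # micro-averaging aggregates TP/FP/FN across every label before computing F1
        print(f1_score(y_true, y_pred, average="micro"))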

    A survey on utilization of data mining approaches for dermatological (skin) diseases prediction

    Due to recent technology advances, large volumes of medical data are being collected. These data contain valuable information, so data mining techniques can be used to extract useful patterns. This paper introduces data mining and its various techniques and surveys the available literature on medical data mining, with an emphasis on the application of data mining to skin diseases. A categorization is provided based on the different data mining techniques, and the utility of the various methodologies is highlighted. Generally, association mining is suitable for extracting rules and has been used especially in cancer diagnosis. Classification is a robust method in medical mining; we summarize its different uses in dermatology, where it is one of the most important methods for the diagnosis of erythemato-squamous diseases, with approaches including neural networks, genetic algorithms and fuzzy classification. Clustering is a useful method in medical image mining: its purpose is to find a structure for the given data by identifying similarities according to the data characteristics, and it has some applications in dermatology. Besides introducing the different mining methods, we investigate some of the challenges that exist in mining skin data.
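
    As a concrete illustration of the classification setting discussed above, the sketch below cross-validates a decision tree on a placeholder feature matrix shaped like the widely used UCI Dermatology (erythemato-squamous diseases) dataset; the data are randomly generated stand-ins, not real patient records, and the decision tree is only one of the many classifiers the survey covers.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.tree import DecisionTreeClassifier

        # placeholder: 366 records, 34 clinical/histopathological attributes scored 0-3,
        # six erythemato-squamous disease classes (synthetic stand-in data)
        rng = np.random.default_rng(0)
        X = rng.integers(0, 4, size=(366, 34))
        y = rng.integers(0, 6, size=366)

        clf = DecisionTreeClassifier(max_depth=5, random_state=0)
        print(cross_val_score(clf, X, y, cv=5).mean())   # chance-level on random labels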

    Precise Proximal Femur Fracture Classification for Interactive Training and Surgical Planning

    We demonstrate the feasibility of a fully automatic computer-aided diagnosis (CAD) tool, based on deep learning, that localizes and classifies proximal femur fractures on X-ray images according to the AO classification. The proposed framework aims to improve patient treatment planning and to support the training of trauma surgery residents. A database of 1347 clinical radiographic studies was collected. Radiologists and trauma surgeons annotated all fractures with bounding boxes and provided a classification according to the AO standard. The proposed CAD tool reaches an F1-score of 87% and an AUC of 0.95 when classifying radiographs into types "A", "B" and "not-fractured"; when classifying fractured versus not-fractured cases, these improve to 94% and 0.98, respectively. Prior localization of the fracture improves performance with respect to full-image classification, and 100% of the predicted centers of the region of interest are contained in the manually provided bounding boxes. The system retrieves on average 9 relevant images (from the same class) out of 10 cases. Our CAD scheme localizes, detects and further classifies proximal femur fractures, achieving results comparable to expert-level and state-of-the-art performance. Our auxiliary localization model was highly accurate in predicting the region of interest in the radiograph. We further investigated several verification strategies for its adoption into the daily clinical routine, and present a sensitivity analysis of the ROI size and image retrieval as a clinical use case. Comment: Accepted at IPCAI 2020 and IJCAR
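
    One of the verification checks reported above is that every predicted region-of-interest center falls inside a manually annotated fracture bounding box; a minimal sketch of that containment check is given below (the Box class, coordinates, and center_hit_rate helper are illustrative assumptions, not the authors' code).

        from dataclasses import dataclass

        @dataclass
        class Box:
            """Axis-aligned box in pixel coordinates: (x1, y1) top-left, (x2, y2) bottom-right."""
            x1: float
            y1: float
            x2: float
            y2: float

            def contains(self, x: float, y: float) -> bool:
                return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

        def center_hit_rate(pred_centers, gt_boxes):
            """Fraction of predicted ROI centers that fall inside the annotated fracture boxes."""
            hits = sum(box.contains(x, y) for (x, y), box in zip(pred_centers, gt_boxes))
            return hits / len(gt_boxes)

        # toy example: two radiographs with annotated boxes and predicted ROI centers
        boxes = [Box(100, 120, 260, 300), Box(50, 60, 180, 220)]
        centers = [(180, 210), (90, 140)]
        print(center_hit_rate(centers, boxes))   # 1.0 when every center lies inside its box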

    User-centered visual analysis using a hybrid reasoning architecture for intensive care units

    One problem with Intensive Care Unit information systems is that, in some cases, they produce a very dense display of data. To ensure the overview and readability of the increasing volumes of data, special features are required (e.g., data prioritization, clustering, and selection mechanisms), together with analytical methods (e.g., temporal data abstraction, principal component analysis, and detection of events). This paper addresses the problem of improving the integration of the visual and analytical methods applied to medical monitoring systems. We present a knowledge- and machine learning-based approach to support the knowledge discovery process with appropriate analytical and visual methods. Its potential benefit lies in the development of user interfaces for intelligent monitors that can assist with the detection and explanation of new, potentially threatening medical events. The proposed hybrid reasoning architecture provides an interactive graphical user interface for adjusting the parameters of the analytical methods based on the user's task at hand. The action sequences performed by the user on the graphical user interface are consolidated in a dynamic knowledge base with specific hybrid reasoning that integrates symbolic and connectionist approaches. These sequences of expert knowledge acquisition can make it easier for knowledge to emerge during similar experiences and can positively affect the monitoring of critical situations. The provided graphical user interface, which incorporates user-centered visual analysis, facilitates a natural and effective representation of clinical information for patient care.
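
    The abstract mentions principal component analysis and event detection among the analytical methods; the sketch below flags unusual episodes in a synthetic vital-sign stream by reconstruction error after PCA projection (synthetic data and an assumed threshold, not the authors' architecture).

        import numpy as np
        from sklearn.decomposition import PCA

        # synthetic monitoring stream: heart rate, mean arterial pressure, SpO2 per minute
        rng = np.random.default_rng(1)
        signals = rng.normal(loc=[80.0, 90.0, 97.0], scale=[3.0, 4.0, 1.0], size=(600, 3))
        signals[400:410] += [30.0, -25.0, -6.0]   # injected deterioration episode

        # fit PCA on a "stable" reference window, then flag samples whose
        # reconstruction error rises far above the reference level
        pca = PCA(n_components=2).fit(signals[:300])
        recon = pca.inverse_transform(pca.transform(signals))
        error = np.linalg.norm(signals - recon, axis=1)
        threshold = error[:300].mean() + 4 * error[:300].std()
        print(np.where(error > threshold)[0])     # indices around the injected episode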

    A review of research into the development of radiologic expertise: Implications for computer-based training

    Rationale and Objectives. Studies of radiologic error reveal high levels of variation between radiologists. Although it is known that experts outperform novices, we have only limited knowledge about radiologic expertise and how it is acquired. Materials and Methods. This review identifies three areas of research: studies of the impact of experience and related factors on the accuracy of decision-making; studies of the organization of expert knowledge; and studies of radiologists' perceptual processes. Results and Conclusion. Interpreting evidence from these three paradigms in the light of recent research into perceptual learning and studies of the visual pathway has a number of implications for the training of radiologists, particularly for the design of computer-based learning programs that can illustrate the similarities and differences between diagnoses, give access to large numbers of cases, and help identify weaknesses in the way trainees build up a global representation from fixated regions.