
    Knowledge-based best of breed approach for automated detection of clinical events based on German free text digital hospital discharge letters

    OBJECTIVES: The secondary use of medical data contained in electronic medical records, such as hospital discharge letters, is a valuable resource for improving clinical care (e.g., medication safety) and for research. However, the automated processing and analysis of medical free text still poses a major challenge to available natural language processing (NLP) systems. The aim of this study was to implement a knowledge-based best-of-breed approach combining a terminology server with an integrated ontology, an NLP pipeline, and a rules engine. METHODS: We tested the performance of this approach in a use case. The clinical event of interest was the drug-disease interaction "proton-pump inhibitor [PPI] use and osteoporosis". Cases were to be identified from free-text digital discharge letters as the source of information. Automated detection was validated against a gold standard. RESULTS: Precision for the recognition of osteoporosis was 94.19% and recall was 97.45%. PPIs were detected with 100% precision and 97.97% recall. The F-score for detection of the given drug-disease interaction was 96.13%. CONCLUSION: We showed that our approach of combining an NLP pipeline, a terminology server, and a rules engine for the automated detection of clinical events such as drug-disease interactions from free-text digital hospital discharge letters was effective. There is considerable potential for implementation in clinical and research contexts, as this approach enables the analysis of very large numbers of medical free-text documents within a short time.
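The reported F-score follows directly from the harmonic mean of precision and recall; a minimal sketch (not the authors' code) using the figures above:

```python
def f1_score(precision, recall):
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reported osteoporosis results: precision 94.19%, recall 97.45%
print(round(f1_score(0.9419, 0.9745), 4))  # -> 0.9579
```

The same formula applied to the PPI results (precision 1.0, recall 0.9797) gives roughly 0.99, consistent with the combined drug-disease interaction F-score of 96.13% reported above.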

    Deep Learning for Osteoporosis Classification Using Hip Radiographs and Patient Clinical Covariates

    This study considers the use of deep learning to diagnose osteoporosis from hip radiographs, and whether adding clinical data improves diagnostic performance over images alone. For objective labeling, we collected a dataset containing 1131 images from patients who underwent both skeletal bone mineral density measurement and hip radiography at a single general hospital between 2014 and 2019. Osteoporosis was assessed from the hip radiographs using five convolutional neural network (CNN) models. We also investigated ensemble models with clinical covariates added to each CNN. The accuracy, precision, recall, specificity, negative predictive value (NPV), F1 score, and area under the curve (AUC) were calculated for each network. In the evaluation of the five CNN models using only hip radiographs, GoogLeNet and EfficientNet b3 exhibited the best accuracy, precision, and specificity. Among the five ensemble models, EfficientNet b3 exhibited the best accuracy, recall, NPV, F1 score, and AUC when patient variables were included. The CNN models diagnosed osteoporosis from hip radiographs with high accuracy, and their performance improved further with the addition of clinical covariates from patient records.
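One common way to add clinical covariates to a CNN is late fusion: the image probability is combined with a probability derived from the covariates. A minimal sketch, where the logistic weights, the choice of covariates, and the fusion weight are all illustrative assumptions and not taken from the study:

```python
import math

def covariate_prob(age, bmi, w_age=0.05, w_bmi=-0.10, bias=-3.0):
    """Hypothetical logistic model on patient covariates (illustrative weights)."""
    z = w_age * age + w_bmi * bmi + bias
    return 1.0 / (1.0 + math.exp(-z))

def ensemble_prob(cnn_prob, age, bmi, alpha=0.7):
    """Late fusion: weighted average of image and covariate predictions."""
    return alpha * cnn_prob + (1.0 - alpha) * covariate_prob(age, bmi)

# Toy usage: CNN says 0.80; covariates shift the final probability
print(ensemble_prob(0.80, age=75, bmi=22.0))
```

With alpha set to 1.0 the ensemble reduces to the image-only model, which makes the contribution of the covariates easy to isolate.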

    Modular Neural Networks for Osteoporosis Detection in Mandibular Cone-Beam Computed Tomography Scans

    © 2023 by the authors. In this technical note, we examine the capabilities of deep convolutional neural networks (DCNNs) for diagnosing osteoporosis from cone-beam computed tomography (CBCT) scans of the mandible. The evaluation was conducted on mandibular CBCT images of 188 patients using DCNN models built on the ResNet-101 framework. We adopted a segmented three-phase method to assess osteoporosis: Stage 1 identified mandibular bone slices, Stage 2 pinpointed the coordinates for mandibular bone cross-sectional views, and Stage 3 computed the mandibular bone's thickness, highlighting osteoporotic variances. The procedure showed efficacy in osteoporosis detection from CBCT scans: Stage 1 achieved 98.85% training accuracy, Stage 2 minimized the L1 loss to 1.02 pixels, and the final stage's bone-thickness computation algorithm reported a mean squared error of 0.8377. These findings underline the significant potential of AI in osteoporosis identification and its promise for enhanced medical care. The compartmentalized method supports more robust DCNN training and greater model transparency. Moreover, the outcomes illustrate the efficacy of a modular transfer-learning method for osteoporosis detection, even when relying on limited mandibular CBCT datasets. The methodology is accompanied by source code available on GitLab. Peer reviewed.
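The three-stage design can be read as a chain of independent modules, each testable on its own. A schematic sketch, where the function names, the pass-through stubs, and the thickness heuristic are illustrative assumptions, not the authors' implementation:

```python
def stage1_is_mandible_slice(slice_score, threshold=0.5):
    """Stage 1: keep slices a classifier scores as mandibular bone."""
    return slice_score >= threshold

def stage2_cross_section_coords(pred_xy):
    """Stage 2: pass through regressed (x, y) coordinates for the cross-sectional view."""
    return pred_xy

def stage3_thickness_mm(bone_mask_row, pixel_spacing_mm=0.3):
    """Stage 3: bone thickness as the count of bone pixels times pixel spacing."""
    return sum(bone_mask_row) * pixel_spacing_mm

# Chain the modules on toy inputs
if stage1_is_mandible_slice(0.9):
    x, y = stage2_cross_section_coords((120, 88))
    print(stage3_thickness_mm([0, 1, 1, 1, 1, 0]))  # ~1.2 mm (4 bone pixels * 0.3 mm)
```

Keeping the stages separate is what allows each to be trained and validated against its own metric (accuracy, L1 loss, MSE), as the abstract reports.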

    The reliability of cephalometric tracing using AI

    Introduction: The objective of this study is to compare manual cephalometric analysis with automatic analysis by artificial intelligence in order to confirm the reliability of the latter. Our research hypothesis is that the manual technique is the more reliable of the two methods and is still considered the gold standard. Method: A total of 99 lateral cephalometric radiographs were collected. Manual technique (MT) and automatic localization by artificial intelligence (AI) tracings were performed for all radiographs. The localization of 29 commonly used landmarks was compared between the two groups. Mean radial error (MRE) and a successful detection rate (SDR) at 2 mm were used to compare the groups. AudaxCeph software version 6.2.57.4225 (Audax d.o.o., Ljubljana, Slovenia) was used for both the manual and the AI analysis. Results: The MRE and SDR for the inter-examiner reliability test were 0.87 ± 0.61 mm and 95%, respectively. For the comparison between the manual technique (MT) and AI landmarking, the MRE and SDR for all landmarks were 1.48 ± 1.42 mm and 78%, respectively. When dental landmarks were excluded, the MRE decreased to 1.33 ± 1.39 mm and the SDR increased to 84%. When only hard-tissue landmarks were included (excluding soft-tissue and dental points), the MRE decreased further to 1.25 ± 1.09 mm and the SDR increased to 85%. When only soft-tissue landmarks were included, the MRE increased to 1.68 ± 1.89 mm and the SDR decreased to 78%. Conclusion: The software performed similarly to what has previously been reported in the literature for software using an analogous modeling framework. Comparing the software's landmarking to manual landmarking, our results reveal that manual landmarking yielded higher accuracy. The software performed very well for hard-tissue points, but its accuracy decreased for soft-tissue and dental points. We conclude that this technology shows great promise for application in clinical settings under the doctor's supervision.
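The two metrics used above are straightforward to compute from paired landmark coordinates; a minimal sketch (not the AudaxCeph implementation), assuming coordinates are already calibrated in millimetres:

```python
import math

def mre_and_sdr(predicted, truth, threshold_mm=2.0):
    """Mean radial error (mm) and successful detection rate at a 2 mm threshold."""
    errors = [math.dist(p, t) for p, t in zip(predicted, truth)]
    mre = sum(errors) / len(errors)
    sdr = sum(e <= threshold_mm for e in errors) / len(errors)
    return mre, sdr

# Toy example: two landmarks, one exact hit and one 5 mm miss
print(mre_and_sdr([(0, 0), (3, 4)], [(0, 0), (0, 0)]))  # -> (2.5, 0.5)
```

The SDR is simply the fraction of landmarks whose radial error falls within the threshold, which is why excluding error-prone dental landmarks raises it.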

    Effect of Patient Clinical Variables in Osteoporosis Classification Using Hip X-rays in Deep Learning Analysis

    Background and Objectives: A few deep learning studies have reported that combining image features with patient variables enhances identification accuracy compared with image-only models. However, previous studies have not statistically reported the additional effect of patient variables over the image-only models. This study aimed to statistically evaluate the osteoporosis identification ability of deep learning combining hip radiographs with patient variables. Materials and Methods: We collected a dataset containing 1699 images from patients who underwent skeletal bone mineral density measurements and hip radiography at a general hospital from 2014 to 2021. Osteoporosis was assessed from hip radiographs using convolutional neural network (CNN) models (ResNet18, 34, 50, 101, and 152). We also investigated ensemble models with patient clinical variables added to each CNN. Accuracy, precision, recall, specificity, F1 score, and area under the curve (AUC) were calculated as performance metrics. Furthermore, we statistically compared the accuracy of the image-only model with that of an ensemble model that included images plus patient factors, including the effect size for each performance metric. Results: All metrics were improved in the ResNet34 ensemble model compared with the image-only model. The AUC in the ensemble model was significantly improved compared with the image-only model (difference 0.004; 95% CI 0.002-0.0007; p = 0.0004; effect size 0.871). Conclusions: This study revealed the additional effect of patient variables in the identification of osteoporosis using deep CNNs with hip radiographs. Our results provide evidence that patient variables have an additive, synergistic effect with the images in osteoporosis identification.
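A paired bootstrap is one standard way to attach a confidence interval to the difference between an image-only model and an ensemble evaluated on the same cases; a minimal sketch (not the study's statistical code), here applied to per-case accuracy rather than AUC:

```python
import random

def paired_bootstrap_ci(correct_a, correct_b, n_boot=2000, seed=0):
    """95% bootstrap CI for the accuracy difference (model B minus model A)
    from per-case 0/1 correctness lists on the same test cases."""
    rng = random.Random(seed)
    n = len(correct_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        acc_a = sum(correct_a[i] for i in idx) / n
        acc_b = sum(correct_b[i] for i in idx) / n
        diffs.append(acc_b - acc_a)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot) - 1]
```

Resampling cases (rather than models) preserves the pairing, which is what makes the small 0.004 difference reported above detectable.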

    Understanding deep learning - challenges and prospects

    Artificial Intelligence has been developing rapidly since its advent, and its advancements span a wide range of industries, making its incorporation in dentistry inevitable. Artificial Intelligence techniques are making serious progress in the diagnostic and treatment-planning aspects of dental clinical practice. This will ultimately help eliminate the subjectivity and human error that are often part of radiographic interpretation, and will improve the overall efficiency of the process. The various types of Artificial Intelligence algorithms that exist today make understanding their applications quite complex. The current narrative review was planned to make the comprehension of Artificial Intelligence algorithms relatively straightforward. The focus is on current developments and prospects of Artificial Intelligence in dentistry, especially Deep Learning and Convolutional Neural Networks in diagnostic imaging. The narrative review may facilitate the interpretation of the seemingly perplexing research published widely in dental journals.