    Histogram- and Diffusion-Based Medical Out-of-Distribution Detection

    Out-of-distribution (OOD) detection is crucial for the safety and reliability of artificial intelligence algorithms, especially in the medical domain. In the context of the Medical OOD (MOOD) detection challenge 2023, we propose a pipeline that combines a histogram-based method and a diffusion-based method. The histogram-based method is designed to accurately detect homogeneous anomalies in the toy examples of the challenge, such as blobs with constant intensity values. The diffusion-based method builds on one of the latest methods for unsupervised anomaly detection, DDPM-OOD. We explore this method and propose extensive post-processing steps for pixel-level and sample-level anomaly detection on the brain MRI and abdominal CT data provided by the challenge. Our results show that the proposed DDPM method is sensitive to blur and bias-field samples, but struggles with anatomical deformations, black slices, and swapped patches. These findings suggest that further research is needed to improve the performance of DDPM for OOD detection in medical images. (Comment: 9 pages, 5 figures, submission to the Medical Out-of-Distribution (MOOD) challenge at MICCAI 2023)
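
    The histogram-based component is described above only at a high level. As a hedged illustration (not the challenge submission's actual code), a homogeneous anomaly such as a constant-intensity blob concentrates the intensity histogram in very few bins, which can be scored as sketched below; the bin count, the k-sigma threshold, and the reference statistics are assumptions.

```python
# Illustrative sketch of a histogram-based check for homogeneous anomalies.
# NOT the challenge pipeline; thresholds and bin count are assumptions.
import numpy as np

def histogram_anomaly_score(volume: np.ndarray, n_bins: int = 256) -> float:
    """Score how strongly the intensity histogram is dominated by a single bin.

    A homogeneous insertion (constant intensity) concentrates mass in one bin,
    pushing the score towards 1; natural tissue spreads it out.
    """
    hist, _ = np.histogram(volume.ravel(), bins=n_bins)
    hist = hist / hist.sum()
    return float(hist.max())

def is_ood(volume: np.ndarray, reference_scores: np.ndarray, k: float = 3.0) -> bool:
    """Flag a sample whose score deviates from in-distribution statistics."""
    score = histogram_anomaly_score(volume)
    mu, sigma = reference_scores.mean(), reference_scores.std()
    return bool(score > mu + k * sigma)  # k-sigma rule; threshold is an assumption
```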

    Intensity-Based Registration of Freehand 3D Ultrasound and CT-scan Images of the Kidney

    This paper presents a method to register a pre-operative Computed Tomography (CT) volume to a sparse set of intra-operative Ultrasound (US) slices. In the context of percutaneous renal puncture, the aim is to transfer planning information to an intra-operative coordinate system. The spatial position of the US slices is measured by optically localizing a calibrated probe. Assuming that kidney motion during breathing is reproducible and that the organ does not deform, the method consists of optimizing a rigid 6 degree-of-freedom (DOF) transform by evaluating at each step the similarity between the set of US images and the CT volume. Because the correlation between CT and US images is naturally rather poor, the images are preprocessed to increase their similarity. Among the similarity measures previously studied in the context of medical image registration, the Correlation Ratio (CR) turned out to be one of the most accurate and appropriate, particularly with the chosen non-derivative minimization scheme, namely Powell-Brent's method. The resulting matching transforms are compared to a standard rigid surface registration involving segmentation, in terms of both accuracy and repeatability. The obtained results are presented and discussed.
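
    As a rough sketch of the similarity measure and optimization scheme named above (not the authors' implementation), the Correlation Ratio CR(Y|X) = 1 - E[Var(Y|X)]/Var(Y) can be estimated by binning one image and measuring how well it predicts the other, then maximized over the 6-DOF rigid parameters with a derivative-free Powell search. SciPy's Powell method stands in for the Powell-Brent scheme, and the `resample_ct` helper is hypothetical.

```python
# Minimal sketch: Correlation Ratio similarity + Powell-based rigid search.
# Bin count, initial guess and the resampling helper are assumptions.
import numpy as np
from scipy.optimize import minimize

def correlation_ratio(x: np.ndarray, y: np.ndarray, n_bins: int = 64) -> float:
    """CR(Y|X) = 1 - E[Var(Y|X)] / Var(Y), estimated by binning X."""
    x, y = x.ravel(), y.ravel()
    bin_idx = np.digitize(x, np.linspace(x.min(), x.max(), n_bins))
    cond_var = 0.0
    for b in np.unique(bin_idx):
        yb = y[bin_idx == b]
        cond_var += yb.size * yb.var()       # within-bin variance, weighted
    cond_var /= y.size
    return 1.0 - cond_var / (y.var() + 1e-12)

def register_rigid(us_values: np.ndarray, resample_ct):
    """Optimise 6 rigid parameters (3 rotations, 3 translations) with Powell.

    `resample_ct(params)` is a hypothetical helper that samples the CT volume
    at the US pixel positions for the given transform; it is not defined here.
    """
    cost = lambda params: -correlation_ratio(resample_ct(params), us_values)
    return minimize(cost, x0=np.zeros(6), method="Powell")
```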

    Real-Time Decoding of Brain Responses to Visuospatial Attention Using 7T fMRI

    Brain-computer interface (BCI) technologies aim to create new communication channels between our mind and our environment, independent of the motor system, by detecting and classifying self-regulation of local brain activity. BCIs can provide patients with severe paralysis a means to communicate and to live more independent lives. There has been growing interest in using invasive recordings for BCI to improve signal quality, which also potentially gives access to control strategies previously inaccessible by non-invasive methods. However, before surgery, the best implantation site needs to be determined. Blood-oxygen-level-dependent (BOLD) signal changes measured with fMRI have been shown to agree well spatially with those found with invasive electrodes, and are the best option for pre-surgical localization. We show, using real-time fMRI at 7T, that eye-movement-independent visuospatial attention can be used as a reliable control strategy for BCIs. At this field strength, even subtle signal changes can be detected in single trials thanks to the high contrast-to-noise ratio. A group of healthy subjects was instructed to shift their attention between three spatial target regions (two peripheral and one central) while keeping their gaze fixated at the center. The activated regions were first localized, and the subjects were then given real-time feedback based on the activity in these regions. All subjects managed to regulate local brain areas without training, which suggests that visuospatial attention is a promising new target for intracranial BCI. ECoG data recorded from one epilepsy patient showed that local changes in gamma power can be used to separate the three classes.
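
    A minimal, speculative sketch of the ROI-based feedback idea (not the study's actual pipeline): once the localizer run has defined one region per attention target, the attended location in a trial can be read out as the region with the strongest baseline-corrected BOLD response. The labels, masks, and baselines below are assumptions for illustration.

```python
# Hedged sketch of ROI-based decoding: pick the attention target whose region
# shows the strongest baseline-corrected response in the current trial.
import numpy as np

def decode_attention(volume: np.ndarray,
                     roi_masks: dict[str, np.ndarray],
                     baselines: dict[str, float]) -> str:
    """Return the label (e.g. 'left', 'right', 'central') of the most active ROI.

    roi_masks: boolean masks, one per target region (hypothetical names).
    baselines: per-ROI mean signal from rest periods (an assumption).
    """
    scores = {label: volume[mask].mean() - baselines[label]
              for label, mask in roi_masks.items()}
    return max(scores, key=scores.get)
```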

    Automated measurement of brain and white matter lesion volume in type 2 diabetes mellitus

    Aims/hypothesis: Type 2 diabetes mellitus has been associated with brain atrophy and cognitive decline, but the association with ischaemic white matter lesions is unclear. Previous neuroimaging studies have mainly used semiquantitative rating scales to measure atrophy and white matter lesions (WMLs). In this study we used an automated segmentation technique to investigate the association of type 2 diabetes, several diabetes-related risk factors and cognition with cerebral tissue and WML volumes. Subjects and methods: Magnetic resonance images of 99 patients with type 2 diabetes and 46 control participants from a population-based sample were segmented using a k-nearest neighbour classifier trained on ten manually segmented data sets. White matter, grey matter, lateral ventricle, cerebrospinal fluid (excluding the lateral ventricles) and WML volumes were assessed. Analyses were adjusted for age, sex, level of education and intracranial volume. Results: Type 2 diabetes was associated with a smaller grey matter volume (-21.8 ml; 95% CI -34.2, -9.4), a larger lateral ventricle volume (7.1 ml; 95% CI 2.3, 12.0) and a larger white matter lesion volume (56.5%; 95% CI 4.0, 135.8), whereas white matter volume was not affected. In separate analyses for men and women, the effects of diabetes were only significant in women. Conclusions/interpretation: The combination of atrophy and larger WML volume indicates that type 2 diabetes is associated with mixed pathology in the brain. The observed sex differences were unexpected and need to be addressed in further studies. © 2007 Springer-Verlag
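
    A minimal sketch of the k-nearest-neighbour segmentation idea referred to above, assuming per-voxel feature vectors (e.g. intensities from co-registered sequences plus spatial coordinates) extracted from the manually segmented training scans; the study's actual feature set, value of k, and training protocol may differ.

```python
# Sketch of kNN voxel classification for tissue/WML segmentation.
# Feature construction and k are assumptions, not the study's exact setup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def train_knn_segmenter(features: np.ndarray, labels: np.ndarray, k: int = 15):
    """features: (n_voxels, n_features) from manually segmented scans;
    labels: tissue class per voxel (e.g. GM, WM, CSF, ventricles, WML)."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(features, labels)
    return clf

def segment(clf, features_new: np.ndarray) -> np.ndarray:
    """Predict a tissue label for every voxel of a new scan."""
    return clf.predict(features_new)
```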

    clDice -- a Novel Topology-Preserving Loss Function for Tubular Structure Segmentation

    Accurate segmentation of tubular, network-like structures, such as vessels, neurons, or roads, is relevant to many fields of research. For such structures, the topology is their most important characteristic, in particular preserving connectedness: in the case of vascular networks, missing a connected vessel entirely alters the blood-flow dynamics. We introduce a novel similarity measure termed centerlineDice (clDice for short), which is calculated on the intersection of the segmentation masks and their (morphological) skeleta. We theoretically prove that clDice guarantees topology preservation up to homotopy equivalence for binary 2D and 3D segmentation. Extending this, we propose a computationally efficient, differentiable loss function (soft-clDice) for training arbitrary neural segmentation networks. We benchmark the soft-clDice loss on five public datasets, including vessels, roads and neurons (2D and 3D). Training on soft-clDice leads to segmentations with more accurate connectivity information, higher graph similarity, and better volumetric scores. (Comment: The authors Suprosanna Shit and Johannes C. Paetzold contributed equally to the work.)
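
    For concreteness, a sketch of the hard clDice metric for binary 2D masks: the harmonic mean of topology precision (the fraction of the predicted skeleton lying inside the ground truth) and topology sensitivity (the fraction of the ground-truth skeleton lying inside the prediction). The differentiable soft-clDice training loss replaces the skeletonisation with soft morphological operations and is not reproduced here.

```python
# Sketch of the hard clDice metric on binary 2D masks.
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """pred, gt: binary masks of the same shape."""
    skel_pred = skeletonize(pred.astype(bool))
    skel_gt = skeletonize(gt.astype(bool))
    # Topology precision: predicted skeleton covered by the ground-truth mask.
    tprec = (skel_pred & gt.astype(bool)).sum() / (skel_pred.sum() + eps)
    # Topology sensitivity: ground-truth skeleton covered by the predicted mask.
    tsens = (skel_gt & pred.astype(bool)).sum() / (skel_gt.sum() + eps)
    return 2.0 * tprec * tsens / (tprec + tsens + eps)
```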

    Groupwise Multimodal Image Registration using Joint Total Variation

    In medical imaging it is common practice to acquire a wide range of modalities (MRI, CT, PET, etc.) to highlight different structures or pathologies. As patient movement between scans or scanning sessions is unavoidable, registration is often an essential step before any subsequent image analysis. In this paper, we introduce a cost function based on joint total variation for such multimodal image registration. This cost function has the advantage of enabling principled, groupwise alignment of multiple images, whilst being insensitive to strong intensity non-uniformities. We evaluate our algorithm on rigidly aligning both simulated and real 3D brain scans. This validation shows robustness to strong intensity non-uniformities and low registration errors for CT/PET to MRI alignment. Our implementation is publicly available at https://github.com/brudfors/coregistration-njtv.
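
    As an illustrative sketch (not the released implementation linked above), a joint-total-variation cost pools the gradient magnitudes of all images under one square root at each voxel, so edges that coincide across modalities cost less than edges that do not. A registration loop would resample each image under its current rigid transform before evaluating this cost; that step is left out below as an assumption.

```python
# Sketch of a joint-total-variation cost over a group of aligned images.
import numpy as np

def joint_total_variation(images: list[np.ndarray], eps: float = 1e-6) -> float:
    """images: co-registered (already resampled) volumes on a common grid."""
    sq_grad_sum = np.zeros_like(images[0], dtype=float)
    for img in images:
        # Accumulate squared gradient components of every image at each voxel.
        for g in np.gradient(img.astype(float)):
            sq_grad_sum += g ** 2
    # One square root per voxel couples the modalities (joint TV).
    return float(np.sqrt(sq_grad_sum + eps).sum())
```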

    Aortic dissection type I in a weightlifter with hypertension: A case report

    Acute aortic dissection can occur at the time of intense physical exertion in strength-trained athletes such as weightlifters, bodybuilders, throwers, and wrestlers.

    Deep learning-based recognition of key anatomical structures during robot-assisted minimally invasive esophagectomy

    Objective: To develop a deep learning algorithm for anatomy recognition in thoracoscopic video frames from robot-assisted minimally invasive esophagectomy (RAMIE) procedures. Background: RAMIE is a complex operation with substantial perioperative morbidity and a considerable learning curve. Automatic anatomy recognition may improve surgical orientation and recognition of anatomical structures, and might contribute to reducing morbidity or learning curves. Studies on anatomy recognition in complex surgical procedures are currently lacking. Methods: Eighty-three videos of consecutive RAMIE procedures between 2018 and 2022 were retrospectively collected at University Medical Center Utrecht. A surgical PhD candidate and an expert surgeon annotated the azygos vein and vena cava, the aorta, and the right lung on 1050 thoracoscopic frames. Of these, 850 frames were used to train a convolutional neural network (CNN) to segment the anatomical structures, and the remaining 200 frames were used for testing. The Dice coefficient and 95th-percentile Hausdorff distance (95HD) were calculated to assess algorithm accuracy. Results: The median Dice coefficient of the algorithm was 0.79 (IQR = 0.20) for segmentation of the azygos vein and/or vena cava. Median Dice coefficients of 0.74 (IQR = 0.86) and 0.89 (IQR = 0.30) were obtained for segmentation of the aorta and lung, respectively. Inference time was 0.026 s per frame (39 Hz). The predictions of the deep learning algorithm were also compared with the expert surgeon's annotations, showing an accuracy, measured as median Dice, of 0.70 (IQR = 0.19), 0.88 (IQR = 0.07), and 0.90 (IQR = 0.10) for the vena cava and/or azygos vein, aorta, and lung, respectively. Conclusion: This study shows that deep learning-based semantic segmentation has potential for anatomy recognition in RAMIE video frames. The inference time of the algorithm allows real-time anatomy recognition. Clinical applicability should be assessed in prospective clinical studies.
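
    The two evaluation metrics can be sketched as follows for binary masks (a simplified illustration, not the study's evaluation code): the Dice coefficient, and a 95th-percentile Hausdorff distance estimated from symmetric surface-point distances with a KD-tree. Voxel/pixel spacing and surface extraction are handled naively here.

```python
# Sketch of Dice and 95th-percentile Hausdorff distance for binary masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * (pred & gt).sum() / (pred.sum() + gt.sum() + eps)

def hd95(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Surface points = mask minus its erosion.
    surf_p = np.argwhere(pred & ~binary_erosion(pred))
    surf_g = np.argwhere(gt & ~binary_erosion(gt))
    d_pg, _ = cKDTree(surf_g).query(surf_p)  # pred surface -> gt surface
    d_gp, _ = cKDTree(surf_p).query(surf_g)  # gt surface -> pred surface
    return float(np.percentile(np.concatenate([d_pg, d_gp]), 95))
```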

    Deep Learning-Based Grading of Ductal Carcinoma In Situ in Breast Histopathology Images

    Ductal carcinoma in situ (DCIS) is a non-invasive breast cancer that can progress into invasive ductal carcinoma (IDC). Studies suggest DCIS is often overtreated, since a considerable part of DCIS lesions may never progress into IDC. Lower-grade lesions have a lower progression speed and risk, possibly allowing treatment de-escalation. However, studies show significant inter-observer variation in DCIS grading. Automated image analysis may provide an objective solution to the high subjectivity of DCIS grading by pathologists. In this study, we developed a deep learning-based DCIS grading system. It was developed using the consensus DCIS grade of three expert observers on a dataset of 1186 DCIS lesions from 59 patients. The inter-observer agreement, measured by quadratic weighted Cohen's kappa, was used to evaluate the system and compare its performance to that of the expert observers. We present an analysis of the lesion-level and patient-level inter-observer agreement on an independent test set of 1001 lesions from 50 patients. At the lesion level, the deep learning system (dl) achieved, on average, slightly higher agreement with the observers (o1, o2 and o3) ($\kappa_{o1,dl}=0.81$, $\kappa_{o2,dl}=0.53$, $\kappa_{o3,dl}=0.40$) than the observers achieved amongst each other ($\kappa_{o1,o2}=0.58$, $\kappa_{o1,o3}=0.50$, $\kappa_{o2,o3}=0.42$). At the patient level, the deep learning system achieved agreement with the observers ($\kappa_{o1,dl}=0.77$, $\kappa_{o2,dl}=0.75$, $\kappa_{o3,dl}=0.70$) similar to that amongst the observers themselves ($\kappa_{o1,o2}=0.77$, $\kappa_{o1,o3}=0.75$, $\kappa_{o2,o3}=0.72$). In conclusion, we developed a deep learning-based DCIS grading system that achieved a performance similar to expert observers. We believe this is the first automated system that could assist pathologists by providing robust and reproducible second opinions on DCIS grade.
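
    The agreement metric used throughout is the quadratic weighted Cohen's kappa, which for two sets of ordinal grades can be computed directly with scikit-learn as sketched below; the grade encoding in the usage example is an assumption.

```python
# Sketch of the evaluation metric: quadratic weighted Cohen's kappa between
# two graders (or between the model and a grader).
from sklearn.metrics import cohen_kappa_score

def quadratic_kappa(grades_a, grades_b) -> float:
    """grades_a, grades_b: per-lesion (or per-patient) ordinal DCIS grades."""
    return cohen_kappa_score(grades_a, grades_b, weights="quadratic")

# Example (hypothetical encoding 1 = low, 2 = intermediate, 3 = high):
# quadratic_kappa([1, 2, 3, 2], [1, 3, 3, 2])
```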