    Grading Loss: A Fracture Grade-based Metric Loss for Vertebral Fracture Detection

    Osteoporotic vertebral fractures have a severe impact on patients' overall well-being but remain severely under-diagnosed. These fractures present at various levels of severity, measured using Genant's grading scale. Insufficient annotated datasets, severe data imbalance, and the minor differences in appearance between fractured and healthy vertebrae cause naive classification approaches to yield poor discriminatory performance. Addressing this, we propose a representation learning-inspired approach for automated vertebral fracture detection, aimed at learning latent representations efficient for fracture detection. Building on state-of-the-art metric losses, we present a novel Grading Loss for learning representations that respect Genant's fracture grading scheme. On a publicly available spine dataset, the proposed loss function achieves a fracture detection F1 score of 81.5%, a 10% increase over a naive classification baseline. (Comment: To be presented at MICCAI 2020.)
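
    As a hedged illustration of a metric loss that respects a fracture grading scheme, the sketch below implements a triplet-style objective in PyTorch whose margin grows with the Genant grade difference between anchor and negative, so that, for example, a healthy-versus-severe pair is pushed three grades' worth of margin apart. The function name, margin schedule, and grade encoding are assumptions for illustration, not the paper's exact Grading Loss.

    # Illustrative sketch only, not the paper's exact Grading Loss.
    import torch.nn.functional as F

    def grade_margin_triplet_loss(anchor, positive, negative,
                                  grade_anchor, grade_negative,
                                  base_margin=0.2):
        # Genant grades 0..3 (healthy, mild, moderate, severe): the margin
        # scales with how many grades separate anchor and negative.
        margin = base_margin * (grade_anchor - grade_negative).abs().float()
        d_pos = F.pairwise_distance(anchor, positive)   # same-grade pair
        d_neg = F.pairwise_distance(anchor, negative)   # different-grade pair
        return F.relu(d_pos - d_neg + margin).mean()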

    Deep learning-based parameter mapping for joint relaxation and diffusion tensor MR Fingerprinting

    Magnetic Resonance Fingerprinting (MRF) enables the simultaneous quantification of multiple properties of biological tissues. It relies on a pseudo-random acquisition and the matching of acquired signal evolutions to a precomputed dictionary. However, the dictionary is not scalable to higher-parametric spaces, limiting MRF to the simultaneous mapping of only a small number of parameters (proton density, T1 and T2 in general). Inspired by diffusion-weighted SSFP imaging, we present a proof-of-concept of a novel MRF sequence with embedded diffusion-encoding gradients along all three axes to efficiently encode orientational diffusion and T1 and T2 relaxation. We take advantage of a convolutional neural network (CNN) to reconstruct multiple quantitative maps from this single, highly undersampled acquisition. We bypass expensive dictionary matching by learning the implicit physical relationships between the spatiotemporal MRF data and the T1, T2 and diffusion tensor parameters. The predicted parameter maps and the derived scalar diffusion metrics agree well with state-of-the-art reference protocols. Orientational diffusion information is captured, as seen from the estimated primary diffusion directions. In addition, the joint acquisition and reconstruction framework proves capable of preserving tissue abnormalities in multiple sclerosis lesions.
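
    As a hedged illustration of the dictionary-free reconstruction described above, the sketch below defines a small 1D CNN in PyTorch that regresses quantitative parameters directly from a voxel's MRF signal evolution. The channel sizes and the output set (T1, T2, and six diffusion-tensor components) are assumptions for illustration, not the authors' architecture.

    import torch.nn as nn

    class MRFRegressor(nn.Module):
        """Maps an MRF signal evolution to parameters, replacing dictionary matching."""
        def __init__(self, n_params=8):   # T1, T2 + 6 tensor components (assumed)
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(1, 32, kernel_size=9, stride=2), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, n_params),
            )

        def forward(self, signal):        # signal: (batch, 1, n_timepoints)
            return self.net(signal)       # (batch, n_params), no dictionary needed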

    MedShapeNet -- A Large-Scale Dataset of 3D Medical Shapes for Computer Vision

    Prior to the deep learning era, shape was commonly used to describe objects. Nowadays, state-of-the-art (SOTA) algorithms in medical imaging are predominantly diverging from computer vision, where voxel grids, meshes, point clouds, and implicit surface models are used. This is seen from numerous shape-related publications in premier vision conferences as well as the growing popularity of ShapeNet (about 51,300 models) and Princeton ModelNet (127,915 models). For the medical domain, we present a large collection of anatomical shapes (e.g., bones, organs, vessels) and 3D models of surgical instruments, called MedShapeNet, created to facilitate the translation of data-driven vision algorithms to medical applications and to adapt SOTA vision algorithms to medical problems. As a unique feature, we directly model the majority of shapes on the imaging data of real patients. As of today, MedShapeNet includes 23 datasets with more than 100,000 shapes that are paired with annotations (ground truth). Our data is freely accessible via a web interface and a Python application programming interface (API) and can be used for discriminative, reconstructive, and variational benchmarks as well as various applications in virtual, augmented, or mixed reality, and 3D printing. As examples, we present use cases in the fields of classification of brain tumors, facial and skull reconstructions, multi-class anatomy completion, education, and 3D printing. In the future, we will extend the data and improve the interfaces. The project pages are: https://medshapenet.ikim.nrw/ and https://github.com/Jianningli/medshapenet-feedback (Comment: 16 pages)
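
    The abstract mentions access via a web interface and a Python API; the exact MedShapeNet API is not reproduced here. As a minimal, hedged sketch, the snippet below instead loads a shape that has already been downloaded from the web interface (the STL filename is hypothetical) using the generic trimesh library, with a basic check relevant to the 3D-printing use case.

    import trimesh

    mesh = trimesh.load("liver_0001.stl")        # hypothetical downloaded shape
    print(mesh.vertices.shape, mesh.faces.shape) # raw mesh geometry
    print("watertight:", mesh.is_watertight)     # a basic check before 3D printing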

    A convolutional neural network approach for abnormality detection in Wireless Capsule Endoscopy

    In wireless capsule endoscopy (WCE), a swallowable miniature optical endoscope is used to transmit color images of the gastrointestinal tract. However, the number of images transmitted is large, taking a significant amount of the medical expert's time to review the scan. In this paper, we propose a technique to automate abnormality detection in WCE images. We split the image into several patches and extract features pertaining to each patch using a convolutional neural network (CNN) to increase their generality while overcoming the drawbacks of manually crafted features. We intend to exploit the importance of color information for the task. Experiments are performed to determine the optimal color space components for feature extraction and classifier design. We obtained an area under the receiver-operating-characteristic (ROC) curve of approximately 0.8 on a dataset containing multiple abnormalities.
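
    As a hedged sketch of the patch-based pipeline described above: convert a frame to an alternative color space, split it into non-overlapping patches, and hand each patch to a CNN feature extractor. The patch size, the choice of HSV, and the array shapes are illustrative assumptions rather than the paper's exact settings.

    import numpy as np
    from skimage.color import rgb2hsv

    def split_into_patches(image, patch=32):
        # Crop to a multiple of the patch size, then tile into (N, patch, patch, C).
        h, w, c = image.shape
        return (image[:h - h % patch, :w - w % patch]
                .reshape(h // patch, patch, w // patch, patch, c)
                .swapaxes(1, 2)
                .reshape(-1, patch, patch, c))

    frame = np.random.rand(256, 256, 3)           # stand-in for one WCE RGB frame
    patches = split_into_patches(rgb2hsv(frame))  # (64, 32, 32, 3): per-patch CNN input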

    Learning residual motion correction for fast and robust 3D multiparametric MRI

    Voluntary and involuntary patient motion is a major problem for data quality in the clinical routine of Magnetic Resonance Imaging (MRI). It has been thoroughly investigated, yet it remains unresolved. In quantitative MRI, motion artifacts impair the entire temporal evolution of the magnetization and cause errors in parameter estimation. Here, we present a novel strategy based on residual learning for retrospective motion correction in fast 3D whole-brain multiparametric MRI. We propose a 3D multiscale convolutional neural network (CNN) that learns the non-linear relationship between the motion-affected quantitative parameter maps and the residual error to their motion-free reference. For supervised model training, despite limited data availability, we propose a physics-informed simulation to generate self-contained paired datasets from a priori motion-free data. We evaluate the motion-correction performance of the proposed method for the example of 3D Quantitative Transient-state Imaging at 1.5T and 3T. We show the robustness of the motion correction for various motion regimes and demonstrate the generalization capabilities of the residual CNN on in vivo data with real motion, from healthy volunteers and clinical patient cases, including pediatric and adult patients with large brain lesions. Our study demonstrates that the proposed motion correction outperforms the current state of the art, reliably providing high, clinically relevant image quality for mild to pronounced patient movements. This has important implications in clinical setups where large amounts of motion-affected data must be discarded because they are rendered diagnostically unusable.
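
    The residual-learning idea above can be summarized in a few lines: a 3D CNN predicts only the residual error between the motion-affected parameter maps and their motion-free reference, and the correction adds that residual back onto the input. A minimal single-scale sketch in PyTorch follows; the real model is multiscale, and the layer and channel choices here are simplifying assumptions.

    import torch.nn as nn

    class ResidualCorrector(nn.Module):
        def __init__(self, n_maps=2):   # e.g. T1 and T2 maps as input channels
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv3d(n_maps, 32, 3, padding=1), nn.ReLU(),
                nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv3d(32, n_maps, 3, padding=1),
            )

        def forward(self, corrupted):   # (batch, n_maps, D, H, W)
            # Only the residual to the motion-free reference is learned.
            return corrupted + self.body(corrupted)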

    VerSe: A Vertebrae Labelling and Segmentation Benchmark for Multi-detector CT Images

    Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and due to a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms tackling the labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at the voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at the vertebra level, scan level, and different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe: the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The VerSe content and code can be accessed at: https://github.com/anjany/verse
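
    As a hedged sketch of the kind of vertebra-level evaluation the benchmark reports, the snippet below computes a Dice score per labelled vertebra from integer label volumes. The label convention (0 = background, k > 0 = vertebra k) is an assumption; this is not the challenge's official evaluation code.

    import numpy as np

    def dice_per_vertebra(pred, gt):
        """pred, gt: integer label volumes; 0 = background, k > 0 = vertebra k."""
        scores = {}
        for label in np.unique(gt):
            if label == 0:
                continue  # skip background
            p, g = pred == label, gt == label
            scores[int(label)] = 2.0 * np.logical_and(p, g).sum() / (p.sum() + g.sum())
        return scores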