
    Automated 5-year Mortality Prediction using Deep Learning and Radiomics Features from Chest Computed Tomography

    We propose new methods for the prediction of 5-year mortality in elderly individuals using chest computed tomography (CT). The methods consist of a classifier that performs this prediction using a set of features extracted from the CT image and segmentation maps of multiple anatomic structures. We explore two approaches: 1) a unified framework based on deep learning, where the features and the classifier are learned automatically in a single optimisation process; and 2) a multi-stage framework based on the design and selection/extraction of hand-crafted radiomics features, followed by the classifier learning process. Experimental results, based on a dataset of 48 annotated chest CTs, show that the deep learning model produces a mean 5-year mortality prediction accuracy of 68.5%, while radiomics produces a mean accuracy between 56% and 66% (depending on the feature selection/extraction method and classifier). The successful development of the proposed models has the potential to make a profound impact on preventive and personalised healthcare.
    Comment: 9 pages
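The multi-stage radiomics route described above (hand-crafted features followed by a separately trained classifier) can be sketched with a plain logistic-regression classifier. The feature matrix, labels, and dimensions below are synthetic placeholders for illustration only, not the paper's data or its actual classifiers:

```python
import numpy as np

# Hypothetical radiomics feature matrix: one row per patient, columns are
# hand-crafted features (e.g. organ volumes, mean densities). 48 rows echo
# the 48 annotated chest CTs; the labels here are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # stand-in 5-year mortality labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression, standing in for the
    'classifier learning process' stage of the multi-stage framework."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = fit_logistic(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")
```

In the paper's unified deep-learning variant, the feature extraction and this classifier would instead be a single network trained end to end.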

    Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking

    Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data by including all shifted versions of a training sample. However, the underlying DCF formulation is restricted to single-resolution feature maps, significantly limiting its potential. In this paper, we go beyond the conventional DCF framework and introduce a novel formulation for training continuous convolution filters. We employ an implicit interpolation model to pose the learning problem in the continuous spatial domain. Our proposed formulation enables efficient integration of multi-resolution deep feature maps, leading to superior results on three object tracking benchmarks: OTB-2015 (+5.1% in mean OP), Temple-Color (+4.6% in mean OP), and VOT2015 (20% relative reduction in failure rate). Additionally, our approach is capable of sub-pixel localization, which is crucial for the task of accurate feature point tracking. We also demonstrate the effectiveness of our learning formulation in extensive feature point tracking experiments. Code and supplementary material are available at http://www.cvl.isy.liu.se/research/objrec/visualtracking/conttrack/index.html.
    Comment: Accepted at ECCV 201
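For context, the conventional single-resolution DCF formulation that the paper generalises can be sketched in a few lines: training a filter over all cyclic shifts of a sample reduces to a per-frequency ridge regression with a closed-form solution in the Fourier domain. This is a minimal single-channel illustration, not the paper's continuous-domain method; the patch size, Gaussian widths, and regulariser are arbitrary choices:

```python
import numpy as np

def gaussian_peak(size, cy, cx, sigma):
    """A 2-D Gaussian bump, used both as a toy target and as the label."""
    yy, xx = np.mgrid[0:size, 0:size]
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))

def train_dcf(x, y, lam=1e-2):
    """Ridge regression over all cyclic shifts of x, solved per frequency."""
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def detect(H, z):
    """Apply the filter to a new patch; the response peak gives translation."""
    return np.real(np.fft.ifft2(H * np.fft.fft2(z)))

size = 32
target = gaussian_peak(size, 16, 16, sigma=1.5)  # synthetic target appearance
label = gaussian_peak(size, 16, 16, sigma=2.0)   # desired filter response
H = train_dcf(target, label)

# Cyclically shift the target by (3, 5); the response peak moves with it.
shifted = np.roll(np.roll(target, 3, axis=0), 5, axis=1)
resp = detect(H, shifted)
peak = np.unravel_index(np.argmax(resp), resp.shape)
print(peak)  # → (19, 21)
```

The paper's contribution replaces the discrete grid above with an implicit interpolation model, so filters of feature maps at different resolutions can be learned jointly in one continuous domain.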

    Real-time, acquisition parameter-free voxel-wise patient-specific Monte Carlo dose reconstruction in whole-body CT scanning using deep neural networks

    Objective: We propose a deep learning-guided approach to generate voxel-based absorbed dose maps from whole-body CT acquisitions. Methods: The voxel-wise dose maps corresponding to each source position/angle were calculated using Monte Carlo (MC) simulations that account for patient- and scanner-specific characteristics (SP_MC). The dose distribution in a uniform cylinder was computed through MC calculations (SP_uniform). The density map and SP_uniform dose maps were fed into a residual deep neural network (DNN) to predict SP_MC through an image regression task. The whole-body dose maps reconstructed by the DNN and by MC were compared on 11 test cases scanned with two tube voltages, using transfer learning with and without tube current modulation (TCM). Voxel-wise and organ-wise dose evaluations were performed using mean error (ME, mGy), mean absolute error (MAE, mGy), relative error (RE, %), and relative absolute error (RAE, %). Results: For the 120 kVp and TCM test set, the voxel-wise ME, MAE, RE, and RAE were -0.0302 ± 0.0244 mGy, 0.0854 ± 0.0279 mGy, -1.13 ± 1.41%, and 7.17 ± 0.44%, respectively. The organ-wise errors for the 120 kVp and TCM scenario, averaged over all segmented organs, were -0.144 ± 0.342 mGy (ME), 0.23 ± 0.28 mGy (MAE), -1.11 ± 2.90% (RE), and 2.34 ± 2.03% (RAE). Conclusion: Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level absorbed dose estimation. Clinical relevance statement: We propose a novel method for voxel-wise dose map calculation using deep neural networks. This work is clinically relevant because accurate dose calculation for patients can be carried out within an acceptable computational time, in contrast to lengthy Monte Carlo calculations. Key points:
    • We proposed a deep neural network approach as an alternative to Monte Carlo dose calculation.
    • Our proposed deep learning model is able to generate voxel-level dose maps from a whole-body CT scan with reasonable accuracy, suitable for organ-level dose estimation.
    • By generating a dose distribution from a single source position, our model can produce accurate and personalised dose maps over a wide range of acquisition parameters.
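The voxel-wise evaluation metrics quoted above (ME, MAE, RE, RAE) can be computed directly from a predicted and a reference dose map. The exact normalisation the authors use is not stated here, so the epsilon guard and the synthetic dose maps below are assumptions made only to keep the sketch runnable:

```python
import numpy as np

def dose_errors(pred, ref, eps=1e-6):
    """Voxel-wise metrics comparing a DNN dose map against an MC reference:
    mean error (ME, mGy), mean absolute error (MAE, mGy),
    relative error (RE, %), and relative absolute error (RAE, %)."""
    diff = pred - ref
    me = diff.mean()
    mae = np.abs(diff).mean()
    re = 100.0 * (diff / (ref + eps)).mean()
    rae = 100.0 * (np.abs(diff) / (ref + eps)).mean()
    return me, mae, re, rae

# Synthetic stand-ins: a "Monte Carlo" reference dose volume in mGy and a
# slightly perturbed "DNN" prediction of it.
rng = np.random.default_rng(1)
ref = rng.uniform(1.0, 10.0, size=(8, 8, 8))
pred = ref + rng.normal(0.0, 0.05, size=ref.shape)

me, mae, re, rae = dose_errors(pred, ref)
print(f"ME={me:.4f} mGy, MAE={mae:.4f} mGy, RE={re:.2f}%, RAE={rae:.2f}%")
```

Organ-wise figures would follow by applying the same function within each organ's segmentation mask and averaging over organs.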

    SaRF: Saliency regularized feature learning improves MRI sequence classification.

    BACKGROUND AND OBJECTIVE Deep learning based medical image analysis technologies have the potential to greatly improve the workflow of neuro-radiologists dealing routinely with multi-sequence MRI. However, an essential step for current deep learning systems employing multi-sequence MRI is to ensure that the sequence type is correctly assigned. This requirement is not easily satisfied in clinical practice and is subject to protocol- and human-related errors. Although deep learning models are promising for image-based sequence classification, robustness and reliability issues limit their application in clinical practice. METHODS In this paper, we propose a novel method that uses saliency information to guide the learning of features for sequence classification. The method uses two self-supervised loss terms, first to enhance the distinctiveness among class-specific saliency maps and second to promote similarity between class-specific saliency maps and learned deep features. RESULTS On a cohort of 2100 patient cases comprising six different MR sequences per case, our method shows an improvement in mean accuracy by 4.4% (from 0.935 to 0.976), mean AUC by 1.2% (from 0.9851 to 0.9968), and mean F1 score by 20.5% (from 0.767 to 0.924). Furthermore, based on feedback from an expert neuroradiologist, we show that the proposed approach improves the interpretability of trained models as well as their calibration, with a reduced expected calibration error (by 30.8%, from 0.065 to 0.045). The code will be made publicly available. CONCLUSIONS In this paper, the proposed method shows an improvement in accuracy, AUC, and F1 score, as well as improved calibration and interpretability of the resulting saliency maps.
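The two self-supervised loss terms can be sketched as cosine-similarity penalties: one pushing class-specific saliency maps apart (distinctiveness), one pulling each class's saliency map towards the corresponding deep features (alignment). The per-class pairing of saliency and feature maps and all array shapes below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def cos(a, b, eps=1e-8):
    """Cosine similarity between two flattened maps."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def saliency_losses(saliency, features):
    """saliency: (C, H, W) class-specific saliency maps,
    features:  (C, H, W) class-aligned deep feature maps.
    Returns (distinctiveness loss, feature-alignment loss)."""
    C = saliency.shape[0]
    # Term 1: penalise similarity between saliency maps of different classes,
    # so each class attends to distinct image regions.
    pair_sims = [cos(saliency[i], saliency[j])
                 for i in range(C) for j in range(i + 1, C)]
    distinct = float(np.mean(pair_sims)) if pair_sims else 0.0
    # Term 2: reward similarity between each class's saliency map and its
    # features (loss is low when they agree).
    align = 1.0 - float(np.mean([cos(saliency[c], features[c])
                                 for c in range(C)]))
    return distinct, align

# Toy inputs: six classes, echoing the six MR sequences per case, with
# features deliberately constructed to be well aligned with the saliency.
rng = np.random.default_rng(2)
sal = np.abs(rng.normal(size=(6, 16, 16)))
feat = sal + 0.1 * np.abs(rng.normal(size=sal.shape))
d, a = saliency_losses(sal, feat)
```

In training, both terms would be added (with weights) to the usual classification loss, which is what regularises the learned features towards the saliency structure.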