37 research outputs found
Reproducibility of CT-based radiomic features against image resampling and perturbations for tumour and healthy kidney in renal cancer patients.
Computed Tomography (CT) is widely used in oncology for morphological evaluation and diagnosis, commonly through visual assessment, often supported by semi-automatic tools. Well-established automatic methods for quantitative imaging offer the opportunity to enrich the radiologist's interpretation with a large number of radiomic features, which must be highly reproducible to be used reliably in clinical practice. This study investigates feature reproducibility against noise, varying resolutions and segmentations (achieved by perturbing the regions of interest) in a CT dataset with heterogeneous voxel size, comprising 98 renal cell carcinomas (RCCs) and 93 contralateral normal kidneys (CKs). In particular, first-order (FO) and second-order texture features based on both 2D and 3D grey-level co-occurrence matrices (GLCMs) were considered. Moreover, the study carries out a comparative analysis of three of the most commonly used interpolation methods, one of which must be selected before any resampling procedure. Results showed that Lanczos interpolation is the most effective at preserving the original information during resampling, and that the median slice resolution coupled with the native slice spacing gives the best reproducibility, with 94.6% and 87.7% of features reproducible in RCC and CK, respectively. GLCMs show their maximum reproducibility when used at short distances.
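The Lanczos interpolation favoured by this study is a windowed-sinc method. As a toy illustration only (not the study's implementation, which operates on 3D CT volumes), a one-dimensional Lanczos resampler can be sketched in numpy:

```python
import numpy as np

def lanczos_kernel(x, a=3):
    """Lanczos windowed-sinc kernel: sinc(x) * sinc(x/a), zero for |x| >= a."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    out[np.abs(x) >= a] = 0.0
    return out

def lanczos_resample_1d(signal, new_length, a=3):
    """Resample a 1-D signal to new_length samples via Lanczos interpolation.

    Edge handling (index clamping) and weight normalisation are one common
    choice among several; image libraries differ in these details.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    # Output sample positions expressed in the input's coordinate system.
    positions = np.linspace(0, n - 1, new_length)
    resampled = np.empty(new_length)
    for i, p in enumerate(positions):
        k = np.arange(int(np.floor(p)) - a + 1, int(np.floor(p)) + a + 1)
        idx = np.clip(k, 0, n - 1)          # clamp indices at the borders
        weights = lanczos_kernel(p - k, a)  # kernel centred on position p
        resampled[i] = np.dot(weights, signal[idx]) / weights.sum()
    return resampled
```

When the output grid coincides with the input grid, the kernel reduces to a delta and the original samples are reproduced, which is the "information preserving" property the abstract alludes to.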
Calibrating Ensembles for Scalable Uncertainty Quantification in Deep Learning-based Medical Segmentation
Uncertainty quantification in automated image analysis is highly desired in many applications. Typically, machine learning models for classification or segmentation are developed to provide only binary answers; however, quantifying a model's uncertainty can play a critical role, for example in active learning or human-machine interaction. Uncertainty quantification is especially difficult with deep learning-based models, which are the state of the art in many imaging applications. Current uncertainty quantification approaches do not scale well to high-dimensional real-world problems. Scalable solutions often rely on classical techniques, such as dropout during inference or training ensembles of identical models with different random seeds, to obtain a posterior distribution. In this paper, we show that these approaches fail to approximate the classification probability. Instead, we propose a scalable and intuitive framework for calibrating ensembles of deep learning models to produce uncertainty quantification measurements that approximate the classification probability. On unseen test data, we demonstrate improved calibration, sensitivity (in two out of three cases) and precision compared with the standard approaches. We further motivate the use of our method in active learning, creating pseudo-labels to learn from unlabelled images, and in human-machine collaboration.
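The abstract does not specify the calibration procedure, so as a generic illustration of ensemble calibration (a standard temperature-scaling baseline, not the paper's proposed framework), one can fit a single temperature on held-out data to sharpen or soften the ensemble's mean predictions:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def nll(probs, labels):
    """Negative log-likelihood of integer class labels."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(ensemble_logits, labels, grid=np.linspace(0.1, 5.0, 50)):
    """Fit one temperature T on held-out data by grid search.

    ensemble_logits: (n_models, n_samples, n_classes) raw model outputs.
    The ensemble is summarised by its mean logits; T is chosen to minimise
    the NLL of the temperature-scaled probabilities.
    """
    mean_logits = ensemble_logits.mean(axis=0)
    losses = [nll(softmax(mean_logits / t), labels) for t in grid]
    return grid[int(np.argmin(losses))]
```

The fitted temperature is then applied to the ensemble's mean logits at test time; T > 1 softens overconfident predictions, T < 1 sharpens underconfident ones.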
Robustness of radiomic features in CT images with different slice thickness, comparing liver tumour and muscle
Abstract: Radiomic image features are becoming a promising non-invasive method to obtain quantitative measurements for tumour classification and therapy response assessment in oncological research. However, despite their increasingly established application, there is a need for standardisation criteria and further validation of feature robustness with respect to image acquisition parameters. In this paper, the robustness of radiomic features extracted from computed tomography (CT) images is evaluated for liver tumour and muscle, comparing the feature values in images reconstructed with two different slice thicknesses of 2.0 mm and 5.0 mm. Novel approaches are presented to address the intrinsic dependencies of texture radiomic features, choosing the optimal number of grey levels and correcting for the dependency on volume. With the optimal values and corrections, feature values are compared across thicknesses to identify reproducible features. Normalisation using muscle regions is also described as an alternative approach. With either method, a large fraction of features (75–90%) was found to be highly robust (< 25% difference). The analyses were performed on a homogeneous CT dataset of 43 patients with hepatocellular carcinoma, and consistent results were obtained for both tumour and muscle tissue. Finally, recommended guidelines are included for radiomic studies using variable slice thickness.
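The "< 25% difference" robustness criterion can be sketched as a simple filter over paired feature measurements. Note that the exact definition of percent difference is not given in the abstract; the symmetric form below (difference relative to the mean of the two values) is one plausible choice:

```python
import numpy as np

def robust_features(values_2mm, values_5mm, feature_names, tol=0.25):
    """Flag features whose mean absolute percent difference between the
    2.0 mm and 5.0 mm reconstructions stays below `tol` (25% here).

    values_2mm, values_5mm: (n_patients, n_features) arrays of paired
    feature values from the two reconstructions of the same scans.
    """
    # Symmetric percent difference, averaged over patients.
    denom = 0.5 * (np.abs(values_2mm) + np.abs(values_5mm)) + 1e-12
    pct_diff = np.abs(values_2mm - values_5mm) / denom
    mean_diff = pct_diff.mean(axis=0)
    return [name for name, d in zip(feature_names, mean_diff) if d < tol]
```

Features passing the filter would be retained for downstream modelling; the rest are discarded or corrected (e.g. via the muscle-normalisation approach the abstract describes).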
Integrating the OHIF Viewer into XNAT: Achievements, Challenges and Prospects for Quantitative Imaging Studies.
Purpose: XNAT is an informatics software platform to support imaging research, particularly in the context of large, multicentre studies of the type that are essential to validate quantitative imaging biomarkers. XNAT provides import, archiving, processing and secure distribution facilities for image and related study data. Until recently, however, modern data visualisation and annotation tools were lacking on the XNAT platform. We describe the background to, and implementation of, an integration of the Open Health Imaging Foundation (OHIF) Viewer into the XNAT environment. We explain the challenges overcome and discuss future prospects for quantitative imaging studies. Materials and methods: The OHIF Viewer adopts an approach based on the DICOMweb protocol. To allow operation in an XNAT environment, a data-routing methodology was developed to overcome the mismatch between the DICOM and XNAT information models, and a custom viewer panel was created to allow navigation within the viewer between different XNAT projects, subjects and imaging sessions. Modifications to the development environment were made to allow developers to test new code more easily against a live XNAT instance. Major new developments focused on the creation and storage of regions of interest (ROIs) and included: ROI creation and editing tools for both contour- and mask-based regions; a "smart CT" paintbrush tool; the integration of NVIDIA's Artificial Intelligence Assisted Annotation (AIAA); the ability to view surface meshes, fractional segmentation maps and image overlays; and a rapid image reader tool aimed at radiologists. We have incorporated the OHIF microscopy extension and, in parallel, introduced support for microscopy session types within XNAT for the first time. Results: Integration of the OHIF Viewer within XNAT has been highly successful, and numerous additional and enhanced tools have been created in a programme started in 2017 that is still ongoing. The software has been downloaded more than 3700 times during the course of the development work reported here, demonstrating its impact. Conclusions: The OHIF open-source, zero-footprint web viewer has been incorporated into the XNAT platform and is now used at many institutions worldwide. Further innovations are envisaged in the near future.
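The information-model mismatch mentioned above stems from DICOM grouping images by patient/study/series while XNAT groups them by project/subject/session/scan. A hypothetical sketch of such a routing function is shown below; the field names and path layout are illustrative only, not the actual XNAT archive schema or the paper's implementation:

```python
def xnat_route(project, dicom_tags):
    """Map a DICOM instance onto an XNAT-style hierarchy path.

    `dicom_tags` is a dict keyed by standard DICOM attribute names.
    The mapping shown (PatientID -> subject, StudyInstanceUID -> session,
    SeriesNumber -> scan) illustrates one way to reconcile the two
    information models; real deployments need site-specific rules.
    """
    subject = dicom_tags["PatientID"]
    session = dicom_tags["StudyInstanceUID"]
    scan = dicom_tags["SeriesNumber"]
    return (f"/archive/projects/{project}/subjects/{subject}"
            f"/experiments/{session}/scans/{scan}")
```

In practice such routing must also handle collisions (the same patient imaged under two projects) and anonymised identifiers, which is part of what makes the integration non-trivial.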
Expression of MALT1 oncogene in hematopoietic stem/progenitor cells recapitulates the pathogenesis of human lymphoma in mice
Chromosomal translocations involving the MALT1 gene are hallmarks of mucosa-associated lymphoid tissue (MALT) lymphoma. To date, targeting these translocations to mouse B cells has failed to reproduce human disease. Here, we induced MALT1 expression in mouse Sca1(+)Lin(-) hematopoietic stem/progenitor cells, which showed NF-κB activation and early lymphoid priming, being selectively skewed toward B-cell differentiation. These cells accumulated in extranodal tissues and gave rise to clonal tumors recapitulating the principal clinical, biological, and molecular genetic features of MALT lymphoma. Deletion of the p53 gene accelerated tumor onset and induced transformation of MALT lymphoma to activated B-cell diffuse large-cell lymphoma (ABC-DLBCL). Treatment of MALT1-induced lymphomas with a specific inhibitor of MALT1 proteolytic activity decreased cell viability, indicating that endogenous Malt1 signaling was required for tumor cell survival. Our study shows that human-like lymphomas can be modeled in mice by targeting MALT1 expression to hematopoietic stem/progenitor cells, demonstrating the oncogenic role of MALT1 in lymphomagenesis. Furthermore, this work establishes a molecular link between MALT lymphoma and ABC-DLBCL, and provides mouse models to test MALT1 inhibitors. Finally, our results suggest that hematopoietic stem/progenitor cells may be involved in the pathogenesis of human mature B-cell lymphomas.
Photoacoustic imaging radiomics in patient-derived xenografts: a study on feature sensitivity and model discrimination.
Funder: Mark Foundation For Cancer Research; doi: http://dx.doi.org/10.13039/100014599. Funder: Cambridge Commonwealth, European and International Trust.
Photoacoustic imaging is an increasingly popular method of exploring the tumour microenvironment, which can provide insight into tumour oxygenation status and potentially treatment response assessment. Currently, the measurements most commonly performed on such images are the mean and median of the pixel values of the tumour volumes of interest. We investigated expanding the set of measurements that can be extracted from these images by adding radiomic features. In particular, we found that Skewness was sensitive to differences between basal and luminal patient-derived xenograft cancer models with an [Formula: see text] of 0.86, and that it was robust to variations in confounding factors such as reconstruction type and wavelength. We also built discriminant models with radiomic features that were correlated with the underlying tumour model and were independent from each other. We then ranked features by their importance in the model. Skewness was again found to be an important feature, as were 10th Percentile, Root Mean Squared, and several other texture-based features. In summary, this paper proposes a methodology to select radiomic features extracted from photoacoustic images that are robust to changes in acquisition and reconstruction parameters, and discusses features found to have discriminating power between the underlying tumour models in a pre-clinical dataset.
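Skewness, the first-order feature this study highlights, measures the asymmetry of the pixel-intensity histogram within the volume of interest. A minimal numpy sketch of the standard (Fisher) definition, assuming the study uses the conventional formula:

```python
import numpy as np

def skewness(pixel_values):
    """Fisher skewness of the pixel-intensity distribution inside a
    volume of interest: E[(x - mu)^3] / sigma^3. Positive values mean a
    right-tailed histogram, negative a left-tailed one, zero symmetry."""
    x = np.asarray(pixel_values, dtype=float)
    mu = x.mean()
    sigma = x.std()
    return np.mean((x - mu) ** 3) / sigma ** 3
```

Being a ratio of central moments, skewness is invariant to linear rescaling of the intensities, which is consistent with the reported robustness to reconstruction and wavelength changes that mostly rescale pixel values.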
A pipeline to further enhance quality, integrity and reusability of the NCCID clinical data
The National COVID-19 Chest Imaging Database (NCCID) is a centralized UK database of thoracic imaging and corresponding clinical data. It is made available by the National Health Service Artificial Intelligence (NHS AI) Lab to support the development of machine learning tools focused on Coronavirus Disease 2019 (COVID-19). A bespoke cleaning pipeline for NCCID, developed by the NHSx, was introduced in 2021. We present an extension to the original cleaning pipeline for the clinical data of the database. It has been adjusted to correct additional systematic inconsistencies in the raw data, such as patient sex, oxygen levels and date values. The most important changes are discussed in this paper, whilst the code and further explanations are made publicly available on GitHub. The suggested cleaning will allow global users to work with more consistent data when developing machine learning tools, without requiring expert knowledge of the raw data. In addition, the paper highlights some of the challenges of working with clinical multi-centre data and includes recommendations.
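The kinds of systematic inconsistencies mentioned (sex codes, oxygen levels, date values) are typical of multi-centre clinical data. The sketch below illustrates the general idea with hypothetical field names and rules; it is not the NCCID pipeline's actual code, which is published separately on GitHub:

```python
from datetime import datetime

def clean_record(record):
    """Illustrative cleaning of one clinical record (dict). Field names
    ('sex', 'oxygen_saturation', 'swab_date') and the rules applied are
    assumptions for demonstration, not the NCCID pipeline's own schema."""
    cleaned = dict(record)

    # Harmonise patient sex codes to 'M'/'F', with 'U' for unknown.
    sex_map = {"m": "M", "male": "M", "f": "F", "female": "F"}
    cleaned["sex"] = sex_map.get(str(record.get("sex", "")).strip().lower(), "U")

    # Oxygen saturation sometimes recorded as a fraction (0-1) rather
    # than a percentage; rescale to percent.
    spo2 = record.get("oxygen_saturation")
    if spo2 is not None and 0 < spo2 <= 1:
        cleaned["oxygen_saturation"] = spo2 * 100

    # Normalise dates to ISO 8601, assuming day-first UK convention.
    raw_date = record.get("swab_date")
    if raw_date:
        cleaned["swab_date"] = datetime.strptime(raw_date, "%d/%m/%Y").date().isoformat()
    return cleaned
```

Real pipelines layer many such rules and must also log what was changed, so downstream users can audit the cleaning rather than trust it blindly.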