Phenotyping the histopathological subtypes of non-small-cell lung carcinoma: how beneficial is radiomics?
The aim of this study was to investigate the usefulness of radiomics in the absence of well-defined standard guidelines. Specifically, we extracted radiomics features from multicenter computed tomography (CT) images to differentiate between the four histopathological subtypes of non-small-cell lung carcinoma (NSCLC), and compared the results obtained with different radiomics models. We investigated the presence of batch effects and the impact of feature harmonization on model performance, as well as how the composition of the training dataset influenced the selected feature subsets and, consequently, model performance. By combining data from two publicly available datasets, the study included a total of 152 squamous cell carcinoma (SCC), 106 large cell carcinoma (LCC), 150 adenocarcinoma (ADC), and 58 not otherwise specified (NOS) cases. Using the matRadiomics tool, an example of Image Biomarker Standardization Initiative (IBSI)-compliant software, 1781 radiomics features were extracted from each malignant lesion identified in the CT images. After batch analysis and feature harmonization, based on the ComBat tool integrated in matRadiomics, both the harmonized and non-harmonized datasets were fed into a machine learning modeling pipeline comprising the following steps: (i) training-set/test-set splitting (80/20); (ii) feature selection by Kruskal-Wallis analysis and LASSO linear regression; (iii) model training; (iv) model validation and hyperparameter optimization; and (v) model testing. Model optimization consisted of 5-fold cross-validated Bayesian optimization, repeated ten times (inner loop). The whole pipeline was repeated 10 times (outer loop) with six different machine learning classification algorithms, and the stability of the feature selection was evaluated.
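The modeling pipeline described above can be sketched on synthetic data with scikit-learn: an 80/20 split, Kruskal-Wallis screening, LASSO-based selection, then a cross-validated classifier. Feature counts, the significance threshold, the LASSO `alpha`, and the random-forest classifier are illustrative assumptions, not the study's exact settings.

```python
import numpy as np
from scipy.stats import kruskal
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n, p = 300, 50                       # 300 lesions, 50 radiomics features (synthetic)
X = rng.normal(size=(n, p))
y = rng.integers(0, 4, size=n)       # four NSCLC subtypes
X[:, 0] += y                         # make one feature informative

# (i) training-set/test-set splitting (80/20)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# (ii-a) Kruskal-Wallis screening: keep features that differ across subtypes
keep = [j for j in range(p)
        if kruskal(*(X_tr[y_tr == c, j] for c in np.unique(y_tr))).pvalue < 0.05]

# (ii-b) LASSO linear regression: keep features with non-zero coefficients
lasso = Lasso(alpha=0.05).fit(X_tr[:, keep], y_tr)
selected = [keep[j] for j in np.flatnonzero(lasso.coef_)]

# (iii-iv) train and cross-validate a classifier on the selected subset
clf = RandomForestClassifier(random_state=0)
cv_acc = cross_val_score(clf, X_tr[:, selected], y_tr, cv=5).mean()

# (v) test on the held-out 20%
acc = clf.fit(X_tr[:, selected], y_tr).score(X_te[:, selected], y_te)
```

In the study this inner loop was replaced by Bayesian hyperparameter optimization and the whole procedure was repeated over ten outer splits to assess feature-selection stability.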
Results showed that batch effects were present even when the voxels were resampled to an isotropic form, and that feature harmonization correctly removed them, although model performance decreased. Only a low accuracy (61.41%) was reached when differentiating between the four subtypes, despite a high average area under the curve (AUC) of 0.831; the NOS subtype, however, was classified almost entirely correctly (true positive rate of approximately 90%). When only the SCC and ADC subtypes were considered, the accuracy increased to 77.25% with a high AUC (0.821), although harmonization decreased the accuracy to 58%. The features that contributed the most to model performance were those extracted from wavelet-decomposed and Laplacian of Gaussian (LoG)-filtered images, and they belonged to the texture feature class. In conclusion, we showed that our multicenter data were affected by batch effects, that these could significantly alter model performance, and that feature harmonization correctly removed them. Although wavelet features appeared to be the most informative, no definitive feature subset could be identified, since it changed depending on the training/testing split. Moreover, performance was influenced by the chosen dataset and by the machine learning methods, which could reach high accuracy in binary classification tasks but underperform in multiclass problems. It is, therefore, essential that the scientific community propose a more systematic radiomics approach, focusing on multicenter studies, with clear and solid guidelines to facilitate the translation of radiomics to clinical practice.
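The batch-effect removal at the heart of this study can be illustrated with a simplified location/scale harmonization in the spirit of ComBat (the real ComBat additionally applies empirical-Bayes shrinkage to the per-batch estimates). The data and batch labels below are synthetic, and the shift stands in for a scanner-dependent batch effect.

```python
import numpy as np

def harmonize(X, batch):
    """Remove per-batch mean/variance differences, feature by feature."""
    Xh = np.empty_like(X, dtype=float)
    grand_mean = X.mean(axis=0)
    grand_std = X.std(axis=0)
    for b in np.unique(batch):
        m = batch == b
        # standardize within the batch, then map back to the pooled scale
        Xh[m] = (X[m] - X[m].mean(axis=0)) / X[m].std(axis=0) * grand_std + grand_mean
    return Xh

rng = np.random.default_rng(1)
batch = np.repeat([0, 1], 100)          # two centers, 100 lesions each
X = rng.normal(size=(200, 5))
X[batch == 1] += 2.0                    # scanner-dependent shift (batch effect)

Xh = harmonize(X, batch)
# after harmonization, the two batches share the same per-feature mean
gap = np.abs(Xh[batch == 0].mean(0) - Xh[batch == 1].mean(0)).max()
```

As the results above note, removing batch effects this way can also remove some biologically discriminative signal, which is one plausible reason harmonization lowered accuracy.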
Deep D-Bar: Real-Time Electrical Impedance Tomography Imaging With Deep Neural Networks
The mathematical problem for electrical impedance tomography (EIT) is a highly nonlinear ill-posed inverse problem requiring carefully designed reconstruction procedures to ensure reliable image generation. D-bar methods are based on a rigorous mathematical analysis and provide robust direct reconstructions by using a low-pass filtering of the associated nonlinear Fourier data. As with low-pass filtering of linear Fourier data, using only low frequencies in the image recovery process results in blurred images lacking sharp features, such as clear organ boundaries. Convolutional neural networks (CNNs) provide a powerful framework for post-processing such convolved direct reconstructions. In this paper, we demonstrate that these CNN techniques lead to sharp and reliable reconstructions even for the highly nonlinear inverse problem of EIT. The network is trained on data sets of simulated examples and then applied to experimental data without the need to perform an additional transfer training. Results for absolute EIT images are presented using experimental EIT data from the ACT4 and KIT4 EIT systems.
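The blurring described above, for the linear analogue, can be reproduced in a few lines of NumPy: truncating an image's Fourier data to low frequencies (the counterpart of the D-bar low-pass regularization) smooths away sharp boundaries. The square "inclusion" and the cutoff radius are arbitrary illustrative choices.

```python
import numpy as np

n = 64
img = np.zeros((n, n))
img[20:44, 20:44] = 1.0              # sharp-edged inclusion

# keep only Fourier coefficients inside a low-frequency disc of radius 8
F = np.fft.fftshift(np.fft.fft2(img))
ky, kx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
F[kx**2 + ky**2 > 8**2] = 0
blurred = np.fft.ifft2(np.fft.ifftshift(F)).real

# the 0-to-1 jump at the boundary is now spread over several pixels
edge_jump = np.abs(np.diff(blurred[32])).max()
```

It is exactly this kind of smoothed direct reconstruction that the paper's CNN post-processing sharpens back toward crisp organ boundaries.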
Quantitative imaging in radiation oncology
Artificially intelligent eyes, built on machine and deep learning technologies, can enhance our ability to analyse patients' images. By revealing information invisible to the naked eye, we can build decision aids that help clinicians provide more effective treatment while reducing side effects. The power of these decision aids lies in their being based on the biologically unique properties of each patient's tumour, referred to as biomarkers. To fully translate this technology into the clinic, we need to overcome barriers related to the reliability of image-derived biomarkers, trust in AI algorithms, and privacy-related issues that hamper the validation of the biomarkers. This thesis developed methodologies to address these issues, defining a road map for the responsible use of quantitative imaging in the clinic as a decision support system for better patient care.
Tumour grading and discrimination based on class assignment and quantitative texture analysis techniques
Medical imaging represents the use of technology in biology to noninvasively reveal the internal structure of the organs of the human body. It improves the quality of the patient's life through a more precise and rapid diagnosis, with limited side-effects, leading to an effective overall treatment procedure. The main objective of this thesis is to propose novel tumour discrimination techniques that cover both the micro- and macro-scale textures encountered in computed tomography (CT) and digital microscopy (DM) modalities, respectively. Image texture can provide significant information on the (ab)normality of tissue, and this thesis expands this idea to tumour texture grading and classification. The fractal dimension (FD) as a texture measure was applied to contrast-enhanced CT lung tumour images with the aim of improving tumour grading accuracy over the conventional CT modality, and quantitative performance analysis showed an accuracy of 83.30% in distinguishing between advanced (aggressive) and early-stage (non-aggressive) malignant tumours. A different approach was adopted for subtype discrimination of brain tumour DM images via a set of statistical and model-based texture analysis algorithms. The combined Gaussian Markov random field and run-length matrix texture measures outperformed all other combinations, achieving an overall class-assignment classification accuracy of 92.50%. In addition, two new histopathological multiresolution approaches, based on applying the FD as the best-basis selection for the discrete wavelet packet transform, improved the accuracy to 91.25%, and to 95.00% when fused with the Gabor filters' energy output. Since noise is common in all medical imaging modalities, its impact on the applied texture measures was assessed as well.
The developed lung and brain texture analysis techniques can improve the physician's ability to detect and analyse pathologies, leading to a more reliable diagnosis and treatment of disease.
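The fractal dimension texture measure used above is typically estimated by box counting. Below is a minimal sketch on a synthetic binary image rather than real CT data; a filled square, being a genuinely two-dimensional set, should yield an FD close to 2.

```python
import numpy as np

def box_count_fd(mask):
    """Estimate the fractal dimension of a binary image by box counting."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n // 2
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        c = sum(mask[i:i + s, j:j + s].any()
                for i in range(0, n, s) for j in range(0, n, s))
        sizes.append(s)
        counts.append(c)
        s //= 2
    # FD is the slope of log(count) versus log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

square = np.zeros((64, 64), dtype=bool)
square[8:56, 8:56] = True
fd = box_count_fd(square)
```

On tumour images, the same estimate is applied to texture maps so that more irregular (aggressive) tissue yields a measurably different FD.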
Quantitative ultrasound texture analysis of fetal lungs to predict neonatal respiratory morbidity
Objective
To develop and evaluate the performance of a novel method for predicting neonatal respiratory morbidity based on quantitative analysis of the fetal lung by ultrasound.
Methods
More than 13,000 non-clinical images and 900 fetal lung images were used to develop a computerized method based on texture analysis and machine learning algorithms, trained to predict neonatal respiratory morbidity risk on fetal lung ultrasound images. The method, termed ‘quantitative ultrasound fetal lung maturity analysis’ (quantusFLM™), was then validated blindly in 144 neonates, delivered at 28 + 0 to 39 + 0 weeks' gestation. Lung ultrasound images in DICOM format were obtained within 48 h of delivery and the ability of the software to predict neonatal respiratory morbidity, defined as either respiratory distress syndrome or transient tachypnea of the newborn, was determined.
Results
Mean (SD) gestational age at delivery was 36 + 1 (3 + 3) weeks. Among the 144 neonates, there were 29 (20.1%) cases of neonatal respiratory morbidity. Quantitative texture analysis predicted neonatal respiratory morbidity with a sensitivity, specificity, positive predictive value and negative predictive value of 86.2%, 87.0%, 62.5% and 96.2%, respectively.
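The four reported metrics follow directly from the 2x2 confusion matrix. The counts below are reconstructed from the abstract's own figures (29 morbid cases out of 144, with the stated rates), rounded to whole numbers; they are an illustration of the arithmetic, not published raw data.

```python
# Reconstructed confusion-matrix counts (assumed from the reported rates):
# 29 positives -> TP = 25, FN = 4; 115 negatives -> TN = 100, FP = 15
TP, FN, TN, FP = 25, 4, 100, 15

sensitivity = TP / (TP + FN)   # 25/29   ~ 86.2%
specificity = TN / (TN + FP)   # 100/115 ~ 87.0%
ppv = TP / (TP + FP)           # 25/40   = 62.5%
npv = TN / (TN + FN)           # 100/104 ~ 96.2%
```

The high NPV is the clinically useful figure here: a negative quantusFLM™ result makes respiratory morbidity unlikely.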
Conclusions
Quantitative ultrasound fetal lung maturity analysis predicted neonatal respiratory morbidity with an accuracy comparable to that of current tests using amniotic fluid.
Towards a low complexity scheme for medical images in scalable video coding
Medical imaging has become of vital importance for diagnosing diseases and conducting noninvasive procedures. Advances in eHealth applications are challenged by the fact that Digital Imaging and Communications in Medicine (DICOM) requires high-resolution images, thereby increasing their size and the associated computational complexity, particularly when these images are communicated over IP and wireless networks. Therefore, medical research requires an efficient coding technique to achieve high-quality and low-complexity images with error-resilient features. In this study, we propose an improved coding scheme that exploits the content features of encoded videos with low complexity combined with flexible macroblock ordering for error resilience. We identify the homogeneous region in which the search for optimal macroblock modes is early terminated. For non-homogeneous regions, the integration of smaller blocks is employed only if the vector difference is less than the threshold. Results confirm that the proposed technique achieves a considerable performance improvement compared with existing schemes in terms of reducing the computational complexity without compromising the bit-rate and peak signal-to-noise ratio. © 2013 IEEE
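The early-termination idea above hinges on a cheap homogeneity test per macroblock. The sketch below uses a luma-variance threshold as that test; the threshold value and block size are illustrative assumptions, not the paper's tuned parameters, and the real encoder would skip the exhaustive mode search wherever the map is true.

```python
import numpy as np

THRESH = 25.0  # variance threshold for homogeneity (assumed value)

def classify_macroblocks(frame, mb=16):
    """Return a boolean map: True where a 16x16 macroblock is homogeneous."""
    h, w = frame.shape
    homog = np.zeros((h // mb, w // mb), dtype=bool)
    for i in range(h // mb):
        for j in range(w // mb):
            block = frame[i * mb:(i + 1) * mb, j * mb:(j + 1) * mb]
            # low variance -> homogeneous -> early-terminate the mode search
            homog[i, j] = block.var() < THRESH
    return homog

rng = np.random.default_rng(2)
frame = np.full((64, 64), 128.0)                  # flat background (homogeneous)
frame[:16, :16] += rng.normal(0, 20, (16, 16))    # one textured macroblock

homog = classify_macroblocks(frame)
```

Only the one textured block fails the test, so fifteen of the sixteen macroblocks would bypass the costly mode decision, which is the source of the complexity savings the paper reports.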
Standardised convolutional filtering for radiomics
The Image Biomarker Standardisation Initiative (IBSI) aims to improve
reproducibility of radiomics studies by standardising the computational process
of extracting image biomarkers (features) from images. We have previously
established reference values for 169 commonly used features, created a standard
radiomics image processing scheme, and developed reporting guidelines for
radiomic studies. However, several aspects are not standardised.
Here we present a preliminary version of a reference manual on the use of
convolutional image filters in radiomics. Filters, such as wavelets or
Laplacian of Gaussian filters, play an important part in emphasising specific
image characteristics such as edges and blobs. Features derived from filter
response maps have been found to be poorly reproducible. This reference manual
forms the basis of ongoing work on standardising convolutional filters in
radiomics, and will be updated as this work progresses.
Comment: 62 pages. For additional information see https://theibsi.github.io
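The Laplacian of Gaussian is one of the convolutional filters the manual covers; it responds strongly to blobs whose size matches the filter scale. A minimal sketch using `scipy.ndimage` on a synthetic blob (the sigma value is an illustrative choice):

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

n = 64
y, x = np.mgrid[:n, :n]
img = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 4.0 ** 2))  # Gaussian blob

# LoG filter response map; radiomics features would be computed from this map
log_response = gaussian_laplace(img, sigma=4.0)

# a matching-scale blob produces a strong negative response at its centre
centre = log_response[32, 32]
```

Standardizing exactly such filter parameters (scale, boundary handling, normalization) is what the reference manual targets, since features computed from response maps like this one have proved poorly reproducible across implementations.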