Ultrasound-guided Optical Techniques for Cancer Diagnosis: System and Algorithm Development
Worldwide, breast cancer is the most common cancer among women. In the United States alone, the American Cancer Society estimated 271,270 new breast cancer cases in 2019, with 42,260 lives lost to the disease. Ultrasound (US), mammography, and magnetic resonance imaging (MRI) are regularly used for breast cancer diagnosis and therapy monitoring. However, they sometimes fail to diagnose breast cancer effectively. These shortcomings have motivated researchers to explore new modalities. One of these, diffuse optical tomography (DOT), utilizes near-infrared (NIR) light to reveal the optical properties of tissue. NIR-based DOT images the contrast between a suspected lesion’s location and the background tissue, caused by the higher NIR absorption of the hemoglobin that characterizes tumors. The limitation imposed by strong light scattering inside tissue is mitigated by using co-registered ultrasound images to locate the tumor.
This thesis focuses on developing a compact, low-cost ultrasound-guided diffuse optical tomography imaging system and on improving optical image reconstruction by extracting the tumor’s location and size from co-registered ultrasound images. Several electronic components have been redesigned and optimized to save space and cost and to improve the user experience. In terms of software and algorithm development, manual extraction of tumor information from ultrasound images has been replaced by a semi-automated ultrasound image segmentation algorithm that reduces the optical image reconstruction time and operator dependency. This system and algorithm have been validated with phantom and clinical data and have demonstrated their efficacy. An ongoing clinical trial will continue to gather more patient data to improve the robustness of the imaging algorithm.
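The semi-automated step described above reduces to extracting a lesion's location and size from the ultrasound image so the optical reconstruction can be constrained. A minimal sketch of that idea (not the thesis's actual algorithm; the thresholding, connectivity rule, and synthetic image are illustrative assumptions) is:

```python
# Hypothetical sketch: threshold a synthetic ultrasound image, keep the
# largest connected hypoechoic (dark) region, and report its centroid and
# extent as the lesion prior for reconstruction.
import numpy as np

def largest_dark_region(img, thresh):
    """Return a boolean mask of the largest 4-connected region with img < thresh."""
    h, w = img.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for i in range(h):
        for j in range(w):
            if img[i, j] < thresh and not seen[i, j]:
                # flood fill from this seed
                stack, comp = [(i, j)], []
                seen[i, j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           img[ny, nx] < thresh and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    mask = np.zeros((h, w), dtype=bool)
    for y, x in best:
        mask[y, x] = True
    return mask

# Synthetic "US image": bright speckle background with a dark lesion
rng = np.random.default_rng(0)
img = rng.uniform(0.5, 1.0, (64, 64))
img[20:30, 35:50] = rng.uniform(0.0, 0.2, (10, 15))  # hypoechoic lesion
mask = largest_dark_region(img, thresh=0.3)
ys, xs = np.nonzero(mask)
center = (ys.mean(), xs.mean())                                    # lesion location
size = (int(ys.max() - ys.min() + 1), int(xs.max() - xs.min() + 1))  # lesion extent
```

In practice clinical segmentation is far more involved (speckle, shadowing, operator-placed seeds), but the output of interest is the same: a location and size prior for the optical solver.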
Another part of this research focuses on ovarian cancer diagnosis. Ovarian cancer is the deadliest of all gynecological cancers, with a less than 50% five-year survival rate. This cancer can evolve without noticeable symptoms, which makes it difficult to diagnose at an early stage. Although ultrasound-guided photoacoustic tomography (PAT) has demonstrated potential for early detection of ovarian cancer, clinical studies have been very limited due to the lack of robust PAT systems.
In this research, we have customized a commercial ultrasound system to obtain real-time co-registered PAT and US images. This system was validated with several phantom studies before use in a clinical trial. PAT and US raw data from 30 ovarian cancer patients were used to extract spectral and statistical features for training and testing classifiers for automatic diagnosis. For some challenging cases, the region-of-interest selection was improved by reconstructing co-registered Doppler images. This study will be continued in order to obtain quantitative tissue properties using US-guided PAT.
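The abstract mentions spectral and statistical features extracted from raw PAT/US data. As an illustrative sketch only (the specific features, sampling rate, and signal model below are assumptions, not the authors' pipeline), one might compute a few first-order statistics plus the slope of the log power spectrum from a single photoacoustic A-line:

```python
# Illustrative feature extraction from one photoacoustic A-line.
import numpy as np

def beam_features(a_line, fs):
    """Return [mean, std, skewness, spectral slope] for one A-line."""
    x = np.asarray(a_line, dtype=float)
    mu, sigma = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
    # log power spectrum over positive frequencies; its slope is a common
    # summary of frequency content in tissue characterization
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    keep = freqs > 0
    slope = np.polyfit(freqs[keep], np.log(spec[keep] + 1e-12), 1)[0]
    return np.array([mu, sigma, skew, slope])

fs = 40e6  # assumed 40 MHz sampling rate
t = np.arange(1024) / fs
a_line = np.sin(2 * np.pi * 5e6 * t) * np.exp(-t * 2e6)  # toy decaying pulse
features = beam_features(a_line, fs)  # 4 features per A-line
```

Feature vectors like this, pooled over a region of interest, are what would be fed to the diagnostic classifiers.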
Multi Modality Brain Mapping System (MBMS) Using Artificial Intelligence and Pattern Recognition
A Multimodality Brain Mapping System (MBMS), comprising one or more scopes (e.g., microscopes or endoscopes) coupled to one or more processors, wherein the one or more processors obtain training data from one or more first images and/or first data, wherein one or more abnormal regions and one or more normal regions are identified; receive a second image captured by one or more of the scopes at a later time than the one or more first images and/or first data and/or captured using a different imaging technique; and generate, using machine learning trained using the training data, one or more viewable indicators identifying one or more abnormalities in the second image, wherein the one or more viewable indicators are generated in real time as the second image is formed. One or more of the scopes display the one or more viewable indicators on the second image.
Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases
Cardiothoracic and pulmonary diseases are a significant cause of mortality and morbidity worldwide. The COVID-19 pandemic has highlighted the lack of access to clinical care, the overburdened medical system, and the potential of artificial intelligence (AI) in improving medicine. There are a variety of diseases affecting the cardiopulmonary system, including lung cancers, heart disease, and tuberculosis (TB), in addition to COVID-19-related diseases. Screening, diagnosis, and management of cardiopulmonary diseases have become difficult owing to the limited availability of diagnostic tools and experts, particularly in resource-limited regions. Early screening, accurate diagnosis, and staging of these diseases could play a crucial role in treatment and care, and potentially aid in reducing mortality. Radiographic imaging methods such as computed tomography (CT), chest X-rays (CXRs), and echo ultrasound (US) are widely used in screening and diagnosis. Research on image-based AI and machine learning (ML) methods can help in rapid assessment, serve as surrogates for expert assessment, and reduce variability in human performance. In this Special Issue, “Artificial Intelligence in Image-Based Screening, Diagnostics, and Clinical Care of Cardiopulmonary Diseases”, we have highlighted exemplary primary research studies and literature reviews focusing on novel AI/ML methods and their application in image-based screening, diagnosis, and clinical management of cardiopulmonary diseases. We hope that these articles will help establish the advancements in AI.
Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels
BACKGROUND: Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to the variety of tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. METHODS: We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including histograms of a texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. These features are fed into a random forests (RF) classifier to classify each supervoxel as tumour core, oedema, or healthy brain tissue. RESULTS: The method is evaluated on two datasets: 1) our clinical dataset of 11 multimodal images of patients and 2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. CONCLUSION: The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can largely increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
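The per-supervoxel feature step can be sketched concretely. A minimal illustration of the first-order intensity statistics (the paper additionally uses Gabor-based texton histograms before the random forest; the function and toy label map below are assumptions for illustration):

```python
# Compute first-order intensity statistics for each supervoxel in a volume.
import numpy as np

def supervoxel_stats(volume, labels):
    """Return {supervoxel label: [mean, std, min, max]} over the volume."""
    feats = {}
    for lab in np.unique(labels):
        vals = volume[labels == lab]
        feats[lab] = [vals.mean(), vals.std(), vals.min(), vals.max()]
    return feats

rng = np.random.default_rng(1)
vol = rng.normal(size=(8, 8, 8))                 # toy MRI volume
labs = np.arange(512).reshape(8, 8, 8) // 64     # 8 toy "supervoxels"
feats = supervoxel_stats(vol, labs)              # one 4-vector per supervoxel
```

In the paper's pipeline, each supervoxel's feature vector (statistics plus texton histograms, concatenated across modalities) is the unit classified by the random forest as core, oedema, or healthy tissue.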
Brain image clustering by wavelet energy and CBSSO optimization algorithm
Early diagnosis of brain abnormalities is significantly important for saving social and hospital resources. Wavelet energy is known as an effective feature-extraction method that has proven efficient in many applications. This paper suggests a new method based on wavelet energy to automatically classify magnetic resonance imaging (MRI) brain images into two groups (normal and abnormal), utilizing support vector machine (SVM) classification based on chaotic binary shark smell optimization (CBSSO) to optimize the SVM weights.
The results of the suggested CBSSO-based KSVM compare favorably to several other methods in terms of sensitivity and accuracy. The proposed CAD system can additionally be utilized to categorize images with various pathological conditions, types, and illness modes.
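The wavelet-energy feature itself is simple to illustrate. A hedged sketch (single-level 2D Haar transform written directly in NumPy; the paper's exact wavelet, depth, and normalization are not specified here, so treat these choices as assumptions):

```python
# One-level 2D Haar decomposition; the energy of each sub-band is the feature.
import numpy as np

def haar_energy(img):
    """Return energies of the LL, LH, HL, HH sub-bands of a 2D image."""
    a = np.asarray(img, dtype=float)
    # pairwise averages/differences along columns, then rows
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return np.array([(b ** 2).sum() for b in (ll, lh, hl, hh)])

img = np.outer(np.hanning(64), np.hanning(64))  # smooth synthetic "slice"
energies = haar_energy(img)  # for a smooth image, LL dominates
```

A feature vector of sub-band energies like this is what would be passed to the (CBSSO-weighted) SVM for the normal/abnormal decision.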
Deep learning assisted MRI guided attenuation correction in PET
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Positron emission tomography (PET) is a unique imaging modality that provides physiological and functional details of tissue at the molecular level. However, the acquired PET images suffer from limitations such as attenuation. PET attenuation correction is an essential step in realizing the full potential of PET quantification. With the wide use of hybrid PET/MR scanners, magnetic resonance (MR) images are used to address the problem of PET attenuation correction. MR image segmentation is a simple and robust approach to creating pseudo computed tomography (CT) images, which are used to generate attenuation-coefficient maps to correct the PET attenuation. Recently, deep learning has been proposed and used as a promising technique for efficiently segmenting MR and other medical images.
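The segmentation-based correction described above amounts to assigning each tissue class a linear attenuation coefficient. A simplified sketch (the coefficients below are typical literature values at 511 keV in cm⁻¹, not values from this thesis, and the three-class label scheme is an assumption):

```python
# Map an MR-derived tissue label volume to an attenuation-coefficient map.
import numpy as np

MU_511 = {
    0: 0.0,     # air / background
    1: 0.0975,  # soft tissue (approximately water)
    2: 0.151,   # cortical bone
}

def mu_map(labels):
    """Turn an integer label array into a 511 keV attenuation map (cm^-1)."""
    out = np.zeros(labels.shape, dtype=float)
    for lab, mu in MU_511.items():
        out[labels == lab] = mu
    return out

labels = np.zeros((4, 4), dtype=int)
labels[1:3, 1:3] = 1   # soft-tissue region
labels[0, 0] = 2       # a bone voxel
mu = mu_map(labels)    # pseudo-CT-derived attenuation map
```

Because bone and air are both dark on conventional MR sequences yet have very different coefficients, accurate bone segmentation (the focus of this thesis) is the step that most affects the resulting map.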
In this research work, deep-learning-guided segmentation approaches are proposed to enhance bone-class segmentation of MR brain images in order to generate accurate pseudo-CT images. The first approach combines handcrafted features with deep-learning features to enrich the feature set. Multiresolution analysis techniques that generate multiscale and multidirectional coefficients of an image, such as the contourlet and shearlet transforms, are applied and combined with deep convolutional neural network (CNN) features. Different experiments were conducted to investigate the number of selected coefficients and the insertion location of the handcrafted features.
The second approach aims at reducing the segmentation algorithm’s complexity while maintaining its performance. An attention-based convolutional encoder-decoder network is proposed to adaptively recalibrate the deep network’s features. This attention-based network consists of two different squeeze-and-excitation blocks that excite the features spatially and channel-wise. The two blocks are combined sequentially to decrease the number of network parameters and reduce the model complexity.
The third approach focuses on the application of transfer learning across different MR sequences, such as T1-weighted (T1-w) and T2-weighted (T2-w) images. A model pretrained on T1-w MR sequences is fine-tuned to perform the segmentation of T2-w images. Multiple fine-tuning approaches and experiments were conducted to identify the mechanism that builds an efficient segmentation model for both T1-w and T2-w segmentation.
Clinical datasets of fifty patients with different conditions and diagnoses were used to carry out an objective evaluation of the segmentation performance of the three proposed methods. The first and second approaches were validated against other studies in the literature that applied deep-network-based segmentation to MR-based attenuation correction for PET. The proposed methods improved bone segmentation, increasing the Dice similarity coefficient (DSC) from 0.6179 to 0.6567 using an ensemble of CNNs, an improvement of 6.3%. The proposed excitation-based CNN reduced the model complexity by cutting the number of trainable parameters by more than 46%, so that fewer computing resources are required to train the model. The proposed hybrid transfer-learning method proved superior for building a multi-sequence (T1-w and T2-w) segmentation approach compared to the other transfer-learning methods applied, especially for the bone class, where the DSC increased from 0.3841 to 0.5393. Moreover, the hybrid transfer-learning approach requires less computing time than transfer learning with open or conservative fine-tuning.
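The gains quoted above are in terms of the Dice similarity coefficient. For reference, a minimal implementation over binary masks, together with the improvement-percentage arithmetic from the abstract (the toy masks are illustrative):

```python
# Dice similarity coefficient between two binary masks.
import numpy as np

def dice(a, b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks a and b."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((8, 8), bool);  ref[3:7, 3:7] = True   # 16 voxels, 9 overlap
score = dice(pred, ref)  # 2*9 / (16 + 16) = 0.5625

# Relative improvement quoted in the abstract: 0.6179 -> 0.6567
improvement = (0.6567 - 0.6179) / 0.6179 * 100  # about 6.3%
```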
Developing neuroimaging biomarkers of blast-induced traumatic brain injury
In the past two decades, awareness of the physical and emotional effects and sequelae of traumatic brain injuries (TBI) has grown considerably, especially in the case of soldiers returning from their deployment in Iraq and Afghanistan after sustaining blast-induced TBI (bTBI). While an understanding of bTBI and how it compares to civilian non-blast TBI is essential for proper prevention, diagnosis and treatment, it is currently limited, especially in human in-vivo studies.
Developing neuroimaging biomarkers of bTBI is key to understanding the primary blast injury mechanism. I therefore investigated, using advanced neuroimaging techniques, the patterns of white matter and grey matter injuries that are specific to bTBI and aren’t commonly seen in civilians who suffered head trauma. Because of significant methodological issues and limitations, I developed and tested a new pipeline capable of running the analysis of white matter abnormalities in soldiers, called subject-specific diffusion segmentation (SSDS). I also used standard methodologies to investigate changes at the level of grey matter structures, and more particularly the limbic system. Finally, I trained a machine learning algorithm that builds decision trees with the aim of classifying between patients with TBI and controls, and between different TBI mechanisms, as an example of what could potentially be applied in the context of bTBI.
I found three main neuroimaging biomarkers specific to bTBI. The first is a microstructural white matter abnormality at the level of the middle cerebellar peduncle, characterized by a decrease in diffusivity measures. The second is also a decrease in diffusivity properties, at the level of the white matter boundary, and the third is a loss of hippocampal volume, with no association to post-traumatic stress disorder. Finally, I demonstrated that SSDS can be used in tandem with a machine learning algorithm for potential diagnosis of TBI with high accuracy.
These findings provide mechanistic insights into bTBI and the effect of primary blast injuries on the human brain. This work also identifies important neuroimaging biomarkers that might facilitate prevention and diagnosis in soldiers who suffered from bTBI.
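As a stand-in for the decision-tree classification step described above, here is a hedged sketch of its simplest possible form: a single-feature decision stump trained on synthetic diffusivity values. The feature distributions, labels, and stump-fitting procedure are all illustrative assumptions, not the thesis's data or algorithm:

```python
# Fit a one-feature decision stump separating TBI patients from controls.
import numpy as np

def fit_stump(x, y):
    """Pick the threshold on scalar feature x that maximizes accuracy on y."""
    best_t, best_acc = None, -1.0
    for t in np.unique(x):
        # try both orientations of the rule (label if below / above t)
        acc = max(((x <= t) == y).mean(), ((x > t) == y).mean())
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

rng = np.random.default_rng(2)
# Synthetic groups: lower diffusivity in patients, per the abstract's findings
patients = rng.normal(0.9, 0.05, 50)
controls = rng.normal(1.1, 0.05, 50)
x = np.concatenate([patients, controls])
y = np.concatenate([np.ones(50, bool), np.zeros(50, bool)])  # True = TBI
t, acc = fit_stump(x, y)  # well-separated groups give high accuracy
```

A full decision tree recursively applies this split search over many features; the point of the sketch is only that a biomarker with group-level separation, such as the diffusivity decreases reported here, can drive a simple interpretable classifier.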