82 research outputs found
Brain Tumor Classification, Segmentation, and Detection using Deep Learning - A Review
In 1965, V. Vapnik proposed support-vector methods, and in 1971 Kimeldorf presented a technique for constructing a kernel space based on support vectors. Support Vector Machine (SVM) techniques were formally introduced in the 1990s by V. Vapnik in the field of statistical learning. Since then, SVMs have seen extensive use in pattern recognition, natural language processing, image processing, and other areas. By converting a non-linear sample space into a linear one via a kernel approach, the algorithm's complexity is reduced. Image classification is a well-known problem in image processing; its main objective is to predict the category of an input image from its features. There are several different classifiers, including Artificial Neural Networks, Support Vector Machines, Random Forests, Decision Forests, k-Nearest Neighbors (k-NN), and Adaptive Boosting. SVM is one of the best techniques for categorizing any image or pattern. Magnetic resonance imaging (MRI) is a common non-invasive technique used in the medical sector for the analysis, diagnosis, and treatment of brain tissues. When a brain tumor is discovered early, the patient's life can be saved by providing appropriate care. Accurately identifying tumors in MRI slices is difficult and requires meticulous work.
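The kernel idea mentioned above can be sketched in a few lines. This is an illustrative example, not code from any of the reviewed works: a kernel perceptron (a simpler kernel method than SVM, used here only to keep the sketch self-contained) with a Gaussian kernel learns the XOR labels, which no linear classifier in the raw 2-D input space can reproduce.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """Gaussian (RBF) kernel: an implicit mapping to a high-dimensional space."""
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_kernel_perceptron(X, y, epochs=20, gamma=1.0):
    """Learn dual coefficients alpha; prediction uses only kernel values."""
    n = len(X)
    alpha = np.zeros(n)
    K = np.array([[rbf_kernel(X[i], X[j], gamma) for j in range(n)]
                  for i in range(n)])
    for _ in range(epochs):
        for i in range(n):
            # decision function evaluated in the kernel-induced feature space
            f = np.sign(np.sum(alpha * y * K[:, i])) or 1.0
            if f != y[i]:          # misclassified -> increase its dual weight
                alpha[i] += 1.0
    return alpha, K

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([-1, 1, 1, -1])       # XOR labels: not linearly separable in 2-D

alpha, K = train_kernel_perceptron(X, y)
pred = np.sign(K.T @ (alpha * y))
print(pred)                        # agrees with y once training has converged
```

The point of the sketch is that the learner never touches the feature space explicitly; it only evaluates kernel values between sample pairs, which is the trick that lets SVMs handle non-linear sample spaces.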
Automated brain tumour detection and segmentation using superpixel-based extremely randomized trees in FLAIR MRI
PURPOSE: We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). METHODS: The method is based on a superpixel technique and classification of each superpixel. A number of novel image features including intensity-based, Gabor textons, fractal analysis and curvatures are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel into tumour and non-tumour. RESULTS: The proposed method is evaluated on two datasets: (1) our own clinical dataset: 19 MRI FLAIR images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset: 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, while for the BRATS dataset the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. CONCLUSIONS: This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
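The region-wise pipeline above can be sketched compactly. This is a minimal illustration under stated assumptions, not the paper's implementation: square patches stand in for true superpixels, a single (mean, std) intensity pair stands in for the intensity/Gabor-texton/fractal/curvature feature set, and a fixed intensity threshold stands in for the trained ERT classifier.

```python
import numpy as np

def block_superpixels(image, size):
    """Partition the image into square 'superpixels' of side `size`."""
    h, w = image.shape
    labels = np.zeros((h, w), dtype=int)
    lbl = 0
    for r in range(0, h, size):
        for c in range(0, w, size):
            labels[r:r + size, c:c + size] = lbl
            lbl += 1
    return labels

def superpixel_features(image, labels):
    """Per-superpixel feature vector: (mean, std) of intensity."""
    feats = []
    for lbl in np.unique(labels):
        vals = image[labels == lbl]
        feats.append((vals.mean(), vals.std()))
    return np.array(feats)

# Synthetic "FLAIR slice": dark background with one bright square lesion.
rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.02, (32, 32))
img[8:16, 8:16] += 0.6                     # hyperintense (lesion-like) region

labels = block_superpixels(img, 8)         # 16 region labels
feats = superpixel_features(img, labels)   # one feature row per region

# Stand-in classifier: flag superpixels whose mean intensity is high.
tumour_mask = feats[:, 0] > 0.5
print(int(tumour_mask.sum()), "of", len(feats), "superpixels flagged")
```

In the actual method, each superpixel's richer feature vector would be fed to the trained ERT (or SVM) classifier instead of the threshold, but the region-then-classify structure is the same.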
Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis
BACKGROUND: Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation, using MRI. METHODS: A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random effects model. Sensitivity analysis was performed for externally validated studies. RESULTS: Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate compared to TML; 0.018 (95% CI, 0.011 to 0.028) and 0.048 (0.032 to 0.072) (P < .001), respectively. In segmentation, DL had a higher dice similarity coefficient (DSC), particularly for tumor core (TC); 0.80 (0.77 to 0.83) and 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had “good” (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated; 0.78 (0.69 to 0.86) and 0.64 (0.53 to 0.74) (P = .014), respectively. Only 30% of studies reported external validation. CONCLUSIONS: The comparable performance of automated to manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models
Quantification of tumour heterogeneity in MRI
Cancer is the leading cause of death and touches us all, either directly or indirectly. It is estimated that the number of newly diagnosed cases in the Netherlands will increase to 123,000 by the year 2020. General Dutch statistics are similar to those in the UK, i.e. over the last ten years the age-standardised incidence rate has stabilised at around 355 females and 415 males per 100,000. Figure 1 shows the cancer incidence per gender. In the UK, the lifetime risk of cancer has risen to more than one in three and depends on many factors, including age, lifestyle and genetic makeup.
Supervised learning-based multimodal MRI brain image analysis
Medical imaging plays an important role in clinical procedures related to cancer, such as diagnosis, treatment selection, and therapy response evaluation. Magnetic resonance imaging (MRI) is one of the most popular acquisition modalities; it is widely used in brain tumour analysis and can be acquired with different acquisition protocols, e.g. conventional and advanced. Automated segmentation of brain tumours in MR images is a difficult task due to their high variation in size, shape and appearance. Although many studies have been conducted, it remains a challenging task, and improving the accuracy of tumour segmentation is an ongoing research area. The aim of this thesis is to develop a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from multimodal MRI images.
In this thesis, firstly, the whole brain tumour is segmented from fluid-attenuated inversion recovery (FLAIR) MRI, which is commonly acquired in clinics. The segmentation is achieved using region-wise classification, in which regions are derived from superpixels. Several image features including intensity-based, Gabor textons, fractal analysis and curvatures are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomised trees (ERT) classifier labels each superpixel as tumour or non-tumour. Secondly, the method is extended to 3D supervoxel-based learning for segmentation and classification of tumour tissue subtypes in multimodal MRI brain images. Supervoxels are generated using the information across the multimodal MRI dataset. This is followed by a random forests (RF) classifier that classifies each supervoxel into tumour core, oedema or healthy brain tissue. The information from the advanced protocols of diffusion tensor imaging (DTI), i.e. the isotropic (p) and anisotropic (q) components, is also incorporated into the conventional MRI to improve segmentation accuracy. Thirdly, to further improve the segmentation of tumour tissue subtypes, machine-learned features from a fully convolutional neural network (FCN) are investigated and combined with hand-designed texton features to encode global information and local dependencies into the feature representation. The score map with pixel-wise predictions, learned from the multimodal MRI training dataset using the FCN, is used as a feature map. The machine-learned features, along with the hand-designed texton features, are then applied to random forests to classify each MRI image voxel into normal brain tissue and the different parts of the tumour.
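The feature-fusion step in the third stage can be sketched as follows. This is an illustrative sketch, not the thesis code: random arrays stand in for the FCN score map and the texton histograms, and the resulting fused matrix is what would be passed, one row per voxel, to the random forests classifier.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200
fcn_scores = rng.random((n_voxels, 4))     # stand-in: FCN pixel-wise class scores
texton_hist = rng.random((n_voxels, 16))   # stand-in: texton filter-bank responses

# Fusion: one row per voxel, columns = [machine-learned | hand-designed].
# The combined matrix is the per-voxel input to the RF classifier.
fused = np.concatenate([fcn_scores, texton_hist], axis=1)
print(fused.shape)                          # (200, 20)
```

Concatenation is the simplest fusion choice; the key design point is that the deep network's learned features and the shallow, hand-designed textons enter the classifier side by side rather than replacing one another.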
The methods are evaluated on two datasets: 1) clinical dataset, and 2) publicly available Multimodal Brain Tumour Image Segmentation Benchmark (BRATS) 2013 and 2017 dataset. The experimental results demonstrate the high detection and segmentation performance of the
single modal (FLAIR) method. The average detection sensitivity, balanced error rate (BER) and Dice overlap measure for the segmented tumour against the ground truth for the clinical data are 89.48%, 6% and 0.91, respectively; whilst, for the BRATS dataset, the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. The corresponding results for the tumour (including tumour core and oedema) in the case of the multimodal MRI method are 86%, 7% and 0.84 for the clinical dataset, and 96%, 2% and 0.89 for the BRATS 2013 dataset. The results of the FCN-based method show that applying the RF classifier to multimodal MRI images using machine-learned features based on the FCN and hand-designed features based on textons provides promising segmentations. The Dice overlap measure for automatic brain tumour segmentation against the ground truth for the BRATS 2013 dataset is 0.88, 0.80 and 0.73 for complete tumour, core and enhancing tumour, respectively, which is competitive with state-of-the-art methods. The corresponding results for the BRATS 2017 dataset are 0.86, 0.78 and 0.66, respectively.
The methods demonstrate promising results in the segmentation of brain tumours, providing a close match to expert delineation across all grades of glioma and leading to a faster, more reproducible method of brain tumour detection and delineation to aid patient management. In the experiments, textons demonstrated their advantage of providing significant information to distinguish various patterns in both 2D and 3D spaces. The segmentation accuracy was also largely increased by fusing information from multimodal MRI images. Moreover, a unified framework is presented which complementarily integrates hand-designed features with machine-learned features to produce a more accurate segmentation. The hand-designed features from a shallow network (with designable filters) encode prior knowledge and context, while the machine-learned features from a deep network (with trainable filters) learn intrinsic features. Combining global and local information using these two types of networks improves the segmentation accuracy.
A Survey on Evolutionary Computation for Computer Vision and Image Analysis: Past, Present, and Future Trends
Computer vision (CV) is a large and important field in artificial intelligence covering a wide range of applications. Image analysis is a major task in CV, aiming to extract, analyse and understand the visual content of images. However, image-related tasks are very challenging due to many factors, e.g., high variations across images, high dimensionality, domain-expertise requirements, and image distortions. Evolutionary computation (EC) approaches have been widely used for image analysis with significant achievements. However, there is no comprehensive survey of existing EC approaches to image analysis. To fill this gap, this paper provides a comprehensive survey covering all essential EC approaches to important image analysis tasks, including edge detection, image segmentation, image feature analysis, image classification, object detection, and others. This survey aims to provide a better understanding of evolutionary computer vision (ECV) by discussing the contributions of different approaches and exploring how and why EC is used for CV and image analysis. The applications, challenges, issues, and trends associated with this research field are also discussed and summarised to provide further guidelines and opportunities for future research.
Computer-aided detection and diagnosis of breast cancer in 2D and 3D medical imaging through multifractal analysis
This Thesis describes the research work performed in the scope of a doctoral research program and presents its conclusions and contributions. The research activities were carried out in industry with the Siemens S.A. Healthcare Sector, in integration with a research team. The Siemens S.A. Healthcare Sector is one of the world's biggest suppliers of products, services and complete solutions in the medical sector. The company offers a wide selection of diagnostic and therapeutic equipment and information systems. Siemens products for medical imaging and in vivo diagnostics include ultrasound, computed tomography, mammography, digital breast tomosynthesis, magnetic resonance, angiography and coronary angiography equipment, nuclear imaging, and many others. Siemens has vast experience in healthcare, and at the beginning of this project it was strategically interested in solutions to improve the detection of breast cancer in order to increase its competitiveness in the sector. The company owns several patents related to self-similarity analysis, which formed the background of this Thesis. Furthermore, Siemens intended to explore commercially the computer-aided automatic detection and diagnosis field for portfolio integration. The knowledge acquired by the University of Beira Interior in this area, together with this Thesis, will therefore allow Siemens to apply the most recent scientific progress to the detection of breast cancer, and it is foreseeable that together we can develop a new technology with high potential.
The project resulted in the submission of two invention disclosures for evaluation at Siemens A.G., two articles published in peer-reviewed journals indexed in the ISI Science Citation Index, two further articles submitted to peer-reviewed journals, and several international conference papers. This work on computer-aided diagnosis of the breast led to innovative software and novel research and development processes, for which the project received the Siemens Innovation Award in 2012. It was very rewarding to carry out such a technological and innovative project in a socially sensitive area as breast cancer.
In breast cancer, early detection and correct diagnosis are of the utmost importance for prescribing effective and efficient therapy that can increase the survival rate of the disease. Multifractal theory was initially introduced in the context of signal analysis, and its usefulness has been demonstrated in describing the physiological behaviour of bio-signals and even in the detection and prediction of pathologies. In this Thesis, three multifractal methods were extended to two-dimensional (2D) images and compared for the detection of microcalcifications in mammograms. One of these methods was also adapted for the classification of breast masses, in 2D transversal slices obtained by breast magnetic resonance (MR) imaging, into groups of probably benign masses and masses suspicious of malignancy. A new multifractal analysis method using three-dimensional (3D) lacunarity was proposed for the classification of breast masses in volumetric 3D breast MR images. Multifractal analysis revealed differences in the complexity underlying the locations of microcalcifications relative to normal tissues, enabling good accuracy in their detection in mammograms. Additionally, tissue features extracted by multifractal analysis made it possible to identify the cases typically recommended for biopsy in 2D breast MR images. 3D multifractal analysis was effective in classifying benign and malignant breast lesions in 3D breast MR images, and was more accurate for this classification than the 2D method or the standard method of tumour kinetic contrast analysis. In conclusion, multifractal analysis provides useful information for computer-aided detection in mammography and computer-aided diagnosis in 2D and 3D breast MR images, and has the potential to complement the interpretation of radiologists.
Advanced Computational Methods for Oncological Image Analysis
Cancer is the second most common cause of death worldwide and encompasses highly variable clinical and biological scenarios. Some of the current clinical challenges are (i) early diagnosis of the disease and (ii) precision medicine, which allows for treatments targeted to specific clinical cases. The ultimate goal is to optimize the clinical workflow by combining accurate diagnosis with the most suitable therapies. Toward this, large-scale machine learning research can define associations among clinical, imaging, and multi-omics studies, making it possible to provide reliable diagnostic and prognostic biomarkers for precision oncology. Such reliable computer-assisted methods (i.e., artificial intelligence) together with clinicians’ unique knowledge can be used to properly handle typical issues in evaluation/quantification procedures (i.e., operator dependence and time-consuming tasks). These technical advances can significantly improve result repeatability in disease diagnosis and guide toward appropriate cancer care. Indeed, the need to apply machine learning and computational intelligence techniques has steadily increased to effectively perform image processing operations—such as segmentation, co-registration, classification, and dimensionality reduction—and multi-omics data integration.