
    3D Multimodal Brain Tumor Segmentation and Grading Scheme based on Machine, Deep, and Transfer Learning Approaches

    Glioma is one of the most common brain tumors. Detecting and grading glioma at an early stage is critical for increasing patients' survival rates. Computer-aided detection (CADe) and computer-aided diagnosis (CADx) systems are essential tools that provide more accurate and systematic results and speed up clinicians' decision-making. In this paper, we introduce a method combining variations of machine, deep, and transfer learning approaches for effective brain tumor (i.e., glioma) segmentation and grading on the multimodal brain tumor segmentation (BraTS) 2020 dataset. We apply the popular and efficient 3D U-Net architecture for the brain tumor segmentation phase. For the tumor grading phase, we evaluate 23 combinations of deep feature sets and machine learning/fine-tuned deep learning CNN models based on Xception, IncResNetv2, and EfficientNet, drawn from 4 feature sets and 6 learning models. The experimental results demonstrate that the proposed method achieves a 99.5% accuracy rate for slice-based tumor grading on the BraTS 2020 dataset, and its performance is competitive with similar recent works.
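
    The grading phase described above pairs feature sets with learning models in a grid. The sketch below is purely illustrative: the synthetic features, toy labels, and nearest-centroid classifier are stand-ins for the paper's actual deep feature sets and ML/fine-tuned CNN models, not the authors' implementation.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for slice-level deep feature sets extracted by
# Xception, IncResNetv2, EfficientNet, plus a fused set (4 feature sets).
feature_sets = {name: rng.normal(size=(40, 16)) for name in
                ("xception", "incresnetv2", "efficientnet", "fused")}
labels = np.arange(40) % 2  # toy binary grades (e.g., HGG vs. LGG)

def nearest_centroid(train_X, train_y, test_X):
    """Toy classifier standing in for the paper's 6 learning models."""
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None], axis=2)
    return dists.argmin(axis=1)

models = {f"model_{i}": nearest_centroid for i in range(6)}

# Evaluate every feature-set x model pairing (4 x 6 = 24 here; the paper
# reports 23 combinations, so presumably one pairing is excluded).
results = {}
for (fname, X), (mname, fit) in itertools.product(feature_sets.items(),
                                                  models.items()):
    pred = fit(X[:30], labels[:30], X[30:])
    results[(fname, mname)] = float((pred == labels[30:]).mean())

best = max(results, key=results.get)
```

    In the paper's setting, each cell of this grid would hold a grading accuracy on BraTS 2020 slices, and the best-performing combination would be reported.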

    Image Segmentation and Classification for Medical Image Processing

    Segmentation and labeling remain the weakest steps in many medical vision applications. This paper illustrates an approach based on the watershed transform that is designed to solve typical problems encountered in various applications and is controllable through adaptation of its parameters. Two applications are presented: lung cancer detection, which segments cancer regions from CT images, and brain tumor detection from MRI images, both using a watershed algorithm for image segmentation. Various GLCM features, along with some statistical features, are used for classification with a neural network and a Support Vector Machine (SVM). We describe the principles of the algorithms and illustrate their generic properties by discussing the results of both applications on 2D MRI images of brain tumors and CT images of lung cancer.
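
    The GLCM features mentioned above reduce to counting co-occurring gray-level pairs at a fixed pixel offset and then computing statistics of the normalized counts. A minimal NumPy sketch (the 8-level quantization and the chosen offset are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray Level Co-occurrence Matrix for one pixel offset (dx, dy)."""
    q = np.rint(img.astype(float) / img.max() * (levels - 1)).astype(int)
    M = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[q[y, x], q[y + dy, x + dx]] += 1  # count co-occurring pair
    M += M.T               # make the matrix symmetric
    return M / M.sum()     # normalize to joint probabilities

def glcm_features(P):
    """A few classic Haralick-style statistics of a normalized GLCM."""
    i, j = np.indices(P.shape)
    return {
        "contrast":    float(np.sum(P * (i - j) ** 2)),
        "energy":      float(np.sum(P ** 2)),
        "homogeneity": float(np.sum(P / (1.0 + np.abs(i - j)))),
    }
```

    Feature vectors like these, computed per segmented region, would then feed the neural network or SVM classifier.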

    Advanced machine learning methods for oncological image analysis

    Cancer is a major public health problem, accounting for an estimated 10 million deaths worldwide in 2020 alone. Rapid advances in image acquisition and hardware development over the past three decades have resulted in modern medical imaging modalities that can capture high-resolution anatomical, physiological, functional, and metabolic quantitative information from cancerous organs. Applications of medical imaging have therefore become increasingly crucial in the clinical routines of oncology, providing screening, diagnosis, treatment monitoring, and non/minimally-invasive evaluation of disease prognosis. The essential need for medical images, however, has resulted in the acquisition of a tremendous number of imaging scans. Considering the growing role of medical imaging data on one side and the challenges of manually examining such an abundance of data on the other, the development of computerized tools to automatically or semi-automatically examine image data has attracted considerable interest. Hence, a variety of machine learning tools have been developed for oncological image analysis, aiming to assist clinicians with repetitive tasks in their workflow. This thesis aims to contribute to the field of oncological image analysis by proposing new ways of quantifying tumor characteristics from medical image data. Specifically, it consists of six studies: the first two introduce novel methods for tumor segmentation, and the last four develop quantitative imaging biomarkers for cancer diagnosis and prognosis. The main objective of Study I is to develop a deep learning pipeline capable of capturing the appearance of lung pathologies, including lung tumors, and to integrate this pipeline into segmentation networks to improve segmentation accuracy.
The proposed pipeline was tested on several comprehensive datasets, and the numerical quantifications show the superiority of the proposed prior-aware DL framework compared to the state of the art. Study II addresses a crucial challenge faced by supervised segmentation models: dependency on large-scale labeled datasets. In this study, an unsupervised segmentation approach based on the concept of image inpainting is proposed to segment lung and head-neck tumors in images from single and multiple modalities. The proposed autoinpainting pipeline shows great potential in synthesizing high-quality tumor-free images and outperforms a family of well-established unsupervised models in terms of segmentation accuracy. Studies III and IV aim to automatically discriminate benign from malignant pulmonary nodules by analyzing low-dose computed tomography (LDCT) scans. In Study III, a dual-pathway deep classification framework is proposed to simultaneously take into account local intra-nodule heterogeneities and global contextual information. Study IV compares the discriminative power of a series of carefully selected conventional radiomics methods, end-to-end Deep Learning (DL) models, and deep features-based radiomics analysis on the same dataset. The numerical analyses show the potential of fusing the learned deep features into radiomic features for boosting classification power. Study V focuses on the early assessment of lung tumor response to the applied treatments by proposing a novel feature set that can be interpreted physiologically. This feature set was employed to quantify changes in tumor characteristics from longitudinal PET-CT scans in order to predict the overall survival status of patients two years after the last treatment session.
The discriminative power of the introduced imaging biomarkers was compared against conventional radiomics, and the quantitative evaluations verified the superiority of the proposed feature set. Whereas Study V focuses on a binary survival prediction task, Study VI addresses the prediction of survival rate in patients diagnosed with lung and head-neck cancer by investigating the potential of spherical convolutional neural networks and comparing their performance against other types of features, including radiomics. While comparable results were achieved in intra-dataset analyses, the proposed spherical features show more predictive power in inter-dataset analyses. In summary, the six studies incorporate different imaging modalities and a wide range of image processing and machine learning techniques in the methods developed for the quantitative assessment of tumor characteristics, contributing to the essential procedures of cancer diagnosis and prognosis.
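
    Study II's inpainting-based segmentation idea (reconstruct a "tumor-free" version of the image, then flag where reconstruction and input disagree) can be sketched in a toy form. The median filter below is only a crude stand-in for the learned autoinpainting network, and the threshold and test image are assumptions:

```python
import numpy as np

def crude_inpaint(img, k=3):
    """Toy 'tumor-free' reconstruction: replace each pixel with the median
    of its k x k neighborhood (stands in for a learned inpainting model)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(padded[y:y + k, x:x + k])
    return out

def segment_by_residual(img, thresh=0.5):
    """Anomalies (e.g. tumors) are where image and reconstruction disagree."""
    residual = np.abs(img - crude_inpaint(img))
    return residual > thresh

# Toy image: flat background with a small bright "lesion".
img = np.zeros((16, 16))
img[5:7, 5:7] = 2.0
mask = segment_by_residual(img)
```

    The appeal of this formulation, in the unsupervised setting of Study II, is that it needs no tumor labels: only the ability to reconstruct plausible healthy tissue.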

    Role of Artificial Intelligence in Radiogenomics for Cancers in the Era of Precision Medicine

    Radiogenomics, a combination of “Radiomics” and “Genomics” using Artificial Intelligence (AI), has recently emerged as the state-of-the-art science in precision medicine, especially in oncology care. Radiogenomics integrates large-scale quantifiable data extracted from radiological medical images with personalized genomic phenotypes. It builds prediction models through various AI methods to stratify patient risk, monitor therapeutic approaches, and assess clinical outcomes. It has recently shown tremendous achievements in prognosis, treatment planning, survival prediction, heterogeneity analysis, recurrence, and progression-free survival in human cancer studies. Although AI has shown immense performance in oncology care in various clinical aspects, it has several challenges and limitations. This review provides an overview of radiogenomics, with viewpoints on the role of AI in terms of its promise for computational as well as oncological aspects, and presents achievements and opportunities in the era of precision medicine. The review also offers various recommendations to diminish these obstacles.

    Artificial intelligence in cancer imaging: Clinical challenges and applications

    Judgement, as one of the core tenets of medicine, relies upon the integration of multilayered data with nuanced decision making. Cancer offers a unique context for medical decisions, given not only its variegated forms and evolving disease course but also the need to take into account each patient's individual condition, ability to receive treatment, and response to treatment. Challenges remain in the accurate detection, characterization, and monitoring of cancers despite improved technologies. Radiographic assessment of disease most commonly relies upon visual evaluations, whose interpretations may be augmented by advanced computational analyses. In particular, artificial intelligence (AI) promises to make great strides in the qualitative interpretation of cancer imaging by expert clinicians, including volumetric delineation of tumors over time, extrapolation of the tumor genotype and biological course from its radiographic phenotype, prediction of clinical outcome, and assessment of the impact of disease and treatment on adjacent organs. AI may automate processes in the initial interpretation of images and shift the clinical workflow of radiographic detection, management decisions on whether or not to administer an intervention, and subsequent observation to a yet-to-be-envisioned paradigm. Here, the authors review the current state of AI as applied to medical imaging of cancer and describe advances in 4 tumor types (lung, brain, breast, and prostate) to illustrate how common clinical problems are being addressed. Although most studies evaluating AI applications in oncology to date have not been rigorously validated for reproducibility and generalizability, the results highlight increasingly concerted efforts to push AI technology toward clinical use and to shape future directions in cancer care.

    Deep learning in medical imaging and radiation therapy

    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Detection of Myofascial Trigger Points With Ultrasound Imaging and Machine Learning

    Myofascial Pain Syndrome (MPS) is a common chronic muscle pain disorder that affects a large portion of the global population, seen in 85-93% of patients in specialty pain clinics [10]. MPS is characterized by hard, palpable nodules caused by a stiffened taut band of muscle fibers. These nodules are referred to as Myofascial Trigger Points (MTrPs) and can be classified into two states: active MTrPs (A-MTrPs) and latent MTrPs (L-MTrPs). Treatment for MPS involves massage therapy, acupuncture, and injections or painkillers. Given the subjectivity of patient pain quantification, MPS can often lead to mistreatment or drug misuse. A deterministic way to quantify the pain is needed for better diagnosis and treatment. Various medical imaging technologies have been used to search for quantifiable and measurable biomarkers of MTrPs. Ultrasound imaging, with its accessibility and variety of modalities, has shown significant findings in identifying MTrPs. Elastography ultrasound, which measures stiffness in soft tissues, has shown that MTrPs tend to be stiffer than normal muscle tissue. Doppler ultrasound has shown that blood flow velocities differ significantly in areas surrounding MTrPs. MTrPs have been identified in standard B-mode grayscale ultrasound, but with varying conclusions: some studies identify them as dark hypoechoic blobs, while others show them as bright hyperechoic blobs. Despite these discoveries, there is high variance among results, with no correlations to severity or pain. As a step toward quantifying the pain associated with MTrPs, this work introduces a machine learning approach using image processing with texture recognition to detect MTrPs in B-mode ultrasound. A texture recognition algorithm called the Gray Level Co-occurrence Matrix (GLCM) is used to extract texture features from the B-mode ultrasound image.
Feature maps are generated to emphasize these texture features in an image format, in anticipation that a deep convolutional neural network will be able to correlate the features with the presence of an MTrP. The GLCM feature maps are compared to elastography ultrasound to determine any correlations with muscle stiffness and then evaluated in the presence of MTrPs. Feature map generation is accelerated with a GPU-based implementation toward the goal of real-time processing and inference of the machine learning model. Finally, two deep learning models are implemented to detect MTrPs, comparing the effect of using GLCM feature maps of B-mode ultrasound to emphasize texture features as machine learning model inputs.
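
    A GLCM feature map of the kind described here assigns each pixel a texture statistic of its surrounding window. A minimal CPU sketch for the contrast feature with a horizontal offset of 1 (the window size, quantization, and offset are illustrative choices; the thesis uses a GPU-based implementation):

```python
import numpy as np

def glcm_contrast_map(img, win=5, levels=8):
    """Sliding-window GLCM contrast: each output pixel summarizes the
    texture of the surrounding win x win patch (horizontal offset of 1)."""
    q = np.rint(img.astype(float) / max(img.max(), 1) * (levels - 1)).astype(int)
    pad = win // 2
    padded = np.pad(q, pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + win, x:x + win]
            # Horizontally adjacent gray-level pairs within the patch.
            a, b = patch[:, :-1].ravel(), patch[:, 1:].ravel()
            out[y, x] = np.mean((a - b) ** 2)  # GLCM contrast of the patch
    return out
```

    Maps like this, stacked as channels alongside the raw B-mode image, form image-shaped inputs that a convolutional network can consume directly.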

    Artificial intelligence in rectal cancer
