77 research outputs found

    Advancements and Breakthroughs in Ultrasound Imaging

    Get PDF
    Ultrasonic imaging is a powerful diagnostic tool available to medical practitioners, engineers, and researchers today. Owing to its relative safety and non-invasive nature, ultrasonic imaging has become one of the most rapidly advancing technologies. These rapid advances are directly related to parallel advancements in electronics, computing, and transducer technology, together with sophisticated signal processing techniques. This book focuses on state-of-the-art developments in ultrasonic imaging applications and the underlying technologies, presented by leading practitioners and researchers from many parts of the world.

    Vascular Implications of COVID-19: Role of Radiological Imaging, Artificial Intelligence, and Tissue Characterization: A Special Report

    Get PDF
    The SARS-CoV-2 virus has caused a pandemic, infecting nearly 80 million people worldwide, with mortality exceeding six million. The average survival span is just 14 days from the time the symptoms become aggressive. The present study delineates the deep-driven vascular damage in the pulmonary, renal, coronary, and carotid vessels due to SARS-CoV-2. This special report addresses an important gap in the literature in understanding (i) the pathophysiology of vascular damage and the role of medical imaging in visualizing the damage caused by SARS-CoV-2, and (ii) the severity of COVID-19 using artificial intelligence (AI)-based tissue characterization (TC). PRISMA was used to select 296 studies for AI-based TC. Radiological imaging techniques such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound were selected for imaging of the vasculature infected by COVID-19. Four hypotheses are presented to explain the vascular damage visible in radiological images due to COVID-19. Three kinds of AI models, namely machine learning, deep learning, and transfer learning, are used for TC. Further, the study presents recommendations for improving AI-based architectures for vascular studies. We conclude that the process of vascular damage due to COVID-19 has similarities across vessel types, even though it results in multi-organ dysfunction. Although the mortality rate is ~2% of those infected, the long-term effects of COVID-19 need monitoring to avoid deaths. AI is penetrating the health care industry at speed, and we expect it to play an emerging role in patient care and to reduce mortality and morbidity rates.
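The report groups the AI models used for tissue characterization into machine learning, deep learning, and transfer learning. As a hedged illustration of the transfer-learning setting only (not the report's own implementation), a pretrained classifier can be fine-tuned on radiological image patches; the backbone choice, class count, and learning rate below are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical transfer-learning setup: start from an ImageNet-pretrained
# ResNet-18 and retrain only the final layer for tissue-characterization
# classes (e.g. normal vs. affected vessel wall). Class count and learning
# rate are illustrative placeholders, not values from the report.
NUM_CLASSES = 2

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():          # freeze the pretrained backbone
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of image tensors of shape (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```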

    An automated system for the classification and segmentation of brain tumours in MRI images based on the modified grey level co-occurrence matrix

    Get PDF
    The development of an automated system for the classification and segmentation of brain tumours in MRI scans remains challenging due to the high variability and complexity of brain tumours. Visual examination of MRI scans to diagnose brain tumours is the accepted standard. However, due to the large number of MRI slices produced for each patient, this is becoming a time-consuming and slow process that is also prone to errors. This study explores an automated system for the classification and segmentation of brain tumours in MRI scans based on texture feature extraction. The research investigates an appropriate technique for feature extraction and the development of a three-dimensional segmentation method. This was achieved by investigating and integrating several image processing methods related to texture features and segmentation of MRI brain scans. First, the MRI brain scans were pre-processed by image enhancement, intensity normalization, background segmentation, and correction of the mid-sagittal plane (MSP) of the brain for any possible skewness in the patient's head. Second, texture features were extracted using the modified grey level co-occurrence matrix (MGLCM) from T2-weighted (T2-w) MRI slices and classified into normal and abnormal using a multi-layer perceptron (MLP) neural network. The texture feature extraction method starts from the standpoint that the human brain structure is approximately symmetric about the MSP. The extracted features measure the degree of symmetry between the left and right hemispheres of the brain, which is used to detect abnormalities in the brain. This enables clinicians to quickly dismiss the MRI brain scans of patients with normal brains and focus on those with pathological brain features. Finally, the bounding 3D-boxes based genetic algorithm (BBBGA) was used to identify the location of the brain tumour and segment it automatically using the three-dimensional active contour without edge (3DACWE) method. The research was validated on two datasets: a real dataset collected from the MRI Unit in Al-Kadhimiya Teaching Hospital in Iraq in 2014, and the standard benchmark multimodal brain tumour segmentation (BRATS 2013) dataset. The experimental results on both datasets demonstrated the efficacy of the proposed system in the successful classification and segmentation of brain tumours in MRI scans. The achieved classification accuracies were 97.8% for the collected dataset and 98.6% for the standard dataset, while the segmentation Dice scores were 89% for the collected dataset and 89.3% for the standard dataset.
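The classification stage rests on the observation that texture features computed from the two brain hemispheres should agree in normal scans and diverge when a tumour distorts one side. A minimal sketch of that symmetry idea, using the standard grey level co-occurrence matrix from scikit-image rather than the thesis's modified MGLCM, is given below; the slice orientation, quantisation level, and feature set are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def hemisphere_symmetry_features(slice_img, levels=32):
    """Compare GLCM texture descriptors of the left and right brain halves.

    slice_img: 2D uint8 axial T2-w slice, already aligned so the mid-sagittal
    plane falls on the vertical image centre (an assumption).
    Returns absolute left/right differences; large values suggest asymmetry.
    """
    # Quantise intensities so the co-occurrence matrix stays small.
    img = (slice_img.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    mid = img.shape[1] // 2
    left, right = img[:, :mid], img[:, mid:]
    right = np.fliplr(right)  # mirror so structures roughly correspond

    feats = {}
    for name, half in (("left", left), ("right", right)):
        glcm = graycomatrix(half, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        for prop in ("contrast", "homogeneity", "energy", "correlation"):
            feats[(name, prop)] = graycoprops(glcm, prop).mean()

    return {prop: abs(feats[("left", prop)] - feats[("right", prop)])
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```

In the thesis these symmetry descriptors feed an MLP classifier; here they are returned as a plain dictionary so the sketch stays self-contained.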

    Deep Learning in Medical Image Analysis

    Get PDF
    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision-making in clinical environments. Applications of modern medical instruments and the digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Analysis of contrast-enhanced medical images.

    Get PDF
    Early detection of human organ diseases is of great importance for accurate diagnosis and the institution of appropriate therapies. This can potentially prevent progression to end-stage disease by detecting precursors that indicate organ functionality. In addition, it assists clinicians in therapy evaluation, tracking disease progression, and surgical operations. Advances in functional and contrast-enhanced (CE) medical imaging have enabled accurate noninvasive evaluation of organ functionality due to their ability to provide superior anatomical and functional information about the tissue of interest. The main objective of this dissertation is to develop a computer-aided diagnostic (CAD) system for analyzing complex data from CE magnetic resonance imaging (MRI). The developed CAD system has been tested in three case studies: (i) early detection of acute renal transplant rejection, (ii) evaluation of myocardial perfusion in patients with ischemic heart disease after heart attack, and (iii) early detection of prostate cancer. However, developing a noninvasive CAD system for the analysis of CE medical images is subject to multiple challenges, including, but not limited to, image noise and inhomogeneity, nonlinear signal intensity changes of the images over the time course of data acquisition, appearance and shape changes (deformations) of the organ of interest during data acquisition, and determination of the best features (indexes) that describe the perfusion of a contrast agent (CA) into the tissue. To address these challenges, this dissertation focuses on building new mathematical models and learning techniques that facilitate accurate analysis of CA perfusion in living organs, including: (i) accurate mathematical models for the segmentation of the object of interest, which integrate object shape and appearance features in terms of pixel/voxel-wise image intensities and their spatial interactions; (ii) motion correction techniques that combine both global and local models and exploit geometric features, rather than image intensities, to avoid problems associated with nonlinear intensity variations of the CE images; and (iii) fusion of multiple features using a genetic algorithm. The proposed techniques have been integrated into CAD systems that have been tested in, but not limited to, three clinical studies. First, a noninvasive CAD system is proposed for the early and accurate diagnosis of acute renal transplant rejection using dynamic contrast-enhanced MRI (DCE-MRI). Acute rejection, the immunological response of the human immune system to a foreign kidney, is the most severe cause of renal dysfunction among other diagnostic possibilities, including acute tubular necrosis and immune drug toxicity. In the U.S., approximately 17,736 renal transplants are performed annually, and given the limited number of donors, transplanted kidney salvage is an important medical concern. Thus far, biopsy remains the gold standard for the assessment of renal transplant dysfunction, but only as a last resort because of its invasive nature, high cost, and potential morbidity. The diagnostic accuracy of the proposed CAD system, based on the analysis of 50 independent in-vivo cases, was 96% with a 95% confidence interval. These results clearly demonstrate the promise of the proposed image-based diagnostic CAD system as a supplement to current technologies, such as nuclear imaging and ultrasonography, for determining the type of kidney dysfunction.
Second, a comprehensive CAD system is developed for the characterization of myocardial perfusion and clinical status in heart failure and novel myoregeneration therapy using cardiac first-pass MRI (FP-MRI). Heart failure is considered the most important cause of morbidity and mortality in cardiovascular disease, affecting approximately 6 million U.S. patients annually. Ischemic heart disease is considered the most common underlying cause of heart failure. Therefore, detection of heart failure in its earliest forms is essential to prevent its relentless progression to premature death. While current medical studies focus on detecting pathological tissue and assessing the contractile function of the diseased heart, this dissertation addresses the key issue of the effects of myoregeneration therapy on the associated blood nutrient supply. Quantitative and qualitative assessment in a cohort of 24 perfusion data sets demonstrated the ability of the proposed framework to reveal regional perfusion improvements with therapy and transmural perfusion differences across the myocardial wall; thus, it can aid in following up on treatment for patients undergoing myoregeneration therapy. Finally, an image-based CAD system for early detection of prostate cancer using DCE-MRI is introduced. Prostate cancer is the most frequently diagnosed malignancy among men and remains the second leading cause of cancer-related death in the USA, with more than 238,000 new cases and about 30,000 deaths in 2013. Therefore, early diagnosis of prostate cancer can improve the effectiveness of treatment and increase the patient's chance of survival. Currently, needle biopsy is the gold standard for the diagnosis of prostate cancer. However, it is an invasive procedure with high cost and potential morbidity, and it has a higher possibility of producing false positive diagnoses due to the relatively small needle biopsy samples. Application of the proposed CAD system yielded promising results in a cohort of 30 patients and would, in the near future, represent a supplement to current technologies for determining prostate cancer type. The developed techniques have been compared to state-of-the-art methods and demonstrated higher accuracy, as shown in this dissertation. The proposed models (higher-order spatial interaction models, shape models, motion correction models, and perfusion analysis models) can be used in many of today's CAD applications for early detection of a variety of diseases and medical conditions, and are expected to notably amplify the accuracy of CAD decisions based on the automated analysis of CE images.
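Several of the perfusion indexes such a CAD system relies on can be read directly from the time-intensity curve of a voxel or region in a DCE-MRI series. The sketch below shows a simplified version of three common indexes (peak enhancement, time to peak, and wash-in slope); the baseline length and variable names are illustrative assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def perfusion_indexes(signal, times, n_baseline=5):
    """Simple contrast-agent kinetic indexes from a time-intensity curve.

    signal: 1D array of mean intensities over the acquisition time points.
    times:  1D array of acquisition times in seconds, same length as signal.
    n_baseline: number of pre-contrast frames used as the baseline
    (an assumed acquisition detail).
    """
    signal = np.asarray(signal, dtype=float)
    times = np.asarray(times, dtype=float)

    baseline = signal[:n_baseline].mean()
    enhancement = signal - baseline          # relative enhancement curve

    peak_idx = int(np.argmax(enhancement))
    peak_enh = enhancement[peak_idx]                      # peak enhancement
    time_to_peak = times[peak_idx] - times[n_baseline - 1]

    # Wash-in slope: steepest rise between the end of the baseline and the peak.
    rise = np.diff(enhancement[n_baseline - 1:peak_idx + 1])
    dt = np.diff(times[n_baseline - 1:peak_idx + 1])
    wash_in = float((rise / dt).max()) if rise.size else 0.0

    return {"peak_enhancement": float(peak_enh),
            "time_to_peak": float(time_to_peak),
            "wash_in_slope": wash_in}
```

Indexes like these, computed per segmented region after motion correction, are the kind of features the dissertation fuses (e.g. with a genetic algorithm) before classification.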

    Soft computing applied to optimization, computer vision and medicine

    Get PDF
    Artificial intelligence has permeated almost every area of life in modern society, and its significance continues to grow. As a result, in recent years, Soft Computing has emerged as a powerful set of methodologies that propose innovative and robust solutions to a variety of complex problems. Soft Computing methods, because of their broad range of application, have the potential to significantly improve human living conditions. The motivation for the present research emerged from this background and possibility. This research aims to accomplish two main objectives: on the one hand, it endeavors to bridge the gap between Soft Computing techniques and their application to intricate problems; on the other hand, it explores the hypothetical benefits of Soft Computing methodologies as novel effective tools for such problems. This thesis synthesizes the results of extensive research on Soft Computing methods and their applications to optimization, Computer Vision, and medicine. This work is composed of several individual projects, which employ classical and new optimization algorithms. The manuscript presented here intends to provide an overview of the different aspects of Soft Computing methods in order to enable the reader to reach a global understanding of the field. Therefore, this document is assembled as a monograph that summarizes the outcomes of these projects across 12 chapters. The chapters are structured so that they can be read independently. The key focus of this work is the application and design of Soft Computing approaches for solving problems in the following areas: Block Matching, Pattern Detection, Thresholding, Corner Detection, Template Matching, Circle Detection, Color Segmentation, Leukocyte Detection, and Breast Thermogram Analysis. One of the outcomes presented in this thesis involves the development of two evolutionary approaches for global optimization. These were tested on complex benchmark datasets and showed promising results, thus opening the debate for future applications. Moreover, the applications to Computer Vision and medicine presented in this work have highlighted the utility of different Soft Computing methodologies in the solution of problems in such subjects. A milestone in this area is the translation of Computer Vision and medical issues into optimization problems. Additionally, this work also strives to provide tools for combating public health issues by extending the concepts to automated detection and diagnosis aids for pathologies such as Leukemia and breast cancer. The application of Soft Computing techniques in this field has attracted great interest worldwide due to the growing incidence of these diseases. Lastly, the use of Fuzzy Logic, Artificial Neural Networks, and Expert Systems in many everyday domestic appliances, such as washing machines, cookers, and refrigerators, is now a reality. Many other industrial and commercial applications of Soft Computing have also been integrated into everyday use, and this is expected to increase within the next decade. Therefore, the research conducted here contributes an important piece toward expanding these developments. The applications presented in this work are intended to serve as technological tools that can then be used in the development of new devices.
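For readers unfamiliar with the evolutionary optimizers this line of work builds on, the following is a compact, generic real-coded genetic-style search over a standard benchmark function. It is an illustration only, not one of the two evolutionary approaches proposed in the thesis; the population size, mutation scale, and sphere benchmark are arbitrary choices.

```python
import numpy as np

def sphere(x):
    """Classic benchmark: global minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def evolve(objective, dim=10, pop_size=40, generations=200,
           bounds=(-5.0, 5.0), mutation_scale=0.1, seed=0):
    """Minimal (mu + lambda)-style evolutionary search, for illustration only."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(ind) for ind in pop])

    for _ in range(generations):
        # Binary tournament selection of parents (lower fitness wins).
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]],
                           idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Uniform crossover between shuffled parent pairs.
        mates = parents[rng.permutation(pop_size)]
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, mates)
        # Gaussian mutation, clipped to the search bounds.
        children += rng.normal(0.0, mutation_scale, children.shape)
        children = np.clip(children, lo, hi)

        child_fit = np.array([objective(c) for c in children])
        # (mu + lambda) survivor selection: keep the best pop_size individuals.
        merged = np.vstack([pop, children])
        merged_fit = np.concatenate([fitness, child_fit])
        best = np.argsort(merged_fit)[:pop_size]
        pop, fitness = merged[best], merged_fit[best]

    return pop[np.argmin(fitness)], float(fitness.min())

# Example usage: best_solution, best_value = evolve(sphere)
```

The Computer Vision and medical applications mentioned above follow the same pattern: the task (thresholding, circle detection, template matching, etc.) is recast as an objective function, and an optimizer of this kind searches its parameter space.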

    Evolutionary Deep Convolutional Neural Networks for Medical Image Analysis

    Full text link
    Medical image segmentation is a procedure for analysing an image's content to find an organ, cancer, tumour, or possible abnormalities. Since hospitals and medical centres worldwide generate billions of images daily, manual analysis of the images is impractical. Therefore, there is a need to improve automatic techniques for examining the content of images. Deep Convolutional Neural Networks (DCNNs) are one of the most reliable and successful approaches to analysing image content. However, the main problem is a lack of rules for designing a network, and trial and error is the usual approach to finding a network structure along with its training parameters. Given the diversity of medical images, the various types of noise and artefacts they contain, the limited number of available labelled medical images, and limited computational facilities, designing a CNN for medical image analysis is even more complicated. Because of the importance of medical image segmentation, various CNNs have been designed manually during the last decade; however, most of these networks work well only for the segmentation of a specific dataset or application. One solution to this problem is to develop networks automatically. Neuroevolution, the combination of an evolutionary algorithm and Neural Networks (NNs), can automatically design a network. Evolutionary algorithms are relatively easy to understand and implement; however, they need considerable computation to evolve a network. Since Neuroevolution is computationally demanding, there is very limited previous work applying it to medical image segmentation. Existing works set up only a subset of the parameters needed to develop a network and have been applied to a limited number of datasets. The most significant drawbacks of existing works are a lack of robustness and generalizability; also, most of them are computationally expensive. In this thesis, several Neuroevolution-based frameworks are developed for 2D and 3D medical image segmentation. Firstly, a new block-based encoding model is developed to generate variable-length 2D DCNNs. The proposed encoding model can find appropriate values for several hyperparameters to create and train a DCNN. A Genetic Algorithm (GA) is employed to evolve the generated networks. In addition, a comprehensive analysis is performed to find an appropriate population size and number of generations, and consequently an improved model is developed. To improve the quality of the results, an ensemble of the networks found is used for the final segmentation. Then, to find a 3D evolutionary network, two approaches are examined. Following the proposed 2D model, a 3D model is developed to generate a population of 3D networks and evolve them to find an appropriate 3D network for 3D medical image segmentation. Since evolving 3D networks is computationally expensive, a second approach is also introduced: the possibility of using a 2D evolutionary model to create a 3D network is examined, named the Converted 3D network. Because of the diversity of medical images and the complexity of medical image analysis, a more complicated CNN is sometimes needed. To address this issue, another evolutionary model is developed in this thesis to generate more accurate and complex DCNNs using a combination of Dense and Residual blocks.
In the proposed DenseRes model, a new encoding model is introduced that is able to create a variable-length network with variable filter sizes within a block. In the DenseRes model, all parameters required to generate and train a network are included in the search. Most of the time, the Region Of Interest (ROI) is a small part of a medical image with colour and texture very similar to the surrounding organs. Therefore, more precise network architectures, such as attention networks, are needed to process the images. To this end, two different approaches are introduced in this thesis to develop evolutionary attention networks. First, a 2D evolutionary attention model is proposed that is able to find an appropriate attention gate to transfer a block's input to its output. Since some useful information is lost during downsampling in DCNNs, another 2D and 3D evolutionary attention framework is introduced to address this issue. In this model, besides creating a network structure along with its training parameters, an evolutionary algorithm is employed to find an appropriate model to recover feature maps from the downsampling part and transfer them to the upsampling part of a network. The effectiveness of the proposed models is examined using various publicly available datasets, and results are compared with multiple manually and automatically designed models. The significant findings of this thesis can be summarised as follows: (1) the proposed models obtain much better segmentation accuracy than state-of-the-art models while being computationally cheap, even for developing 3D evolutionary networks; (2) converting a 2D evolutionary model to a 3D model is a reliable, fast, and accurate approach to creating 3D networks; (3) including more constructive parameters in the search space can lead to more precise networks; (4) the initial population plays a significant role in the final results and in decreasing training time; moreover, using variable filter sizes within a block can obtain better results than using a fixed one; (5) recovering a downsampling stage's feature maps and transferring them to the corresponding upsampling part can considerably improve segmentation accuracy; (6) the proposed models are robust and general, such that they can be applied to the segmentation of various medical images (CT and MRI) for different organs and for tumour segmentation; (7) all the proposed encoding models are compatible with conventional crossover and mutation techniques, without any extra effort to create a new crossover technique or to use a method to check the correctness of layer sequences.
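As a rough sketch of the block-based encoding idea described above (not the thesis's actual encoding, whose hyperparameter set and constraints are richer), a variable-length list of block descriptors can be mutated and recombined by a GA before each candidate is decoded into a network and trained; all field ranges below are invented placeholders.

```python
import random
from dataclasses import dataclass

# Illustrative block-based genome for evolving a segmentation CNN.
FILTERS = (16, 32, 64, 128)
KERNELS = (3, 5, 7)
BLOCK_TYPES = ("conv", "residual", "dense")

@dataclass
class Block:
    kind: str
    filters: int
    kernel: int

def random_genome(min_blocks=3, max_blocks=8):
    """A variable-length genome: one Block descriptor per network stage."""
    n = random.randint(min_blocks, max_blocks)
    return [Block(random.choice(BLOCK_TYPES),
                  random.choice(FILTERS),
                  random.choice(KERNELS)) for _ in range(n)]

def mutate(genome, rate=0.2):
    """Point mutation: re-sample fields of some blocks; occasionally grow or shrink."""
    genome = [Block(b.kind, b.filters, b.kernel) for b in genome]
    for b in genome:
        if random.random() < rate:
            b.kind = random.choice(BLOCK_TYPES)
        if random.random() < rate:
            b.filters = random.choice(FILTERS)
        if random.random() < rate:
            b.kernel = random.choice(KERNELS)
    if random.random() < rate and len(genome) > 3:
        genome.pop(random.randrange(len(genome)))
    elif random.random() < rate:
        genome.append(Block(random.choice(BLOCK_TYPES),
                            random.choice(FILTERS),
                            random.choice(KERNELS)))
    return genome

def crossover(a, b):
    """One-point crossover that tolerates parents of different lengths."""
    ca, cb = random.randint(1, len(a) - 1), random.randint(1, len(b) - 1)
    return a[:ca] + b[cb:]

# A full neuroevolution loop would decode each genome into a (2D or 3D)
# segmentation network, train it briefly, use the validation Dice score as
# fitness, and apply selection, crossover, and mutation over generations.
```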