    Medical Diagnosis with Multimodal Image Fusion Techniques

    Image fusion is an effective approach for extracting all the significant information from source images, which supports experts in evaluation and quick decision making. Multimodal medical image fusion produces a composite fused image from various sources to improve quality and extract complementary information. It is extremely challenging to gather every piece of information needed using just one imaging method; therefore, images obtained from different modalities are fused, and additional clinical information can be gleaned through the fusion of several types of medical image pairings. This study's main aim is to present a thorough review of medical image fusion techniques, covering the steps in the fusion process, the levels of fusion, the various imaging modalities with their pros and cons, and the major scientific difficulties encountered in the area of medical image fusion. The paper also summarizes the fusion quality assessment metrics. The approaches used by the image fusion algorithms presently available in the literature are classified into four broad categories: i) spatial fusion methods, ii) multiscale decomposition based methods, iii) neural network based methods, and iv) fuzzy logic based methods. The benefits and pitfalls of the existing literature are explored and future insights are suggested. Moreover, this study is anticipated to create a solid platform for the development of better fusion techniques in medical applications.
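
    As an illustration of the multiscale decomposition category in this taxonomy, the sketch below fuses two co-registered, same-size grayscale images with a wavelet-domain choose-max rule. The function name, wavelet, decomposition level, and fusion rules are illustrative assumptions, not a specific method from the review.

```python
import numpy as np
import pywt

def wavelet_fusion(a, b, wavelet='db2', level=3):
    """Fuse two co-registered grayscale images in the wavelet domain:
    average the coarse approximation bands and keep the larger-magnitude
    detail coefficients (a common choose-max rule)."""
    ca = pywt.wavedec2(a.astype(float), wavelet, level=level)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]          # approximation band: average rule
    for da, db in zip(ca[1:], cb[1:]):       # detail bands: choose-max rule
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```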

    Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI

    Background: Prostate cancer is one of the most common forms of cancer found in males, making early diagnosis important. Magnetic resonance imaging (MRI) has been useful in visualizing and localizing tumor candidates, and with the use of endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The coils introduce intensity inhomogeneities, and the surface coil intensity correction built into MRI scanners is used to reduce them. However, this correction, typically performed at the MRI scanner level, leads to noise amplification and noise level variations. Methods: In this study, we introduce a new Monte Carlo-based noise compensation approach for coil intensity corrected endorectal MRI which allows for effective noise compensation and preservation of details within the prostate. The approach accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is particularly useful for improving the image quality of coil intensity corrected endorectal MRI when the correction is performed at the MRI scanner level and the original raw data is not available. Results: SNR and contrast-to-noise ratio (CNR) analysis in patient experiments demonstrates an average improvement of 11.7 dB and 11.2 dB, respectively, over uncorrected endorectal MRI, and the method provides strong performance when compared to existing approaches. Conclusions: A new noise compensation method was developed for improving the quality of coil intensity corrected endorectal MRI data produced at the MRI scanner level. We illustrate that promising noise compensation performance can be achieved with the proposed approach, which is particularly important when the original raw data is not available.
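
    The paper's Monte Carlo estimation itself is not reproduced here; the following is a minimal sketch of the general idea of spatially-adaptive noise compensation, assuming a hypothetical per-pixel noise-variance map (here one that grows with distance from the coil, loosely mirroring how intensity correction amplifies noise away from an ERC). The Lee-style filter is a stand-in technique, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def spatially_adaptive_filter(img, noise_var_map, win=7):
    """Lee-style adaptive filter: shrink each pixel toward its local mean
    in proportion to the locally estimated noise-to-signal variance ratio."""
    img = img.astype(float)
    mean = uniform_filter(img, size=win)
    sq_mean = uniform_filter(img ** 2, size=win)
    local_var = np.maximum(sq_mean - mean ** 2, 1e-12)
    gain = np.clip(1.0 - noise_var_map / local_var, 0.0, 1.0)
    return mean + gain * (img - mean)

# Hypothetical non-stationary noise map: variance grows with distance
# from a (bottom-center) endorectal coil after intensity correction.
h, w = 256, 256
yy, xx = np.mgrid[0:h, 0:w]
dist = np.hypot(yy - (h - 1), xx - w / 2)
noise_var_map = 0.01 * (1.0 + dist / dist.max()) ** 2
```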

    Decision-based data fusion of complementary features for the early diagnosis of Alzheimer's disease

    As the average life expectancy increases, particularly in developing countries, the prevalence of Alzheimer's disease (AD), the most common form of dementia worldwide, has increased dramatically. As there is no cure to stop or reverse the effects of AD, early diagnosis and detection are of utmost concern. Recent pharmacological advances have shown the ability to slow the progression of AD; however, the efficacy of these treatments depends on the ability to detect the disease at the earliest stage possible. Many patients are limited to small community clinics by geographic and/or financial constraints, so making diagnosis possible at these clinics through an accurate, inexpensive, and noninvasive tool is of great interest. Many tools have been shown to be effective for the early diagnosis of AD. Three in particular are focused upon in this study: event-related potentials (ERPs) in electroencephalogram (EEG) recordings, magnetic resonance imaging (MRI), and positron emission tomography (PET). These biomarkers have been shown to contain diagnostically useful information regarding the development of AD in an individual, and their combination, if they provide complementary information, can boost the overall diagnostic accuracy of an automated system. EEG data acquired from an auditory oddball paradigm, along with volumetric T2-weighted MRI data and PET imagery representative of metabolic glucose activity in the brain, were collected from a cohort of 447 patients, along with other biomarkers and metrics relating to neurodegenerative disease. This study focuses on AD-versus-control diagnostic ability within the cohort, in addition to AD severity analysis. An assortment of feature extraction methods was employed to extract diagnostically relevant information from the raw data. EEG signals were decomposed into frequency bands of interest through the discrete wavelet transform (DWT). MRI images were processed to provide volumetric representations of specific regions of interest in the cranium. The PET imagery was segmented into regions of interest representing glucose metabolic rates within the brain. Multi-layer perceptron neural networks were used as the base classifiers for the augmented stacked generalization algorithm, creating three overall biomarker experts for AD diagnosis. The features extracted from each biomarker were used to train classifiers on various subsets of the cohort data; the decisions from these classifiers were then combined to achieve decision-based data fusion. This study found that EEG, MRI, and PET data each hold complementary information for the diagnosis of AD. The use of all three in tandem provides greater diagnostic accuracy than any single biomarker alone. The highest accuracy obtained through the EEG expert was 86.1 ± 3.2%, with MRI and PET reaching 91.1 ± 3.2% and 91.2 ± 3.9%, respectively. The maximum diagnostic accuracy of these systems averaged 95.0 ± 3.1% when all three biomarkers were combined through the decision fusion algorithm described in this study. The severity analysis for AD showed similar results, with combined performance exceeding that of any biomarker expert alone.
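
    A minimal sketch of the decision-level fusion idea (not the augmented stacked generalization algorithm from the study): one MLP expert per biomarker, with the experts' posterior probabilities averaged into a fused decision. All array shapes and names below are hypothetical placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical per-modality feature matrices, rows aligned across the
# same subjects; y: 0 = control, 1 = AD.
rng = np.random.default_rng(0)
n = 200
X_eeg, X_mri, X_pet = (rng.normal(size=(n, d)) for d in (32, 16, 12))
y = rng.integers(0, 2, size=n)

# One MLP "expert" per biomarker.
experts = {name: MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                               random_state=0).fit(X, y)
           for name, X in [('eeg', X_eeg), ('mri', X_mri), ('pet', X_pet)]}

# Decision-based fusion: average the experts' posterior probabilities
# and take the class with the highest fused probability.
fused_proba = np.mean([experts['eeg'].predict_proba(X_eeg),
                       experts['mri'].predict_proba(X_mri),
                       experts['pet'].predict_proba(X_pet)], axis=0)
fused_pred = fused_proba.argmax(axis=1)
```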

    Ensemble of classifiers based data fusion of EEG and MRI for diagnosis of neurodegenerative disorders

    The prevalence of Alzheimer's disease (AD), Parkinson's disease (PD), and mild cognitive impairment (MCI) is rising at an alarming rate as the average age of the population increases, especially in developing nations. The efficacy of new medical treatments critically depends on the ability to diagnose these diseases at the earliest stages. To facilitate the availability of early diagnosis in community hospitals, an accurate, inexpensive, and noninvasive diagnostic tool must be made available. As biomarkers, the event-related potentials (ERP) of the electroencephalogram (EEG), which have previously shown promise in automated diagnosis, together with volumetric magnetic resonance imaging (MRI), are relatively low-cost and readily available tools for automated diagnosis. 16-electrode EEG data were collected from 175 subjects afflicted with Alzheimer's disease, Parkinson's disease, or mild cognitive impairment, as well as non-disease (normal control) subjects. T2-weighted MRI volumetric data were also collected from 161 of these subjects. Feature extraction methods were used to separate diagnostic information from the raw data. The EEG signals were decomposed using the discrete wavelet transform in order to isolate informative frequency bands. The MR images were processed through segmentation software to provide volumetric data of various brain regions in order to quantify potential brain tissue atrophy. Both of these data sources were utilized in a pattern recognition based classification algorithm to serve as a diagnostic tool for Alzheimer's and Parkinson's disease. Support vector machine and multilayer perceptron classifiers were used to create a classification algorithm trained with the EEG and MRI data. Extracted features were used to train individual classifiers, each learning a particular subset of the training data, whose decisions were combined using decision level fusion. Additionally, a severity analysis was performed to distinguish between various stages of AD as well as the cognitively normal state. The study found that EEG and MRI data hold complementary information for the diagnosis of AD as well as PD. The use of both data types with decision level fusion improves diagnostic accuracy over that of each individual data source. For AD-only diagnosis, ERP data alone provided 78% diagnostic performance, MRI alone 89%, and ERP and MRI combined 94%. For PD-only diagnosis, ERP-only performance was 67%, MRI-only 70%, and combined performance 78%. MCI-only diagnosis exhibited a similar effect, with 71% ERP performance, 82% MRI performance, and 85% combined performance. Diagnosis among three subject groups showed the same trend: for PD, AD, and normal diagnosis, ERP-only performance was 43%, MRI-only 66%, and combined performance 71%. The severity analysis for mild AD, severe AD, and normal subjects showed the same combined effect.
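
    To make the EEG feature-extraction step concrete, here is a small sketch of isolating frequency bands with the discrete wavelet transform. The sampling rate, wavelet, level, and band-to-subband mapping are assumptions for illustration; the study's actual settings are not given in the abstract.

```python
import numpy as np
import pywt

fs = 256                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # stand-in channel

# A 5-level 'db4' decomposition at fs = 256 Hz maps roughly to:
# D1 ~ 64-128 Hz, D2 ~ 32-64 Hz (gamma), D3 ~ 16-32 Hz (beta),
# D4 ~ 8-16 Hz (alpha), D5 ~ 4-8 Hz (theta), A5 ~ 0-4 Hz (delta).
coeffs = pywt.wavedec(eeg, 'db4', level=5)             # [A5, D5, D4, D3, D2, D1]
band_energy = [float(np.sum(c ** 2)) for c in coeffs]  # simple per-band features
```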

    Novel Computer-Aided Diagnosis Schemes for Radiological Image Analysis

    The computer-aided diagnosis (CAD) scheme is a powerful tool for assisting clinicians (e.g., radiologists) in interpreting medical images more accurately and efficiently. In developing high-performing CAD schemes, classic machine learning (ML) and deep learning (DL) algorithms play an essential role because of their advantages in capturing meaningful patterns in complex datasets that are important for disease (e.g., cancer) diagnosis and prognosis. This dissertation, organized into four studies, investigates the feasibility of developing several novel ML-based and DL-based CAD schemes for different cancer research purposes. The first study aims to develop and test a unique radiomics-based CT image marker to detect lymph node (LN) metastasis in cervical cancer patients. A total of 1,763 radiomics features were first computed from the segmented primary cervical tumor depicted on the CT image with the maximal tumor region. Next, a principal component analysis algorithm was applied to the initial feature pool to determine an optimal feature cluster. Then, based on this optimal cluster, machine learning models (e.g., a support vector machine (SVM)) were trained and optimized to generate an image marker to detect LN metastasis. The SVM-based image marker achieved an AUC (area under the ROC curve) value of 0.841 ± 0.035. This study provides initial verification of the feasibility of combining CT images and radiomics technology to develop a low-cost image marker for LN metastasis detection among cervical cancer patients. The purpose of the second study is to develop and evaluate a unique global mammographic image feature analysis scheme to identify case malignancy for breast cancer. From the entire breast area depicted on the mammograms, 59 features were initially computed to characterize the breast tissue properties in both the spatial and frequency domains. Given that each case consists of two cranio-caudal and two medio-lateral oblique view images of the left and right breasts, two feature pools were built, containing the computed features from either the two positive images of one breast or all four images of both breasts. For each feature pool, a particle swarm optimization (PSO) method was applied to determine the optimal feature cluster, followed by training an SVM classifier to generate a final score for predicting the likelihood of the case being malignant. The classification performances measured by AUC were 0.79 ± 0.07 and 0.75 ± 0.08 when applying the SVM classifiers trained using image features computed from the two-view and four-view images, respectively. This study demonstrates the potential of a global mammographic image feature analysis-based scheme to predict case malignancy without requiring arduous segmentation of breast lesions. In the third study, given that the performance of DL-based models in the medical imaging field is generally bottlenecked by a lack of sufficient labeled images, we specifically investigate the effectiveness of applying the latest transferring generative adversarial network (GAN) technology to augment limited data for a performance boost in the task of breast mass classification. This transferring GAN model was first pre-trained on a dataset of 25,000 unlabeled mammogram patches. Then its generator and discriminator were fine-tuned on a much smaller dataset containing 1,024 labeled breast mass images. A supervised loss was integrated with the discriminator so that it could directly classify benign and malignant masses. Our proposed approach improved the classification accuracy by 6.002% when compared with classifiers trained without traditional data augmentation. This investigation may provide a new perspective for researchers seeking to effectively train GAN models on a medical imaging task with only limited datasets. Like the third study, our last study also aims to alleviate DL models' reliance on large amounts of annotations, but with a totally different approach. We propose employing a semi-supervised method, i.e., virtual adversarial training (VAT), to learn and leverage useful information underlying the unlabeled data for better classification of breast masses. Accordingly, our VAT-based models have two types of losses, namely supervised and virtual adversarial losses. The former acts as in supervised classification, while the latter works towards enhancing the model's robustness against virtual adversarial perturbation, thus improving model generalizability. A large CNN and a small CNN were used in this investigation, and both were trained with and without the adversarial loss. When the labeled ratios were 40% and 80%, the VAT-based CNNs delivered the highest classification accuracies of 0.740 ± 0.015 and 0.760 ± 0.015, respectively. The experimental results suggest that the VAT-based CAD scheme can effectively utilize meaningful knowledge from unlabeled data to better classify mammographic breast mass images. In summary, several innovative approaches have been investigated and evaluated in this dissertation to develop ML-based and DL-based CAD schemes for the diagnosis of cervical cancer and breast cancer. The promising results demonstrate the potential of these CAD schemes in assisting radiologists to achieve more accurate interpretation of radiological images.
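
    The first study's pipeline (radiomics features, then PCA, then an SVM scored by AUC) can be sketched with scikit-learn as below. The data here are random placeholders for the real radiomics matrix, and the pipeline settings (variance threshold, RBF kernel, 5-fold CV) are assumptions, not the dissertation's exact configuration.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical radiomics matrix: one row per patient, one column per
# feature computed from the segmented tumor; y = LN metastasis (0/1).
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1763))
y = rng.integers(0, 2, size=120)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),        # keep components explaining 95% of variance
    SVC(kernel='rbf'),
)
auc = cross_val_score(model, X, y, cv=5, scoring='roc_auc')
print(f"mean AUC: {auc.mean():.3f}")
```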

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has been significantly augmented in recent years due to accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained by detecting edges inherent in the data. To this effect, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included by the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results, compared against published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
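
    A rough sketch of the first stage only (the edge-driven initial partition), under simplifying assumptions: a plain finite-difference vector gradient stands in for the paper's detector, and a global threshold decides which pixels count as edge-free before connected-component labeling.

```python
import numpy as np
from scipy import ndimage

def initial_region_map(img, grad_frac=0.1):
    """Label connected components of low-gradient ('edge-free') pixels.
    For multiband inputs, per-band gradients are combined into a single
    vector-gradient magnitude."""
    bands = img if img.ndim == 3 else img[..., None]
    gmag2 = np.zeros(bands.shape[:2])
    for b in range(bands.shape[-1]):
        gy, gx = np.gradient(bands[..., b].astype(float))
        gmag2 += gx ** 2 + gy ** 2
    gmag = np.sqrt(gmag2)
    flat = gmag < grad_frac * gmag.max()     # pixels without strong edges
    labels, n_regions = ndimage.label(flat)  # initial partition map
    return labels, n_regions
```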

    Registration and Fusion of the Autofluorescent and Infrared Retinal Images

    This article deals with the registration and fusion of multimodal ophthalmologic images obtained by means of a laser scanning device (Heidelberg retina angiograph). The registration framework has been designed and tested for the combination of autofluorescent and infrared images. This process is a necessary step for subsequent pixel-level fusion and analysis utilizing information from both modalities. Two fusion methods are presented and compared.

    Biomedical Sensing and Imaging

    This book mainly deals with recent advances in biomedical sensing and imaging. More recently, wearable/smart biosensors and devices, which facilitate diagnostics in non-clinical settings, have become a hot topic. Combined with machine learning and artificial intelligence, they could revolutionize the biomedical diagnostic field. The aim of this book is to provide a research forum in biomedical sensing and imaging and to extend the scientific frontier of this important biomedical endeavor.

    Pixel-level Image Fusion Algorithms for Multi-camera Imaging System

    This thesis work is motivated by the potential and promise of image fusion technologies in multi-sensor image fusion systems and applications. With specific focus on pixel-level image fusion, the processing stage that follows image registration, we developed a graphical user interface for multi-sensor image fusion software using Microsoft Visual Studio and the Microsoft Foundation Class library. In this thesis, we propose and present image fusion algorithms with low computational cost based upon spatial mixture analysis. The segment weighted average image fusion combines several low spatial resolution data sources from different sensors to create a high resolution, large fused image. This research includes developing a segment-based step built upon a stepwise divide-and-combine process. In the second stage of the process, linear interpolation optimization is used to sharpen the image resolution. Implementation of these image fusion algorithms is completed on top of the graphical user interface we developed. Multiple-sensor image fusion is easily accommodated by the algorithm, and the results are demonstrated at multiple scales. Using quantitative estimation such as mutual information, we obtain quantifiable experimental results. We also use the image morphing technique to generate fused image sequences to simulate the results of image fusion. While deploying our pixel-level image fusion algorithms, we observed several challenges with the popular image fusion methods: while the high computational cost and complex processing steps of such algorithms provide accurate fused results, they also make them hard to deploy in systems and applications that require real-time feedback, high flexibility, and low computational ability.
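
    A minimal sketch of pixel-level weighted average fusion and the mutual information quality metric named above. The global per-image weights are a simplifying assumption standing in for the thesis's segment-based weighting, and the function names are placeholders.

```python
import numpy as np

def weighted_average_fusion(imgs, weights=None):
    """Pixel-level fusion of co-registered images by (weighted) averaging."""
    stack = np.stack([i.astype(float) for i in imgs])
    w = np.ones(len(imgs)) if weights is None else np.asarray(weights, float)
    return np.tensordot(w / w.sum(), stack, axes=1)

def mutual_information(a, b, bins=64):
    """MI between a source image and the fused result, via joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Toy usage: fuse two noisy views of the same scene and score the result.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
views = [scene + 0.1 * rng.normal(size=scene.shape) for _ in range(2)]
fused = weighted_average_fusion(views)
print(mutual_information(views[0], fused))
```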