
    Optimizing Alzheimer's disease prediction using the nomadic people algorithm

    The problem with using microarray technology to detect diseases is that not every measured gene is analytically necessary. The presence of non-essential gene data adds a computational load to the detection method. Therefore, the purpose of this study is to reduce the size of the high-dimensional data by determining the most critical genes involved in Alzheimer's disease progression. The study also aims to predict patients using the subset of genes implicated in Alzheimer's disease. This paper uses feature selection techniques: information gain (IG) and a novel metaheuristic optimization technique based on a swarm algorithm derived from the behavior of nomadic peoples (NPO). The proposed method mimics the structure of these peoples' lives, their movements, and their search for new food sources. It is essentially a multi-swarm method: there are several clans, each seeking the best foraging opportunities. Prediction is carried out on the selected informative genes by a support vector machine (SVM), a model frequently used in a variety of prediction tasks. Prediction accuracy was used to evaluate the suggested system's performance. The results indicate that the NPO algorithm with the SVM model returns high accuracy based on the gene subset selected by the IG and NPO methods.
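
    To make the pipeline concrete, here is a minimal Python sketch of the IG-filter-plus-SVM stage on synthetic data. The NPO swarm search is not standard library code, so a top-k mutual-information filter stands in for the gene-subset selection; the dataset and parameters are illustrative assumptions.

```python
# Sketch of the IG filter + SVM prediction stage on synthetic microarray data.
# The NPO metaheuristic is replaced by a simple top-k information-gain filter.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a high-dimensional microarray matrix:
# 100 samples x 2000 "genes", few of which are informative.
X, y = make_classification(n_samples=100, n_features=2000,
                           n_informative=20, random_state=0)

pipe = make_pipeline(
    SelectKBest(mutual_info_classif, k=50),  # IG-style gene filter
    StandardScaler(),
    SVC(kernel="rbf", C=1.0),
)
scores = cross_val_score(pipe, X, y, cv=5)
print(f"CV accuracy with 50 selected genes: {scores.mean():.3f}")
```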

    An optimized approach for extensive segmentation and classification of brain MRI

    Given the significant contribution of medical image processing to the effective diagnosis of critical health conditions in humans, various methods and techniques have evolved for abnormality detection and classification. A review of existing approaches highlights that a substantial amount of work addresses the detection and segmentation process, but modelling of the classification problem has been less effective. This manuscript presents a simple and robust technique that offers a comprehensive segmentation process as well as a classification process using an Artificial Neural Network. Unlike existing approaches, the study offers finer granularity in foreground/background indexing through its comprehensive segmentation process, while introducing a unique morphological operation along with a graph-based belief network, ensuring approximately 99% accuracy for the proposed system in contrast to existing learning schemes.
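
    As a rough illustration of the segmentation-then-classification idea, the sketch below performs foreground/background indexing with a global threshold and a morphological opening, then feeds simple region features to a small neural network. The synthetic images, the chosen features, and the omission of the paper's graph-belief stage are all illustrative assumptions.

```python
# Threshold + morphological opening for foreground/background indexing,
# followed by an ANN classifier on region features. Data are synthetic.
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def segment(img):
    """Foreground mask via a crude global threshold plus an opening."""
    mask = img > img.mean()
    return ndimage.binary_opening(mask, structure=np.ones((3, 3)))

def region_features(img):
    mask = segment(img)
    fg = img[mask]
    return [mask.mean(),
            fg.mean() if fg.size else 0.0,
            fg.std() if fg.size else 0.0]

def make_slice(abnormal):
    """Synthetic 'MRI slice'; the abnormal class contains a bright blob."""
    img = rng.normal(0.2, 0.05, (64, 64))
    if abnormal:
        img[20:35, 20:35] += 0.6
    return img

X = np.array([region_features(make_slice(i % 2)) for i in range(200)])
y = np.array([i % 2 for i in range(200)])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```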

    Early Identification of Alzheimer’s Disease Using Medical Imaging: A Review From a Machine Learning Approach Perspective

    Alzheimer’s disease (AD) is the leading cause of dementia in older adults, accounting for up to 70% of dementia patients, and poses a serious public health hazard in the twenty-first century. AD is a progressive, irreversible, neurodegenerative disease with a long pre-clinical period; it affects brain cells, leading to memory loss, misperception, learning problems, and impaired decision-making. Despite its significance, no treatment options are presently available, although disease advancement can be slowed through medication. Unfortunately, AD is often diagnosed at a very late stage, after irreversible damage to brain cells has occurred and there is no scope to prevent further cognitive decline. The use of non-invasive neuroimaging procedures capable of detecting AD at preliminary stages is crucial for providing treatment that slows disease progression, and has emerged as a promising area of research. We conducted a comprehensive assessment of papers employing machine learning to predict AD using neuroimaging data. Most of the studies employed brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset, consisting of magnetic resonance imaging (MRI) and positron emission tomography (PET) images. The most widely used method, the support vector machine (SVM), has a mean accuracy of 75.4 percent, whereas convolutional neural networks (CNN) have a mean accuracy of 78.5 percent. Better classification accuracy has been achieved by combining MRI and PET rather than using a single neuroimaging technique. Overall, more complex models, such as deep learning, paired with multimodal and multidimensional data (neuroimaging, cognitive, clinical, behavioral and genetic) produced the best results. Although promising results have been achieved, there is still room for improving the performance of the proposed methods and providing assistance to healthcare professionals and clinicians.
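
    The review's fusion finding can be illustrated with a small sketch: concatenating MRI- and PET-style feature vectors before an SVM typically beats either modality alone. The feature vectors below are synthetic stand-ins, not ADNI data.

```python
# Early fusion sketch: two synthetic "modalities" share a class signal;
# concatenating them usually improves SVM cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300
y = rng.integers(0, 2, n)
signal = y[:, None] * 0.8                       # shared disease signal
X_mri = signal + rng.normal(size=(n, 40))       # MRI-derived features
X_pet = signal + rng.normal(size=(n, 40))       # PET-derived features
X_fused = np.hstack([X_mri, X_pet])             # simple early fusion

for name, X in [("MRI only", X_mri), ("PET only", X_pet),
                ("MRI+PET", X_fused)]:
    acc = cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```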

    Machine Learning for Multiclass Classification and Prediction of Alzheimer's Disease

    Alzheimer's disease (AD) is an irreversible neurodegenerative disorder and a common form of dementia. This research aims to develop machine learning algorithms that diagnose and predict the progression of AD from multimodal heterogeneous biomarkers, with a focus on early diagnosis. To meet this goal, several machine learning-based methods, each with unique characteristics for feature extraction and automated classification, prediction, and visualization, have been developed to discern subtle progression trends and predict the trajectory of disease progression. The methodology aims to enhance both multiclass classification accuracy and prediction outcomes by effectively modeling the interplay between the multimodal biomarkers, handling the missing-data challenge, and adequately extracting all the relevant features to be fed into the machine learning framework, all in order to understand the subtle changes that occur in the different stages of the disease. This research also investigates multitasking, to discover how the two processes of multiclass classification and prediction relate to one another in terms of the features they share, and whether they could learn from one another to optimize multiclass classification and prediction accuracy. This work also delves into predicting cognitive scores of specific tests over time using multimodal longitudinal data, with the intent of augmenting our prospects for analyzing the interplay between the different multimodal features of the input space and the predicted cognitive scores. Moreover, the power of modality fusion, kernelization, and tensorization has also been investigated to efficiently extract important features hidden in the lower-dimensional feature space without being distracted by those deemed irrelevant. With the adage that a picture is worth a thousand words, this dissertation introduces a unique color-coded visualization system with a fully integrated machine learning model for the enhanced diagnosis and prognosis of Alzheimer's disease. The incentive is to show that, through visualization, the challenges imposed by both the variability and interrelatedness of the multimodal features could be overcome. Ultimately, this form of visualization via machine learning informs on the challenges faced with multiclass classification and adds insight into the decision-making process for diagnosis and prognosis.
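
    One ingredient of such a framework, multiclass classification in the presence of missing multimodal values, can be sketched as follows. The three classes stand in for CN/MCI/AD and the data and imputer choice are illustrative assumptions, not the dissertation's actual pipeline.

```python
# Multiclass pipeline sketch: impute missing multimodal entries, then
# classify three synthetic groups with a one-vs-rest SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.impute import KNNImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, n_informative=12,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
# Simulate the missing-data challenge: knock out 10% of entries.
rng = np.random.default_rng(0)
X[rng.random(X.shape) < 0.10] = np.nan

pipe = make_pipeline(KNNImputer(n_neighbors=5), StandardScaler(),
                     SVC(decision_function_shape="ovr"))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```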

    Prediction of Alzheimer Disease using LeNet-CNN model with Optimal Adaptive Bilateral Filtering

    Alzheimer's disease is a degenerative dementia that causes progressively worsening memory loss and other cognitive and physical impairments over time. Mini-Mental State Examinations and other screening tools are helpful for early detection, but diagnostic MRI brain analysis is required. When Alzheimer's disease (AD) is detected in its earliest stages, patients may begin protective treatments before permanent brain damage has occurred. As demonstrated in cross-sectional imaging investigations of the disease, the characteristics of the lesion sites in AD patients, as identified by MRI, exhibit great variety and are dispersed across the image space. Toward this end, this study proposes optimized adaptive bilateral filtering combined with a deep learning model. The first stage denoises the images with the suggested adaptive bilateral filter (ABF), which improves denoising in edge, detail, and homogeneous areas separately. The ABF is then given a weight, and the Adaptive Equilibrium Optimizer (AEO) is used to determine the best possible value for that weight. LeNet, a CNN model, is then used to complete the AD classification. The first step in using the LeNet-5 network model to identify AD is to study the model's structure and parameters. The ADNI experimental dataset was used to validate the suggested technique and compare it to other models. The experimental findings show that the suggested method achieves a classification accuracy of 97.43%, specificity of 98.09%, sensitivity of 97.12%, and a Kappa index of 89.67%. Compared against competing algorithms, the suggested model performs best.
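
    A hedged sketch of the two-stage pipeline, bilateral-filter denoising followed by a LeNet-5-style CNN, is shown below. The AEO weight optimization is not reproduced, and the fixed filter parameters (d=9, sigma=75) are illustrative assumptions rather than the paper's tuned values.

```python
# Stage 1: bilateral-filter denoising. Stage 2: a LeNet-5-style CNN.
import cv2
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def denoise(img_u8):
    """Edge-preserving denoising; the paper tunes the weighting via AEO."""
    return cv2.bilateralFilter(img_u8, d=9, sigmaColor=75, sigmaSpace=75)

# Apply to a synthetic 8-bit slice.
img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
smooth = denoise(img)

def lenet5(input_shape=(32, 32, 1), n_classes=2):
    """Classic LeNet-5 layout: two conv/pool stages, then dense layers."""
    return keras.Sequential([
        layers.Conv2D(6, 5, activation="tanh", input_shape=input_shape),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = lenet5()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```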

    Development of Gaussian Learning Algorithms for Early Detection of Alzheimer's Disease

    Alzheimer's disease (AD) is the most common form of dementia, affecting 10% of the population over the age of 65, and the growing costs of managing AD are estimated at $259 billion, according to data reported in 2017 by the Alzheimer's Association. Moreover, with cognitive decline, the daily lives of affected persons and their families are severely impacted. Given a diagnosis of AD or its prodromal stage of mild cognitive impairment (MCI), early treatment may help patients preserve quality of life and slow the progression of the disease, even though the underlying disease cannot be reversed or stopped. This research aims to develop Gaussian learning algorithms, natural language processing (NLP) techniques, and mathematical models to effectively delineate MCI participants from the cognitively normal (CN) group, and to identify the most significant brain regions and patterns of change associated with the progression of AD. The focus is placed on the earliest manifestations of the disease (early MCI, or EMCI) in order to plan effective curative/therapeutic interventions and protocols. Multiple modalities of biomarkers have been found to be significantly sensitive in assessing the progression of AD. In this work, several novel multimodal classification frameworks based on the proposed Gaussian learning algorithms are created and applied to neuroimaging data. Classification based on the combination of structural magnetic resonance imaging (MRI), positron emission tomography (PET), and cerebrospinal fluid (CSF) biomarkers is seen as the most reliable approach for high-accuracy classification. Additionally, changes in linguistic complexity may provide complementary information for the diagnosis and prognosis of AD. For this research endeavor, an NLP-oriented neuropsychological assessment is developed to automatically analyze the distinguishing characteristics of text data in the MCI group versus the CN group. Early findings suggest significant linguistic differences between CN and MCI subjects in terms of word usage, vocabulary, recall, and sentence fragmentation. In summary, the results obtained indicate a high potential for the neuroimaging-based classification and NLP-oriented assessment to be used as a practical computer-aided diagnosis system for classification and prediction of AD and its prodromal stages. Future work will focus on early signs of AD that could help in planning curative and therapeutic interventions to slow the progression of the disease.
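
    As a minimal illustration of the Gaussian-learning component, the sketch below trains a Gaussian process classifier with an RBF kernel to separate two synthetic groups standing in for MCI and CN; the actual work uses multimodal MRI/PET/CSF biomarkers and proposed algorithms not reproduced here.

```python
# Gaussian process classification sketch on synthetic MCI-vs-CN features.
from sklearn.datasets import make_classification
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=15, n_informative=8,
                           random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),
                                random_state=0)
print("CV accuracy:", cross_val_score(gpc, X, y, cv=5).mean().round(3))
```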

    Unsupervised learning methods for identifying and evaluating disease clusters in electronic health records

    Introduction: Clustering algorithms are a class of algorithms that can discover groups of observations in complex data and are often used to identify subtypes of heterogeneous diseases in electronic health records (EHR). Evaluating clustering experiments for biological and clinical significance is a vital but challenging task due to the lack of consensus on best practices; as a result, the translation of findings from clustering experiments to clinical practice is limited. Aim: The aim of this thesis was to investigate and evaluate approaches that enable the evaluation of clustering experiments using EHR. Methods: We conducted a scoping review of clustering studies in EHR to identify common evaluation approaches. We systematically investigated the performance of the identified approaches using a cohort of Alzheimer's disease (AD) patients as an exemplar, comparing four different clustering methods (K-means, kernel K-means, Affinity Propagation and Latent Class Analysis). Using the same population, we developed and evaluated a method (MCHAMMER) that tests whether clusterable structures exist in EHR. To develop this method we tested several cluster validation indices and methods of generating null data to see which are best at discovering clusters. To enable robust benchmarking of evaluation approaches, we created a tool that generates synthetic EHR data containing known cluster labels across a range of clustering scenarios. Results: Across 67 EHR clustering studies, the most popular internal evaluation approach was comparing cluster results across multiple algorithms (30% of studies). We examined this approach by conducting a clustering experiment on a population of 10,065 AD patients with 21 demographic, symptom and comorbidity features. K-means found 5 clusters, kernel K-means found 2, Affinity Propagation found 5 and Latent Class Analysis found 6. The K-means solution was found to be the best, with the highest silhouette score (0.19), and was more predictive of outcomes. The five clusters found were: typical AD (n=2026), non-typical AD (n=1640), a cardiovascular disease cluster (n=686), a cancer cluster (n=1710) and a cluster of mental health issues, smoking and early disease onset (n=1528), which has been found in previous research as well as in the results of other clustering methods. We created a synthetic data generation tool that produces realistic EHR clusters whose separation and number of noise variables can be varied to alter the difficulty of the clustering problem. We found that decreasing cluster separation increased clustering difficulty significantly, whereas adding noise variables increased difficulty but not significantly. To develop the tool for assessing whether clusters exist, we tested different methods of null dataset generation and cluster validation indices; the best performing null dataset method was the min-max method, and the best performing indices were the Calinski-Harabasz index (94% accuracy), the Davies-Bouldin index (97%), the silhouette score (93%) and the BWC index (90%). We further found that clusters identified using the Calinski-Harabasz index were more likely to have significantly different outcomes between clusters. Lastly, we repeated the initial clustering experiment, comparing 10 different pre-processing methods. The three best performing methods were the RBF kernel (2 clusters), MCA (4 clusters), and MCA combined with PCA (6 clusters). The MCA approach gave the best results, with the highest silhouette score (0.23) and meaningful clusters, producing 4 clusters: heart and circulatory disease (n=1379), early-onset mental health (n=1761), a male cluster with memory loss (n=1823), and a female cluster with more problems (n=2244). Conclusion: We have developed and tested a series of methods and tools to enable the evaluation of EHR clustering experiments. We developed and proposed a novel cluster evaluation metric and provided a tool for benchmarking evaluation approaches in synthetic but realistic EHR data.
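
    The core evaluation loop described above, running several clusterers on the same cohort and comparing internal validation indices, can be sketched as follows. Synthetic blobs stand in for the EHR cohort, and the index set is the one named in the abstract (silhouette, Calinski-Harabasz, Davies-Bouldin).

```python
# Compare clusterers on one dataset using internal validation indices.
from sklearn.cluster import AffinityPropagation, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (calinski_harabasz_score, davies_bouldin_score,
                             silhouette_score)

X, _ = make_blobs(n_samples=500, centers=5, cluster_std=2.0, random_state=0)
for name, labels in [
    ("K-means (k=5)",
     KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)),
    ("Affinity Propagation",
     AffinityPropagation(random_state=0).fit_predict(X)),
]:
    print(name,
          "silhouette=%.2f" % silhouette_score(X, labels),
          "CH=%.1f" % calinski_harabasz_score(X, labels),
          "DB=%.2f" % davies_bouldin_score(X, labels))
```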

    Applications of Support Vector Machines as a Robust tool in High Throughput Virtual Screening

    Chemical space is enormous, but not all of it is pertinent to drug design. Virtual screening methods act as knowledge-based filters to discover coveted novel lead molecules possessing the desired pharmacological properties. The Support Vector Machine (SVM) is a reliable virtual screening tool for prioritizing molecules with the required biological activity and minimum toxicity. Its inherent advantages include tolerance of noisy data, mainly coming from varied high-throughput biological assays, high sensitivity, specificity and prediction accuracy, and a reduction in false positives. SVM-based classification methods can efficiently discriminate inhibitors from non-inhibitors, actives from inactives, toxic from non-toxic, and promiscuous from non-promiscuous molecules. As the principles of drug design also apply to agrochemicals, SVM methods are being applied to virtual screening for pesticides too. The current review discusses the basic kernels and models used for binary discrimination, as well as the features used for developing SVM-based scoring functions, which will enhance our understanding of molecular interactions. SVM modeling has also been compared by many researchers with other statistical methods such as artificial neural networks, k-nearest neighbours (kNN), decision trees, and partial least squares; such studies are also discussed in this review. Moreover, a case study involving the use of the SVM method for screening molecules for cancer therapy has been carried out, and the preliminary results presented here indicate that the SVM is an excellent classifier for screening molecules.
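
    A minimal sketch of SVM-based virtual screening is shown below: a classifier separates a small fraction of "actives" from "inactives" using numeric molecular descriptors. Real workflows would derive fingerprints with a cheminformatics toolkit such as RDKit; the descriptors here are synthetic stand-ins.

```python
# SVM screening sketch: imbalanced actives vs. inactives on synthetic
# molecular descriptors, with class weighting to reduce false negatives.
from sklearn.datasets import make_classification
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=100, n_informative=25,
                           weights=[0.9, 0.1], random_state=0)  # 10% actives
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["inactive", "active"]))
```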