    Machine Learning for the Diagnosis of Autism Spectrum Disorder

    Autism Spectrum Disorder (ASD) is a neurological disorder characterized by a wide range of behavioral and social abnormalities, causing problems with social skills, repetitive behaviors, speech, and nonverbal communication. Although there is no cure for ASD, an early diagnosis can help the patient take precautionary steps. Diagnosis of ASD has attracted great interest recently, as researchers have yet to find a specific biomarker that detects the disease reliably. To be diagnosed with ASD, subjects must undergo behavioral observation and interviews, which are sometimes inaccurate. Moreover, the differences between neuroimages of ASD subjects and healthy control (HC) subjects are subtle, which makes neuroimaging-based diagnosis difficult. Machine learning-based approaches to diagnosing ASD are therefore becoming increasingly popular. In these approaches, features are extracted from either functional or structural MRI images to build the models. In this study, I first created brain networks from resting-state functional MRI (rs-fMRI) images using the 264-region parcellation scheme; these 264 regions capture the functional activity of the brain more accurately than the regions defined in other parcellation schemes. Next, I extracted the network spectrum as a raw feature and combined it with other network-based topological properties: assortativity, clustering coefficient, and average degree. By applying a feature selection algorithm to the extracted features, I reduced their dimensionality to mitigate overfitting. I then used the selected features in a support vector machine (SVM), K-nearest neighbors (KNN), linear discriminant analysis (LDA), and logistic regression (LR) for the diagnosis of ASD. Using the proposed method on the Autism Brain Imaging Data Exchange (ABIDE) dataset, I achieved classification accuracies of 78.4% for LDA, 77.0% for LR, 73.5% for SVM, and 73.8% for KNN. Next, I built a deep neural network (DNN) model for classification and feature selection using an autoencoder. In this approach, I used the previously defined features to build the DNN classifier, which is pre-trained using the autoencoder; this pre-training significantly increases the classifier's performance. I also proposed an autoencoder-based feature selector, in which the latent space of the autoencoder provides a discriminative and compressed representation of the features. To make this representation more discriminative, the autoencoder is pre-trained together with the DNN classifier. The classification accuracies of the DNN classifier and the autoencoder-based feature selector are 79.2% and 74.6%, respectively. Finally, I studied structural MRI images and proposed a convolutional autoencoder (CAE) based classification model that uses T1-weighted MRI images without any pre-processing. Because the effect of age is very important when studying structural images for the diagnosis of ASD, I used the ABIDE 1 dataset, which covers subjects across a wide range of ages. Using the proposed CAE-based diagnosis method, I achieved a classification accuracy of 96.6%, better than any other study for the diagnosis of ASD using the ABIDE 1 dataset.
The results of this thesis demonstrate that the spectrum of the brain network is an essential feature for the diagnosis of ASD and that, rather than extracting features from structural MRI images, it is more efficient to feed the images directly into deep learning models. The studies proposed in this thesis can help build an early diagnosis model for ASD.
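    To make the first stage of the pipeline concrete, the following Python sketch builds a brain network from region-wise time series, extracts the adjacency spectrum together with assortativity, clustering coefficient, and average degree, then applies feature selection and an LDA classifier. It is an illustration only, not the thesis code: random arrays stand in for the ABIDE rs-fMRI data, the 0.15 correlation threshold and the number of selected features are assumed values, and all pre-processing is omitted.

```python
# Hedged sketch of the graph-feature pipeline described above.
import numpy as np
import networkx as nx
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

N_REGIONS = 264    # size of the 264-region parcellation named in the abstract
EDGE_THRESH = 0.15 # assumed correlation threshold for binarizing the network

def subject_features(timeseries: np.ndarray) -> np.ndarray:
    """Features for one subject from a (timepoints, regions) array."""
    corr = np.corrcoef(timeseries.T)               # functional connectivity
    adj = (np.abs(corr) > EDGE_THRESH).astype(float)
    np.fill_diagonal(adj, 0.0)
    g = nx.from_numpy_array(adj)
    # "Spectrum" feature: sorted eigenvalues of the adjacency matrix.
    spectrum = np.sort(np.linalg.eigvalsh(adj))
    topo = np.array([
        nx.degree_assortativity_coefficient(g),    # assortativity
        nx.average_clustering(g),                  # clustering coefficient
        np.mean([d for _, d in g.degree()]),       # average degree
    ])
    return np.concatenate([spectrum, topo])

# Stand-in data: 50 subjects with random "time series" and random labels.
rng = np.random.default_rng(0)
X = np.stack([subject_features(rng.normal(size=(120, N_REGIONS)))
              for _ in range(50)])
y = rng.integers(0, 2, size=50)                    # 0 = HC, 1 = ASD

# Univariate feature selection to reduce dimensionality, then LDA;
# SVM, KNN, or LR would slot into the same pipeline.
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=40),
                    LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, y, cv=5).mean())
```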

    An efficient approach of secure group association management in densely deployed heterogeneous distributed sensor network

    A heterogeneous distributed sensor network (HDSN) is a type of distributed sensor network in which sensors of different deployment groups and different functional types participate at the same time. In other words, the sensors are divided into deployment groups according to the types of data transmission they perform, but they cooperate with each other both within and outside their respective groups. In traditional heterogeneous sensor networks, by contrast, the classification is based on transmission range, energy level, computation ability, and sensing range. Taking this model into account, we propose a secure group association authentication mechanism using a one-way accumulator, which ensures that, before collaborating on a particular task, any pair of nodes in the same deployment group can verify the legitimacy of each other's group association. Secure addition and deletion of sensors are also supported in this approach, and a policy-based sensor addition procedure is suggested. For secure handling of a group's disconnected nodes, we use an efficient pairwise key derivation scheme to resist any adversary's attempt. Alongside our mechanism, we also discuss the characteristics of HDSNs, their scope, applicability, future directions, and challenges. The efficiency of our security management approach is demonstrated with performance evaluation and analysis.
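    A minimal Python sketch of the accumulator-based verification idea: each group member holds a witness derived from an RSA-style one-way accumulator, and any peer can check its group association with a single modular exponentiation. This illustrates only the primitive, not the paper's protocol: the modulus is a toy 61 × 53 example, node identities map to hardcoded primes rather than being hashed to primes, and secure setup, key distribution, and the addition/deletion procedures are omitted.

```python
# Toy RSA-style one-way accumulator for group-association checks.
from math import prod

# Trusted-setup values (toy sizes; a real deployment uses a large RSA
# modulus whose factorization is discarded after setup).
N = 61 * 53                # hypothetical modulus p * q
g = 2                      # public base, coprime to N

# Each sensor in a deployment group is represented by a prime
# (in practice, a hash-to-prime of its identity).
group_primes = {"node_a": 101, "node_b": 103, "node_c": 107}

# Accumulator over the whole group, computed at deployment time.
acc = pow(g, prod(group_primes.values()), N)

def witness(node: str) -> int:
    """Witness for one node: g raised to the product of the OTHER primes."""
    others = prod(p for n, p in group_primes.items() if n != node)
    return pow(g, others, N)

def verify(node: str, wit: int) -> bool:
    """A peer checks group association: wit^prime(node) must equal acc."""
    return pow(wit, group_primes[node], N) == acc

w = witness("node_b")
print(verify("node_b", w))      # True: legitimate group member
print(verify("node_b", w + 1))  # False: forged witness rejected
```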

    12-segment display for the Bengali numerical characters

    Researchers have long worked on representing the Bengali numerical characters. In this paper, the idea of a 12-segment display is introduced, which offers a better appearance than existing or previously proposed display systems. A 12-segment display for Bengali numerical characters requires a 4-bit input to represent each digit. Appropriate logic circuits are also designed for this purpose.
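    As an illustration of the decoding step described above, the sketch below maps a 4-bit digit code to 12 segment-enable lines through a lookup table, which is how the combinational logic could be simulated in software. The bit patterns shown are placeholders only; the actual segment assignments for the Bengali numerals are defined in the paper.

```python
# Hypothetical 4-bit -> 12-segment decoder (patterns are NOT the paper's).
SEGMENT_TABLE = {
    0b0000: 0b111111000011,  # placeholder pattern for digit 0
    0b0001: 0b000011000010,  # placeholder pattern for digit 1
    0b0010: 0b110110100001,  # placeholder pattern for digit 2
    # ... entries for digits 3-9 would follow the paper's design
}

def decode(digit_code: int) -> list[int]:
    """Return the 12 segment-enable lines for a 4-bit digit code."""
    if digit_code not in SEGMENT_TABLE:
        raise ValueError("undefined 4-bit input")
    pattern = SEGMENT_TABLE[digit_code]
    return [(pattern >> i) & 1 for i in range(12)]  # segments 0..11

print(decode(0b0001))
```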

    Explainable deep learning in plant phenotyping

    The increasing human population and variable weather conditions, due to climate change, pose a threat to the world's food security. To improve global food security, we need to provide breeders with tools to develop crop cultivars that are more resilient to extreme weather conditions, and growers with tools to more effectively manage biotic and abiotic stresses in their crops. Plant phenotyping, the measurement of a plant's structural and functional characteristics, has the potential to inform, improve, and accelerate both breeders' selections and growers' management decisions. To improve the speed, reliability, and scale of plant phenotyping procedures, many researchers have adopted deep learning methods to estimate phenotypic information from images of plants and crops. Despite the successful results of these image-based phenotyping studies, the representations learned by deep learning models remain difficult to interpret, understand, and explain; for this reason, deep learning models are still considered to be black boxes. Explainable AI (XAI) is a promising approach for opening the deep learning model's black box and providing plant scientists with image-based phenotypic information that is interpretable and trustworthy. Although various fields of study have adopted XAI to advance their understanding of deep learning models, it has yet to be well studied in the context of plant phenotyping research. In this review article, we survey existing XAI studies in plant shoot phenotyping, as well as related domains, to help plant researchers understand the benefits of XAI and make it easier for them to integrate XAI into their future studies. Elucidating the representations within a deep learning model can help researchers explain the model's decisions, relate the features detected by the model to the underlying plant physiology, and enhance the trustworthiness of image-based phenotypic information used in food production systems.

    Leveraging Guided Backpropagation to Select Convolutional Neural Networks for Plant Classification

    The development of state-of-the-art convolutional neural networks (CNNs) has allowed researchers to perform plant classification tasks that were previously thought impossible or that relied on human judgment. Researchers often develop complex CNN models to achieve better performance, introducing over-parameterization and forcing the model to overfit the training dataset. The most popular way to evaluate overfitting in a deep learning model is to inspect its accuracy and loss curves. These curves may help in understanding a model's performance, but they provide no guidance on how the model could be modified to perform better. In this article, we analyzed the relation between the features learned by a model and its capacity, and showed that a model with higher representational capacity may learn many subtle features that negatively affect its performance. Next, we showed that the shallow layers of a deep learning model learn more diverse features than the deeper layers. Finally, we propose the SSIM cut curve, a new way to select the depth of a CNN model using the pairwise similarity matrix between visualizations of the features learned at different depths, obtained with Guided Backpropagation. We showed that our proposed method could pave a new way to select better CNN models.
    https://www.frontiersin.org/articles/10.3389/frai.2022.871162/ful
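    The following Python sketch illustrates the core of the proposed selection procedure: guided backpropagation visualizations are computed at several network depths, and their pairwise SSIM matrix, from which the SSIM cut curve would be derived, quantifies how similar they are. It is a minimal stand-in rather than the authors' implementation: a small randomly initialized CNN replaces their trained models, and training, datasets, and the cut-curve computation itself are omitted.

```python
# Guided backprop at several depths, then a pairwise SSIM matrix.
import numpy as np
import torch
import torch.nn as nn
from skimage.metrics import structural_similarity

torch.manual_seed(0)

def guided_relu_hook(module, grad_in, grad_out):
    # Guided backprop: on top of the normal ReLU backward pass,
    # also zero out negative incoming gradients.
    return (torch.clamp(grad_in[0], min=0.0),)

# A small stand-in CNN; every ReLU gets the guided-backprop hook.
blocks = nn.ModuleList()
for i in range(4):
    act = nn.ReLU()
    act.register_full_backward_hook(guided_relu_hook)
    blocks.append(nn.Sequential(
        nn.Conv2d(3 if i == 0 else 8, 8, kernel_size=3, padding=1), act))

x = torch.rand(1, 3, 64, 64, requires_grad=True)  # stand-in input image

maps = []
for depth in range(1, len(blocks) + 1):
    h = x
    for block in blocks[:depth]:
        h = block(h)
    x.grad = None
    h.mean().backward()                      # response of the depth-d features
    sal = x.grad.abs().max(dim=1).values[0]  # collapse channels -> H x W map
    sal = sal / (sal.max() + 1e-8)           # normalize to [0, 1] for SSIM
    maps.append(sal.numpy())

# Pairwise SSIM between the visualizations from different depths.
ssim = np.array([[structural_similarity(a, b, data_range=1.0)
                  for b in maps] for a in maps])
print(np.round(ssim, 2))
```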

    AI is a viable alternative to high throughput screening: a 318-target study

    High throughput screening (HTS) is routinely used to identify bioactive small molecules. This requires physical compounds, which limits coverage of accessible chemical space. Computational approaches combined with vast on-demand chemical libraries can access far greater chemical space, provided that the predictive accuracy is sufficient to identify useful molecules. Through the largest and most diverse virtual HTS campaign reported to date, comprising 318 individual projects, we demonstrate that our AtomNet® convolutional neural network successfully finds novel hits across every major therapeutic area and protein class. We address historical limitations of computational screening by demonstrating success for target proteins without known binders, without high-quality X-ray crystal structures, and without manual cherry-picking of compounds. We show that the molecules selected by the AtomNet® model are novel drug-like scaffolds rather than minor modifications to known bioactive compounds. Our empirical results suggest that computational methods can substantially replace HTS as the first step of small-molecule drug discovery.

    Future trends in security issues in internet and web applications

    Web applications have proliferated massively in recent years as the web has been embraced by millions of businesses and government sectors as an inexpensive channel for communicating and exchanging information with prospects and conducting transactions with customers. Web applications are usually accessed from a web browser and, beyond typical informational site-surfing, cover a range of activities such as e-banking, webmail, online shopping, community websites, blogs, vlogs, network monitoring, and bulletin boards, while the Internet is the underlying networking infrastructure that connects millions of computers that may be located in different geographic locations.

    Simulation technologies in networking and communications: selecting the best tool for the test

    Simulation is a widely used mechanism for validating theoretical models of networking and communication systems. Although claims based on simulations are generally considered reliable, how reliable they really are is best determined with real-world implementation trials. Simulation Technologies in Networking and Communications: Selecting the Best Tool for the Test addresses the spectrum of issues regarding the different mechanisms related to simulation technologies in the networking and communications fields. Focusing on the practice of simulation testing rather than the theory, it presents guidance on selecting the most suitable simulation tool for a given test.