3,793 research outputs found

    Caps & Capes - Volume I Issue IV


    Periodicity in wide-band time series

    Summary: To test the hypotheses that (i) electroencephalograms (EEGs) are largely made up of oscillations at many frequencies and (ii) the peaks in the power spectra represent oscillations, we applied a new method, called the Period Specific Average (PSA), to a wide sample of EEGs. Both hypotheses can be rejected.
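    As a rough illustration of what period-specific averaging might involve, the sketch below folds a signal into consecutive segments one candidate period long and averages them: a genuine oscillation at that period survives the averaging, while broadband activity averages towards zero. The function name and details are assumptions, not the authors' PSA implementation.

    import numpy as np

    def period_specific_average(signal, period_samples):
        """Fold `signal` into consecutive segments of `period_samples` and average them."""
        n_segments = len(signal) // period_samples
        if n_segments < 2:
            raise ValueError("signal too short for the requested period")
        segments = signal[:n_segments * period_samples].reshape(n_segments, period_samples)
        return segments.mean(axis=0)

    # Toy example: a 10 Hz sinusoid sampled at 250 Hz, buried in noise.
    fs = 250
    t = np.arange(0, 20, 1 / fs)
    eeg_like = np.sin(2 * np.pi * 10 * t) + np.random.randn(t.size)
    psa = period_specific_average(eeg_like, period_samples=fs // 10)
    print(psa.round(2))  # should resemble one cycle of the 10 Hz sine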

    Region-based Convolutional Neural Network Driven Alzheimer’s Severity Prediction

    Alzheimer's disease can affect individuals over the age of 60, and the risk of developing it increases with age. While deep learning approaches have shown promising results in detecting Alzheimer's disease, they are not the only techniques available for diagnosis and treatment. That said, a Region-based Convolutional Neural Network (RCNN), with its efficient feature extraction and classification, can be a valuable tool for detecting Alzheimer's disease, supporting more accurate and personalized diagnosis as well as earlier treatment and intervention, although new methods and techniques for this disorder still need to be developed. Considering this, we propose an innovative Region-based Convolutional Neural Network Driven Alzheimer's Severity Prediction approach in this paper. Exhaustive experimental results demonstrate the efficacy of our Alzheimer's prediction system.
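    The abstract does not spell out the network details, so the following is only a minimal PyTorch sketch of what a region-based convolutional classifier for severity grades could look like; the region crops, layer sizes and four severity classes are illustrative assumptions, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class RegionCNNSeverity(nn.Module):
        def __init__(self, num_classes=4):
            super().__init__()
            # Shared convolutional backbone applied to every region crop.
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, regions):
            # regions: (batch, n_regions, 1, H, W) crops proposed from a scan.
            b, r, c, h, w = regions.shape
            feats = self.backbone(regions.view(b * r, c, h, w)).flatten(1)
            pooled = feats.view(b, r, -1).mean(dim=1)  # pool region features per scan
            return self.classifier(pooled)             # one severity grade per scan

    model = RegionCNNSeverity()
    dummy = torch.randn(2, 8, 1, 64, 64)  # 2 scans, 8 candidate regions each
    print(model(dummy).shape)             # torch.Size([2, 4])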

    LIC-GAN: Language Information Conditioned Graph Generative GAN Model

    Deep generative models for natural language data offer a new angle on the problem of graph synthesis: by optimizing differentiable models that directly generate graphs, it is possible to side-step expensive search procedures in the discrete and vast space of possible graphs. We introduce LIC-GAN, an implicit, likelihood-free generative model for small graphs that circumvents the need for expensive graph matching procedures. Our method takes a natural language query as input and, using a combination of language modelling and Generative Adversarial Networks (GANs), returns a graph that closely matches the description in the query. We combine our approach with a reward network to further enhance the graph generation with desired properties. Our experiments show that LIC-GAN does well on metrics such as PropMatch and Closeness, achieving scores of 0.36 and 0.48. We also show that LIC-GAN performs as well as ChatGPT, with ChatGPT getting scores of 0.40 and 0.42. We also conduct a few experiments to demonstrate the robustness of our method, while also highlighting a few interesting caveats of the model. Comment: 15 pages, 8 figures
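    As an illustration of the overall idea (a generator conditioned on a language embedding that emits a graph), here is a minimal PyTorch sketch; the dimensions, the embedding stand-in and the soft adjacency-matrix output are assumptions, not the LIC-GAN architecture, and the discriminator and reward network are omitted.

    import torch
    import torch.nn as nn

    class ConditionedGraphGenerator(nn.Module):
        def __init__(self, text_dim=64, noise_dim=32, max_nodes=10):
            super().__init__()
            self.max_nodes = max_nodes
            self.net = nn.Sequential(
                nn.Linear(text_dim + noise_dim, 128), nn.ReLU(),
                nn.Linear(128, max_nodes * max_nodes),
            )

        def forward(self, text_emb, noise):
            logits = self.net(torch.cat([text_emb, noise], dim=-1))
            adj = torch.sigmoid(logits).view(-1, self.max_nodes, self.max_nodes)
            # Symmetrise and zero the diagonal so the output is a valid simple graph.
            adj = 0.5 * (adj + adj.transpose(1, 2))
            return adj * (1 - torch.eye(self.max_nodes))

    gen = ConditionedGraphGenerator()
    text_emb = torch.randn(1, 64)  # stand-in for a language-model embedding of the query
    adj = gen(text_emb, torch.randn(1, 32))
    print(adj.shape)               # torch.Size([1, 10, 10])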

    Explainable Machine Learning for Robust Modelling in Healthcare

    Deep Learning (DL) has seen an unprecedented rise in popularity over the last decade, with applications ranging from machine translation to self-driving cars. This includes extensive work in sensitive domains such as healthcare and finance with, for example, models recently achieving better-than-human performance in tasks such as chest x-ray diagnosis. However, despite these impressive results there are relatively few real-world deployments of DL models in sensitive scenarios, with experts claiming this is due to a lack of model transparency, reproducibility, robustness and privacy; this is in spite of numerous techniques having been proposed to address these issues. Most notable is the development of Explainable Deep Learning techniques, which aim to compute feature importance values for a given input (i.e. which features does a model use to make its decision?); such methods can greatly improve the transparency of a model, but have little impact on reproducibility, robustness and privacy. In this thesis, I explore how explainability techniques can be used to address these issues, by using feature attributions to improve our understanding of how model parameters change during training and across different hyperparameter setups. Through the introduction of a novel model architecture and training technique that uses model explanations to improve model consistency, I show how explanations can improve privacy, robustness and reproducibility. Extensive experiments across a number of sensitive datasets from healthcare and bioinformatics, in both traditional and federated learning settings, show that these techniques have a significant impact on the quality of these models. I discuss the impact these results could have on real-world applications of deep learning, given the issues addressed by the proposed techniques, and present some ideas for further research in this area.
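    For readers unfamiliar with feature attributions, the sketch below computes a simple gradient-times-input attribution for a toy model; this is only a common explainability baseline, not the consistency-based technique developed in the thesis, and the model and input are placeholders.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    x = torch.randn(1, 8, requires_grad=True)  # one record with 8 input features

    logits = model(x)
    predicted_class = logits.argmax(dim=1)
    # Gradient of the predicted class score with respect to the input features.
    logits[0, predicted_class].backward()
    attributions = (x.grad * x).detach().squeeze()
    print(attributions)  # signed importance of each feature for this prediction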