7 research outputs found

    Convolutional ensembles for Arabic Handwritten Character and Digit Recognition

    A learning algorithm is proposed for the task of Arabic handwritten character and digit recognition. The architecture consists of an ensemble of different Convolutional Neural Networks. The proposed training algorithm uses a combination of adaptive gradient descent in the first epochs and regular stochastic gradient descent in the final epochs to facilitate convergence. Different validation strategies are tested, namely Monte Carlo cross-validation and k-fold cross-validation. Hyper-parameter tuning was performed on the MADbase digit dataset. State-of-the-art validation and testing classification accuracies were achieved, with average values of 99.74% and 99.47%, respectively. The same algorithm was then trained and tested on the AHCD character dataset, also yielding state-of-the-art validation and testing classification accuracies: 98.60% and 98.42%, respectively.
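    The optimizer-switching scheme and the ensemble averaging described above can be sketched as follows. This is a minimal illustration, not the authors' exact architectures or hyper-parameters: it assumes PyTorch, 32x32 grayscale inputs, 10 classes, and a placeholder data loader.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Illustrative ensemble member for 32x32 grayscale character/digit images."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.fc = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 32x32 -> 16x16
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 16x16 -> 8x8
        return self.fc(x.flatten(1))

def train_member(model, loader, epochs=10, switch_epoch=5, lr=1e-3):
    """Adaptive optimizer (Adam) for the first epochs, then plain SGD for the final ones."""
    criterion = nn.CrossEntropyLoss()
    adam = torch.optim.Adam(model.parameters(), lr=lr)
    sgd = torch.optim.SGD(model.parameters(), lr=lr * 0.1)
    for epoch in range(epochs):
        optimizer = adam if epoch < switch_epoch else sgd
        for images, labels in loader:          # loader: placeholder DataLoader
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()

def ensemble_predict(models, images):
    """Average the softmax outputs of all ensemble members and take the argmax."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(images), dim=1) for m in models])
    return probs.mean(dim=0).argmax(dim=1)
```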

    Safe exposure distances for transcranial magnetic stimulation based on computer simulations

    The results of a computer simulation examining the compliance of a given transcranial magnetic stimulation device with the 2010 International Commission on Non-Ionizing Radiation Protection (ICNIRP) guidelines are presented. The objective was to update the safe-distance estimates using the most current safety guidelines and to compare them with values reported in previous publications. The generated 3D data were compared against results available in the literature for the MCB-70 coil by Medtronic. Regarding occupational exposure, safe distances of 1.46 m and 0.96 m are derived from the simulation according to the 2003 and 2010 ICNIRP guidelines, respectively. These values are then compared with safe distances previously reported in other studies.
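    The comparison step behind such safe-distance estimates can be illustrated with a short sketch: given simulated field magnitudes sampled along a line away from the coil, find the first distance at which the field drops below a guideline reference level. The decay curve and reference level below are placeholders, not the paper's MCB-70 simulation output or the actual ICNIRP limits.

```python
import numpy as np

def safe_distance(distances_m, field_values, reference_level):
    """Smallest sampled distance at which the simulated field is at or below the reference level."""
    below = np.asarray(field_values) <= reference_level
    if not below.any():
        raise ValueError("Field never drops below the reference level in the sampled range")
    return np.asarray(distances_m)[np.argmax(below)]

# Hypothetical monotonically decaying field standing in for the 3D simulation output.
distances = np.linspace(0.05, 3.0, 600)   # metres from the coil
field = 1.0 / distances**3                # arbitrary units, placeholder decay law
print(safe_distance(distances, field, reference_level=0.5))
```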

    Local Interpretable Model-Agnostic Explanations for Classification of Lymph Node Metastases

    An application of explainable artificial intelligence to medical data is presented. There is an increasing demand in the machine learning literature for such explainable models in health-related applications. This work aims to generate explanations of how a Convolutional Neural Network (CNN) detects tumor tissue in patches extracted from histology whole-slide images. This is achieved using the Local Interpretable Model-Agnostic Explanations (LIME) methodology. Two publicly available convolutional neural networks trained on the PatchCamelyon benchmark are analyzed. Three common segmentation algorithms are compared for superpixel generation, and a fourth, simpler, parameter-free segmentation algorithm is proposed. The main characteristics of the explanations are discussed, as well as the key patterns identified in true-positive predictions. The results are compared with medical annotations and the literature and suggest that the CNN predictions follow at least some aspects of human expert knowledge.
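    The LIME workflow with interchangeable superpixel algorithms can be sketched as follows, assuming the `lime` and `scikit-image` packages. `cnn_predict_proba` stands in for the trained PatchCamelyon classifier's probability function, and the grid segmentation is only an illustrative stand-in for the paper's proposed parameter-free algorithm.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import slic, quickshift, felzenszwalb

def grid_segments(image, cell=12):
    """Parameter-light stand-in: a fixed square grid of 'superpixels'."""
    h, w = image.shape[:2]
    rows = np.arange(h) // cell
    cols = np.arange(w) // cell
    return rows[:, None] * (w // cell + 1) + cols[None, :]

segmentations = {
    "slic": lambda img: slic(img, n_segments=100, compactness=10),
    "quickshift": lambda img: quickshift(img, kernel_size=4, max_dist=20),
    "felzenszwalb": lambda img: felzenszwalb(img, scale=100),
    "grid": grid_segments,
}

def explain_patch(patch, cnn_predict_proba, seg_name="slic", num_samples=1000):
    """Explain one H x W x 3 histology patch with the chosen superpixel algorithm."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        patch,
        cnn_predict_proba,                      # batch of patches -> class probabilities
        segmentation_fn=segmentations[seg_name],
        top_labels=1,
        hide_color=0,
        num_samples=num_samples,
    )
    label = explanation.top_labels[0]
    # Patch with the most positively contributing superpixels highlighted.
    return explanation.get_image_and_mask(label, positive_only=True, num_features=5)
```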

    Explainable Artificial Intelligence for Bias Detection in COVID CT-Scan Classifiers

    Problem: An application of explainable artificial intelligence methods to COVID CT-scan classifiers is presented. Motivation: Classifiers may be relying on spurious artifacts in dataset images to achieve high performance, and explainability techniques can help identify this issue. Aim: For this purpose, several approaches were used in tandem to create a complete overview of the classifications. Methodology: The techniques used included Grad-CAM, LIME, RISE, Squaregrid, and direct gradient approaches (Vanilla, Smooth, Integrated). Main results: Among the deep neural network architectures evaluated for this image classification task, VGG16 was the most affected by biases towards spurious artifacts, while DenseNet was notably more robust against them. Further impacts: The results also show that small differences in validation accuracy can cause drastic changes in the explanation heatmaps of DenseNet architectures, indicating that small changes in validation accuracy may have large impacts on the biases learned by the networks. Notably, the strong performance metrics achieved by all these networks (accuracy, F1 score, and AUC all in the 80-90% range) could give users the erroneous impression that there is no bias; the analysis of the explanation heatmaps, however, highlights it.
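    As one example of the explanation methods listed above, a Grad-CAM heatmap can be computed with forward/backward hooks on the last convolutional layer. This is a generic sketch, assuming a torchvision VGG16; the fine-tuned COVID CT-scan weights and the target class index are placeholders.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

def grad_cam(model, image, target_class, conv_layer):
    """Heatmap of where `conv_layer` activations support `target_class` for one image (C, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["value"] = output

    def bwd_hook(module, grad_input, grad_output):
        gradients["value"] = grad_output[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(image.unsqueeze(0))[0, target_class]
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted activation map
    cam = F.interpolate(cam.unsqueeze(0), size=image.shape[1:],
                        mode="bilinear", align_corners=False)[0, 0]
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1]

model = vgg16(weights=None)       # placeholder: load the fine-tuned CT-scan classifier here
model.eval()
heatmap = grad_cam(model, torch.rand(3, 224, 224), target_class=0,
                   conv_layer=model.features[28])  # last conv layer of VGG16 features
```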

    Leishmania donovani Nucleoside Hydrolase terminal domains in cross-protective immunotherapy against Leishmania amazonensis murine infection

    Nucleoside hydrolases of the Leishmania genus are enzymes vital for DNA replication and conserved phylogenetic markers of the parasites. The Leishmania donovani nucleoside hydrolase NH36 induced a mainly CD4+ T cell-driven protective response against Leishmania chagasi infection in mice, directed against its C-terminal domain. In this study, we used the three recombinant domains of NH36: the N-terminal domain (F1, amino acids 1-103), the central domain (F2, amino acids 104-198), and the C-terminal domain (F3, amino acids 199-314), in combination with saponin, and assayed their immunotherapeutic effect on Balb/c mice previously infected with L. amazonensis. We found that the F1 and F3 peptides produced strong cross-immunotherapeutic effects, reducing the size of footpad lesions to 48% and 64%, and the parasite load in footpads to 82.6% and 81%, respectively. The F3 peptide induced the strongest anti-NH36 antibody response and intradermal response (IDR) against L. amazonensis, as well as high secretion of IFN-γ and TNF-α with reduced levels of IL-10. The F1 vaccine induced similar increases in IgG2b antibodies and IFN-γ and TNF-α levels, but no IDR and no reduction of IL-10. Multiparameter flow cytometry was used to assess the immune response after immunotherapy and disclosed that the degree of the immunotherapeutic effect is predicted by the frequencies of CD4+ and CD8+ T cells producing IL-2, TNF-α, or both. Total frequencies and frequencies of double-cytokine-producing CD4+ T cells were enhanced by the F1 and F3 vaccines. Collectively, our multifunctional analysis disclosed that immunotherapeutic protection improved as the CD4+ responses progressed from 1+ to 2+, in the case of the F1 and F3 vaccines, and as the CD8+ responses changed qualitatively from 1+ to 3+, mainly in the case of the F1 vaccine, providing new correlates of immunotherapeutic protection against cutaneous leishmaniasis in mice based on Th1- and CD8+-mediated immune responses.

    Sharpening local interpretable model-agnostic explanations for histopathology: improved understandability and reliability

    Being accountable for the reports they sign, pathologists may be wary of high-quality deep learning outcomes if the decision-making is not understandable. Applying off-the-shelf methods with default configurations, such as Local Interpretable Model-Agnostic Explanations (LIME), is not sufficient to generate stable and understandable explanations. This work improves the application of LIME to histopathology images by leveraging nuclei annotations, creating a reliable way for pathologists to audit black-box tumor classifiers. The obtained visualizations reveal the sharp, focused attention of the deep classifier on the neoplastic nuclei in the dataset, an observation in line with clinical decision-making. Compared to standard LIME, our explanations are more understandable for domain experts, show higher stability, and pass sanity checks of consistency under data or initialization changes and of sensitivity to network parameters. This represents a promising step towards giving pathologists tools to obtain additional information about image classification models. The code and trained models are available on GitHub.
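    The central idea of reusing nuclei annotations can be sketched by feeding the annotation mask to LIME as the segmentation, so that each annotated nucleus becomes one interpretable region. This sketch assumes the `lime` package and a labelled mask (0 = background, 1..N = nuclei); `cnn_predict_proba`, the mask, and the background grid size are illustrative placeholders, not the released code.

```python
import numpy as np
from lime import lime_image

def annotation_segments(nuclei_mask, background_cell=32):
    """Use each annotated nucleus as a LIME segment; tile the background into a coarse grid."""
    h, w = nuclei_mask.shape
    grid = (np.arange(h)[:, None] // background_cell) * (w // background_cell + 1) \
        + np.arange(w)[None, :] // background_cell
    segments = grid + int(nuclei_mask.max()) + 1      # background labels offset past nucleus ids
    segments[nuclei_mask > 0] = nuclei_mask[nuclei_mask > 0]
    return segments

def explain_with_nuclei(patch, nuclei_mask, cnn_predict_proba, num_samples=2000):
    """Explain one H x W x 3 patch, with perturbations applied nucleus by nucleus."""
    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        patch,
        cnn_predict_proba,                            # batch of patches -> class probabilities
        segmentation_fn=lambda img: annotation_segments(nuclei_mask),
        top_labels=1,
        num_samples=num_samples,
    )
    label = explanation.top_labels[0]
    return explanation.get_image_and_mask(label, positive_only=True, num_features=10)
```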