25 research outputs found

    Visualization and Analysis of Transformer Attention

    The ability to select the relevant portion of the input is a key feature for limiting sensory input and focusing on its most informative part. The transformer architecture is among the best-performing deep neural network architectures, largely due to its attention mechanism. Attention allows the model to spot relevant connections between portions of an image and to highlight them. Since the model is complex, it is not easy to determine what these connections are and which areas matter. We discuss a technique for showing these areas and highlighting the regions most relevant for label attribution.
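    The abstract does not name a specific model, so the sketch below is an illustration only: it assumes a pretrained Vision Transformer from the Hugging Face transformers library and shows one common way to surface such attention maps, averaging the last-layer [CLS]-to-patch attention over heads and reshaping it into a heatmap. The checkpoint name and the input file are placeholders.

```python
# Hedged sketch: visualize last-layer [CLS]-to-patch attention of a pretrained ViT.
import numpy as np
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTModel.from_pretrained("google/vit-base-patch16-224")
model.eval()

image = Image.open("sample.png").convert("RGB")   # hypothetical input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# Last-layer attention has shape (batch, heads, tokens, tokens); token 0 is [CLS].
attn = outputs.attentions[-1][0]            # (heads, tokens, tokens)
cls_to_patches = attn[:, 0, 1:].mean(0)     # average over heads, drop the [CLS] column
side = int(cls_to_patches.numel() ** 0.5)   # 14 for 224px images with 16px patches
heatmap = cls_to_patches.reshape(side, side).numpy()
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)

# The normalized heatmap can be upsampled to the image size and blended with it
# to highlight the regions most relevant for the prediction.
np.save("attention_heatmap.npy", heatmap)
```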

    Explainable Histopathology Image Classification with Self-organizing Maps: A Granular Computing Perspective

    The automatic analysis of histology images is an open research field where machine learning techniques and neural networks, especially deep architectures, are considered successful tools due to their image-classification abilities. This paper proposes a granular computing methodology for histopathological image classification. It is based on embedding tiles of histopathology images using deep metric learning, with a self-organizing map (SOM) adopted to generate the granular structure in this learned embedding space. The SOM enables an explainable mechanism by visualizing a knowledge space that experts can use to analyze and classify new images. Additionally, it provides confidence in the classification results while highlighting each important image fragment, with the benefit of reducing the number of false negatives. An exemplary case is when an image detail is indicated, with low confidence, as malignant in an image globally classified as benign. Another implemented feature is the proposal of additional labelled image tiles sharing the same characteristics, to specify the context of the output decision. The proposed system was tested on three histopathology image datasets, matching the accuracy of state-of-the-art black-box methods based on deep learning neural networks. Unlike the methodologies proposed so far for the same purpose, this paper introduces a novel explainable method for medical image analysis in which the advantages of the deep neural networks used to build the embedding space for the image tiles are combined with the intrinsic explainability of the granular process obtained from the clustering property of a self-organizing map.
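    A minimal sketch of the granular step, under assumed names (this is not the paper's code): tile embeddings produced by a metric-learning network are clustered by a self-organizing map, here via the MiniSom library, and each SOM unit becomes a granule whose label frequencies act as a confidence for new tiles.

```python
# Hedged sketch: SOM granulation of (stand-in) tile embeddings with MiniSom.
from collections import defaultdict

import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
tile_embeddings = rng.normal(size=(1000, 64))   # stand-in for learned tile embeddings
tile_labels = rng.integers(0, 2, size=1000)     # stand-in for benign/malignant labels

som = MiniSom(x=10, y=10, input_len=64, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(tile_embeddings)
som.train_random(tile_embeddings, num_iteration=5000)

# Each SOM unit becomes a granule: the labels of the tiles mapped to it yield a
# class frequency that can be read as a confidence for new tiles landing there.
granules = defaultdict(list)
for emb, lab in zip(tile_embeddings, tile_labels):
    granules[som.winner(emb)].append(lab)

def classify_tile(embedding):
    """Assign a new tile to its granule and return (majority label, confidence)."""
    labels = granules.get(som.winner(embedding), [])
    if not labels:
        return None, 0.0
    majority = max(set(labels), key=labels.count)
    return majority, labels.count(majority) / len(labels)

print(classify_tile(tile_embeddings[0]))
```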

    Pam16 and Pam18 were repurposed during Trypanosoma brucei evolution to regulate the replication of mitochondrial DNA.

    Protein import and genome replication are essential processes for mitochondrial biogenesis and propagation. The J-domain proteins Pam16 and Pam18 regulate the presequence translocase of the mitochondrial inner membrane. In the protozoan Trypanosoma brucei, their counterparts are TbPam16 and TbPam18, which are essential for the procyclic form (PCF) of the parasite, though not involved in mitochondrial protein import. Here, we show that during evolution the two proteins have been repurposed to regulate the replication of maxicircles within the intricate kDNA network, the most complex mitochondrial genome known. TbPam18 and TbPam16 have inactive J-domains, suggesting a function independent of heat shock proteins; however, their single transmembrane domain is essential for function. Pulldown of TbPam16 identifies a putative client protein, termed MaRF11, whose depletion causes the selective loss of maxicircles, akin to the effects observed for TbPam18 and TbPam16. Moreover, depletion of the mitochondrial proteasome results in increased levels of MaRF11. Thus, we have discovered a protein complex comprising TbPam18, TbPam16, and MaRF11 that controls maxicircle replication. We propose a working model in which the matrix protein MaRF11 functions downstream of the two integral inner membrane proteins TbPam18 and TbPam16, and we suggest that the levels of MaRF11 are controlled by the mitochondrial proteasome.

    Medication-Related Osteonecrosis of the Jaw: A Cross-Sectional Survey among Urologists in Switzerland, Germany, and Austria

    Medication-related osteonecrosis of the jaw (MRONJ) is a potentially preventable adverse side effect of mainly antiresorptive drugs. MRONJ is expected to become a growing clinical problem due to the aging population and the increasing number of patients requiring antiresorptive agents. Knowledge and awareness of MRONJ, together with elimination of oral and dental risk factors before starting antiresorptive therapy (AR), are fundamental to reducing its incidence. In urology, ARs are used primarily in patients suffering from bone metastases due to prostate cancer and to prevent cancer-treatment-induced bone loss (CTIBL) in prostate cancer patients receiving endocrine therapy. This postal survey aimed to evaluate disease-related knowledge and awareness of implementing oral examinations for patients starting AR among Swiss, German, and Austrian urologists. A total of 176 urologists returned the completed questionnaire, yielding a response rate of 11.7%. Of the respondents, 44.9% (n = 79) and 24.4% (n = 43) stated that they give more than five first-time prescriptions per year of denosumab and of intravenous or oral bisphosphonates (BPs), respectively. Only 14.8% (n = 26) of the participating urologists had never encountered MRONJ cases related to BPs. Of the participants, 89.8% (n = 158) had implemented referrals to dentists for oral examination before initiating AR. The mean percentage of correct answers regarding knowledge about MRONJ was 70.9% ± 11.2%. In contrast to previous surveys on MRONJ among physicians, this study showed that the participating urologists were sufficiently informed about MRONJ, as reflected by the high number of participants implementing preventive dental screenings.

    Deep Metric Learning for Transparent Classification of Covid-19 X-Ray Images

    This work proposes an interpretable classifier for automatic Covid-19 classification from chest X-ray images. It is based on a deep learning model, in particular a triplet network, devoted to finding an effective image embedding. Such an embedding is a non-linear projection of the images into a space of reduced dimension, where the homogeneity and separation of the classes, measured by a predefined metric, are improved. A K-Nearest Neighbor classifier is the interpretable model used for the final classification. Results on public datasets show that the proposed methodology can reach results comparable with the state of the art in terms of accuracy, with the advantage of providing interpretability to the classification, a characteristic that can be very useful in the medical domain, e.g. in a decision support system.
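    A minimal sketch of this pipeline, with an assumed toy architecture rather than the authors' network: a small CNN trained with a triplet margin loss produces the embedding, and a K-Nearest Neighbor classifier from scikit-learn performs the final, interpretable classification in that space.

```python
# Hedged sketch: triplet-loss embedding network + KNN classifier (toy shapes/data).
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class EmbeddingNet(nn.Module):
    """Toy CNN that projects a 1-channel X-ray into a low-dimensional embedding."""
    def __init__(self, dim=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )
    def forward(self, x):
        return nn.functional.normalize(self.features(x), dim=1)

net = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# One illustrative training step on random stand-in triplets (anchor, positive, negative).
anchor, positive, negative = (torch.randn(8, 1, 224, 224) for _ in range(3))
loss = triplet_loss(net(anchor), net(positive), net(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The interpretable classifier: KNN operating in the learned embedding space.
train_images, train_labels = torch.randn(100, 1, 224, 224), torch.randint(0, 3, (100,))
with torch.no_grad():
    train_emb = net(train_images).numpy()
knn = KNeighborsClassifier(n_neighbors=5).fit(train_emb, train_labels.numpy())
```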

    Breast Cancer Histologic Grade Identification by Graph Neural Network Embeddings

    Deep neural networks are nowadays the state-of-the-art methodology for general-purpose image classification. As a consequence, such approaches are also employed for histopathology biopsy image classification. This task is usually performed by splitting the image into patches, feeding them to the deep model, and evaluating the outputs of the individual sub-parts. This approach has the main drawback of not considering the global structure of the input image and can prevent the discovery of relevant patterns spanning non-overlapping patches. Departing from this commonly adopted assumption, in this paper we propose to address the problem by representing the input with an embedding derived from a graph built from the tissue regions of the image. This graph representation maintains the image structure and captures the relations among its relevant parts. The effectiveness of this representation is shown for automatic tumor grade identification in breast cancer, using publicly available datasets.
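    A minimal sketch of the idea under assumed design choices (not the authors' implementation): each tissue region becomes a node with a feature vector, adjacency between regions defines the edges, and a small graph convolutional network pools the nodes into one embedding per image, from which the grade is predicted. It uses PyTorch Geometric; the node features and edges below are random stand-ins.

```python
# Hedged sketch: graph of tissue regions -> GCN -> pooled embedding -> grade logits.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool

class GradeGNN(nn.Module):
    def __init__(self, in_dim=16, hidden=32, num_grades=3):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = nn.Linear(hidden, num_grades)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.conv1(x, edge_index))
        x = torch.relu(self.conv2(x, edge_index))
        graph_emb = global_mean_pool(x, batch)   # one embedding per whole image graph
        return self.head(graph_emb)

# Stand-in graph: 5 tissue regions with 16-dim features and a few adjacency edges.
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 3], [1, 0, 2, 1, 4]], dtype=torch.long)
batch = torch.zeros(5, dtype=torch.long)         # all nodes belong to graph 0

model = GradeGNN()
logits = model(x, edge_index, batch)             # shape (1, num_grades)
print(logits.argmax(dim=1))
```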

    Deep Metric Learning for Histopathological Image Classification

    Neural networks have proven effective in multiple classification tasks, with performance similar to human capabilities. Nevertheless, the viability of applying this kind of tool in real cases depends on the possibility of interpreting the provided results, letting the human operator make the final decision based on the information provided. This aspect is even more evident when the field of application concerns people's health, as in biomedical image classification. For the classification of histopathological images, we propose a convolutional neural network that, through metric learning, learns a representation gathering the labeled samples into homogeneous clusters according to their characteristics. Beyond improving classification performance, this representation also provides, for each new test image, a set of previously labeled samples that can be inspected to support the labeling decision. The technique has been tested on the LC25000 dataset, which collects lung and colon histopathological images, and on the Epistroma dataset. The source code is available at https://github.com/Calder10/Epistroma LC25000-Classification.
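    A minimal sketch of the explanation step described above, with hypothetical variable and file names: for a new test embedding, the closest previously labelled tiles are retrieved so they can be inspected alongside the predicted label.

```python
# Hedged sketch: neighbour retrieval in the learned embedding space as decision support.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(500, 32))          # stand-in for learned embeddings
train_labels = rng.integers(0, 2, size=500)            # e.g. 0 = normal, 1 = tumor
train_paths = [f"tile_{i}.png" for i in range(500)]    # hypothetical tile identifiers

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(train_embeddings)

def explain(test_embedding):
    """Return the predicted label plus the labelled neighbours supporting it."""
    dist, idx = index.kneighbors(test_embedding.reshape(1, -1))
    neighbour_labels = train_labels[idx[0]]
    prediction = int(np.bincount(neighbour_labels).argmax())
    evidence = [(train_paths[i], int(train_labels[i]), float(d))
                for i, d in zip(idx[0], dist[0])]
    return prediction, evidence

pred, evidence = explain(rng.normal(size=32))
print(pred, evidence)
```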

    Leveraging Deep Embeddings for Explainable Medical Image Analysis

    Machine learning techniques applied to the medical image analysis domain provide valuable tools that improve the diagnostic process. Among the proposed machine learning methodologies, deep neural networks are the state of the art in medical applications. However, they still have the disadvantage of being black-box methods, whereas the medical field requires approaches whose decisions rest on an explainable mechanism that provides meaningful suggestions to physicians. In this chapter, we propose a general paradigm for the explainable classification of medical imaging data. The paradigm adopts deep metric learning to provide an embedding that enables the representation of images in two or three dimensions. Metric learning plus dimensionality reduction to 2-3D introduces a first level of explainability: the training images closest to a test image are shown, allowing for neighbour identification. A subsequent level of explainability is added by an interpretable classifier. The chapter also presents four use cases demonstrating the application of the proposed paradigm, each related to a specific kind of image dataset, such as histopathological or X-ray images.
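    A minimal sketch of the two-level paradigm under stated assumptions: UMAP is used here as the 2D reducer and a KNN as the interpretable classifier, although the chapter's exact choices may differ. The deep embeddings and labels are random stand-ins.

```python
# Hedged sketch: deep embeddings -> 2D projection (explainability level 1)
#                -> interpretable KNN classifier (explainability level 2).
import numpy as np
import umap                                   # from the umap-learn package
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
deep_embeddings = rng.normal(size=(300, 64))  # stand-in for metric-learning embeddings
labels = rng.integers(0, 2, size=300)

reducer = umap.UMAP(n_components=2, random_state=0)
points_2d = reducer.fit_transform(deep_embeddings)      # projection that can be plotted

knn = KNeighborsClassifier(n_neighbors=7).fit(points_2d, labels)

test_2d = reducer.transform(rng.normal(size=(1, 64)))   # project a new (stand-in) image
print("predicted class:", knn.predict(test_2d)[0])
# points_2d and `labels` can be scatter-plotted with the test point in the same plane,
# so the physician sees which labelled cases drive the prediction.
```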