
    RADNET: Radiologist Level Accuracy using Deep Learning for HEMORRHAGE detection in CT Scans

    We describe a deep learning approach for automated brain hemorrhage detection from computed tomography (CT) scans. Our model emulates the procedure radiologists follow to analyse a 3D CT scan in the real world. Like radiologists, the model sifts through 2D cross-sectional slices while paying close attention to potential hemorrhagic regions. Further, the model uses 3D context from neighboring slices to improve the prediction at each slice and then aggregates the slice-level predictions to provide a diagnosis at the CT level. We refer to our proposed approach as Recurrent Attention DenseNet (RADnet), as it employs the original DenseNet architecture augmented with attention components for slice-level predictions and a recurrent neural network layer for incorporating 3D context. The real-world performance of RADnet has been benchmarked against independent analysis performed by three senior radiologists on 77 brain CTs. RADnet demonstrates 81.82% hemorrhage prediction accuracy at the CT level, which is comparable to the radiologists. Further, RADnet achieves higher recall than two of the three radiologists, which is remarkable. Comment: Accepted at the IEEE Symposium on Biomedical Imaging (ISBI) 2018 as a conference paper.
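
    A minimal sketch of the kind of architecture described above is given below in PyTorch. It is not the authors' implementation: the class name RADNetSketch, the use of torchvision's densenet121 backbone, the hidden size, and the max aggregation of slice scores into a CT-level score are illustrative assumptions.

    # Illustrative sketch (not the authors' code): a DenseNet backbone extracts
    # per-slice features, a soft spatial attention map pools them, a bidirectional
    # LSTM adds 3D context from neighbouring slices, and per-slice scores are
    # aggregated into a CT-level score.
    import torch
    import torch.nn as nn
    from torchvision.models import densenet121

    class RADNetSketch(nn.Module):
        def __init__(self, hidden_size=256):       # hidden_size is an assumed value
            super().__init__()
            self.features = densenet121(weights=None).features   # 2D per-slice features
            self.attention = nn.Conv2d(1024, 1, kernel_size=1)   # spatial attention logits
            self.rnn = nn.LSTM(1024, hidden_size, batch_first=True, bidirectional=True)
            self.slice_head = nn.Linear(2 * hidden_size, 1)

        def forward(self, volume):                  # volume: (batch, slices, 3, H, W)
            b, s, c, h, w = volume.shape
            feats = self.features(volume.reshape(b * s, c, h, w))        # (B*S, 1024, h', w')
            attn = torch.softmax(self.attention(feats).flatten(2), -1)   # (B*S, 1, h'*w')
            pooled = torch.bmm(attn, feats.flatten(2).transpose(1, 2)).squeeze(1)
            seq, _ = self.rnn(pooled.reshape(b, s, -1))                   # (B, S, 2*hidden)
            slice_logits = self.slice_head(seq).squeeze(-1)               # per-slice scores
            ct_logit = slice_logits.max(dim=1).values                     # CT-level score
            return slice_logits, ct_logit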

    Synaptic Partner Assignment Using Attentional Voxel Association Networks

    Connectomics aims to recover a complete set of synaptic connections within a dataset imaged by volume electron microscopy. Many systems have been proposed for locating synapses, and recent research has addressed identifying the synaptic partners that communicate at a synaptic cleft. We re-frame the problem of identifying synaptic partners as directly generating the masks of the synaptic partners from a given cleft, and we train a convolutional network to perform this task. The network takes the local image context and a binary mask representing a single cleft as input. It is trained to produce two binary output masks: one that labels the voxels of the presynaptic partner within the input image, and a similar labeling for the postsynaptic partner. The cleft mask acts as an attentional gating signal for the network. We find that an implementation of this approach performs well on a dataset of mouse somatosensory cortex, and we evaluate it as part of a combined system that predicts both clefts and connections.
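
    The input/output contract described above is simple enough to sketch. The following is an illustrative PyTorch model, not the published network: the layer widths and the plain convolutional stack are assumptions, but it shows the cleft mask entering as a second input channel (the attentional gating signal) and two partner masks coming out.

    # Illustrative sketch (not the published code): local EM context plus a binary
    # cleft mask go in as two channels; two voxel masks (presynaptic and
    # postsynaptic partner) come out.
    import torch
    import torch.nn as nn

    class PartnerMaskNet(nn.Module):
        def __init__(self, width=32):               # width is an assumed value
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(2, width, 3, padding=1),  # ch 0: image, ch 1: cleft mask
                nn.ReLU(inplace=True),
                nn.Conv3d(width, width, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv3d(width, 2, 1),             # ch 0: presynaptic, ch 1: postsynaptic
            )

        def forward(self, image, cleft_mask):       # both shaped (B, 1, D, H, W)
            logits = self.net(torch.cat([image, cleft_mask], dim=1))
            return torch.sigmoid(logits)            # two independent binary partner masks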

    Synaptic Cleft Segmentation in Non-Isotropic Volume Electron Microscopy of the Complete Drosophila Brain

    Neural circuit reconstruction at single-synapse resolution is increasingly recognized as crucially important for deciphering the function of biological nervous systems. Volume electron microscopy in serial transmission or scanning mode has been demonstrated to provide the resolution necessary to segment or trace all neurites and to annotate all synaptic connections. Automatic annotation of synaptic connections has been done successfully on near-isotropic electron microscopy of vertebrate model organisms. Results on non-isotropic data in insect models, however, are not yet on par with human annotation. We designed a new 3D U-Net architecture to optimally represent isotropic fields of view in non-isotropic data. We used regression on a signed distance transform of manually annotated synaptic clefts from the CREMI challenge dataset to train this model and observed significant improvement over the state of the art. We developed open-source software for optimized parallel prediction on very large volumetric datasets and applied our model to predict synaptic clefts in a 50-teravoxel dataset of the complete Drosophila brain. Our model generalizes well to areas far away from where training data was available.
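
    The training target described above, regression on a signed distance transform of the cleft annotations, can be sketched as follows. This is not the authors' pipeline: the clipping band, the normalisation, the assumed anisotropic voxel spacing, and the plain MSE training step are illustrative assumptions, and the model can be any 3D network such as a 3D U-Net.

    # Illustrative sketch (not the authors' pipeline): build a signed distance
    # transform (SDT) of the binary cleft labels, positive inside and negative
    # outside, clipped to a band and normalised, then regress on it with MSE.
    import numpy as np
    import torch
    import torch.nn.functional as F
    from scipy.ndimage import distance_transform_edt

    def signed_distance_target(cleft_mask, clip=100.0, voxel_size=(40.0, 4.0, 4.0)):
        """cleft_mask: binary (z, y, x) array; clip and voxel_size (nm) are assumed values."""
        inside = distance_transform_edt(cleft_mask, sampling=voxel_size)
        outside = distance_transform_edt(cleft_mask == 0, sampling=voxel_size)
        sdt = inside - outside                      # > 0 inside clefts, < 0 outside
        return np.clip(sdt, -clip, clip) / clip     # normalise to [-1, 1]

    def train_step(model, optimizer, raw, cleft_mask):
        """One MSE regression step; `model` is any 3D network, e.g. a 3D U-Net."""
        target = torch.from_numpy(signed_distance_target(cleft_mask)).float()[None, None]
        pred = model(torch.from_numpy(raw).float()[None, None])
        loss = F.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()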

    The seven donkeys: Super A.I. performance in animal categorization by an immature Human brain

    This paper reports image categorization performance exhibited by an immature human brain that beats current state-of-the-art convolutional networks with regard to the training procedure (limited size of the training set and limited training budget). This observation highlights the limits of the current A.I. trend of backpropagation-trained neural networks dedicated to computer vision, as well as its differences from natural neural networks. Based on the identified limitations, I then introduce a new image categorization challenge (the seven donkey challenge).