
    Recursive Training of 2D-3D Convolutional Networks for Neuronal Boundary Detection

    Efforts to automate the reconstruction of neural circuits from 3D electron microscopic (EM) brain images are critical for the field of connectomics. An important computation for reconstruction is the detection of neuronal boundaries. Images acquired by serial section EM, a leading 3D EM technique, are highly anisotropic, with inferior quality along the third dimension. For such images, the 2D max-pooling convolutional network has set the standard for performance at boundary detection. Here we achieve a substantial gain in accuracy through three innovations. First, following the trend towards deeper networks for object recognition, we use a much deeper network than previously employed for boundary detection. Second, we incorporate 3D as well as 2D filters, to enable computations that use 3D context. Finally, we adopt a recursively trained architecture in which a first network generates a preliminary boundary map that is provided as input, along with the original image, to a second network that generates a final boundary map. Backpropagation training is accelerated by ZNN, a new implementation of 3D convolutional networks that uses multicore CPU parallelism for speed. Our hybrid 2D-3D architecture could be more generally applicable to other types of anisotropic 3D images, including video, and our recursive framework to any image labeling problem.
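    The two-stage recursive scheme described above can be sketched as follows. Here `boundary_net` is a hypothetical stand-in for a trained 2D-3D ConvNet (the actual ZNN-trained networks are not reproduced); the point is only the dataflow: the second network receives the original volume together with the first network's preliminary boundary map.

```python
import numpy as np

def boundary_net(volume, extra_channel=None):
    """Stand-in for a trained 2D-3D ConvNet: returns a boundary
    probability map with the same shape as the input volume.
    (Hypothetical placeholder -- a real network would apply learned
    2D and 3D convolutional filters.)"""
    x = volume if extra_channel is None else volume + extra_channel
    return 1.0 / (1.0 + np.exp(-x))  # sigmoid output in [0, 1]

def recursive_boundary_detection(volume):
    # Stage 1: preliminary boundary map from the raw EM volume.
    preliminary = boundary_net(volume)
    # Stage 2: the second network sees the original image together
    # with the stage-1 map and produces the final boundary map.
    final = boundary_net(volume, extra_channel=preliminary)
    return final

em = np.random.randn(8, 64, 64)  # toy anisotropic volume (z, y, x)
out = recursive_boundary_detection(em)
```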

    Testing Convolutional Neural Networks for finding strong gravitational lenses in KiDS

    Convolutional Neural Networks (ConvNets) are one of the most promising methods for identifying strong gravitational lens candidates in survey data. We present two ConvNet lens-finders which we have trained with a dataset composed of real galaxies from the Kilo Degree Survey (KiDS) and simulated lensed sources. One ConvNet is trained with single r-band galaxy images, hence basing the classification mostly on morphology, while the other is trained on g-r-i composite images, relying mostly on colours as well as morphology. We have tested the ConvNet lens-finders on a sample of 21789 Luminous Red Galaxies (LRGs) selected from KiDS, and we have analyzed and compared the results with those of our previous ConvNet lens-finder on the same sample. The new lens-finders achieve a higher accuracy and completeness in identifying gravitational lens candidates, especially the single-band ConvNet. Our analysis indicates that this is mainly due to improved simulations of the lensed sources. In particular, the single-band ConvNet can select a sample of lens candidates with ~40% purity, retrieving 3 out of 4 of the confirmed gravitational lenses in the LRG sample. With this particular setup and limited human intervention, it will be possible to retrieve, in future surveys such as Euclid, a sample of lenses exceeding in size the total number of currently known gravitational lenses. Comment: 16 pages, 10 figures. Accepted for publication in MNRAS
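    Purity and completeness, the two figures of merit quoted above, can be computed as follows. The object IDs and counts in the example are illustrative toy values, not taken from the KiDS sample:

```python
def purity_and_completeness(candidates, confirmed):
    """Purity: fraction of selected candidates that are true lenses.
    Completeness: fraction of true lenses recovered by the selection."""
    candidates, confirmed = set(candidates), set(confirmed)
    true_positives = len(candidates & confirmed)
    purity = true_positives / len(candidates) if candidates else 0.0
    completeness = true_positives / len(confirmed) if confirmed else 0.0
    return purity, completeness

# Toy example: 4 confirmed lenses, a candidate list recovering 3 of them
# alongside 5 contaminants.
confirmed = {"lens1", "lens2", "lens3", "lens4"}
candidates = {"lens1", "lens2", "lens3", "c1", "c2", "c3", "c4", "c5"}
p, c = purity_and_completeness(candidates, confirmed)
# p = 3/8 = 0.375, c = 3/4 = 0.75
```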

    Seeing into Darkness: Scotopic Visual Recognition

    Images are formed by counting how many photons traveling from a given set of directions hit an image sensor during a given time interval. When photons are few and far between, the concept of 'image' breaks down and it is best to consider directly the flow of photons. Computer vision in this regime, which we call 'scotopic', is radically different from the classical image-based paradigm in that visual computations (classification, control, search) have to take place while the stream of photons is captured, and decisions may be taken as soon as enough information is available. The scotopic regime is important for biomedical imaging, security, astronomy and many other fields. Here we develop a framework that allows a machine to classify objects with as few photons as possible, while maintaining the error rate below an acceptable threshold. A dynamic and asymptotically optimal speed-accuracy tradeoff is a key feature of this framework. We propose and study an algorithm to optimize the tradeoff of a convolutional network directly from low-light images, and evaluate it on simulated images from standard datasets. Surprisingly, scotopic systems can achieve classification performance comparable to traditional vision systems while using less than 0.1% of the photons in a conventional image. In addition, we demonstrate that our algorithms work even when the illuminance of the environment is unknown and varying. Last, we outline a spiking neural network coupled with photon-counting sensors as a power-efficient hardware realization of scotopic algorithms. Comment: 23 pages, 6 figures
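    The "decide as soon as enough information is available" idea can be illustrated with a classical fixed-threshold sequential probability ratio test on Poisson photon counts. This is only a minimal sketch of the speed-accuracy tradeoff; the paper itself learns the tradeoff for a ConvNet rather than using this simple two-hypothesis test, and the rates and threshold below are arbitrary assumptions:

```python
import math

def sprt_classify(photon_stream, rate_a, rate_b, threshold=3.0):
    """Accumulate the log-likelihood ratio between two Poisson
    photon-arrival hypotheses (mean rate_a vs. rate_b per time bin)
    and stop as soon as the evidence crosses +/- threshold.
    Returns the decision and the number of bins consumed."""
    llr = 0.0
    for t, count in enumerate(photon_stream, start=1):
        # Poisson log-likelihood ratio contribution of one time bin.
        llr += count * math.log(rate_a / rate_b) - (rate_a - rate_b)
        if llr >= threshold:
            return "class_a", t
        if llr <= -threshold:
            return "class_b", t
    # Stream exhausted: fall back to the current best guess.
    return ("class_a" if llr >= 0 else "class_b"), len(photon_stream)

# Bright bins (counts near rate_a=3) trigger an early "class_a" call.
decision, bins_used = sprt_classify([3, 3, 3, 3], rate_a=3.0, rate_b=1.0)
```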

    Convolutional Neural Network based Malignancy Detection of Pulmonary Nodule on Computer Tomography

    Without performing a biopsy, which could cause physical damage to nerves and vessels, Computerized Tomography (CT) is widely used to diagnose lung cancer, owing to its high sensitivity for pulmonary nodule detection. However, distinguishing malignant from benign pulmonary nodules is still not an easy task. Because CT scans mostly have relatively low resolution, it is not easy for radiologists to read fine details in the scan images. In the past few years, the continuing rapid growth of CT scan analysis systems has generated a pressing need for advanced computational tools that extract useful features to assist the radiologist in the reading process. Computer-aided detection (CAD) systems have been developed to reduce observational oversights by identifying the suspicious features that a radiologist looks for during case review. Most previous CAD systems rely on low-level non-texture imaging features such as the intensity, shape, size or volume of the pulmonary nodules. However, pulmonary nodules vary widely in shape and size, and benign and malignant patterns can look highly similar, so relying on non-texture imaging features makes diagnosis of the nodule type difficult. To overcome this problem, more recent CAD systems have adopted supervised or unsupervised learning schemes to translate the content of the nodules into discriminative features. Such learned features are high-level imaging features highly correlated with shape and texture. Convolutional neural networks (ConvNets), supervised methods related to deep learning, have improved rapidly in recent years. Due to their great success in computer vision tasks, they are also expected to be helpful in medical imaging. In this thesis, a CAD system based on a deep convolutional neural network (ConvNet) is designed and evaluated for classifying malignant pulmonary nodules on computerized tomography.
The proposed ConvNet, which is the core component of the proposed CAD system, is trained on the LUNGx challenge database to classify benign and malignant pulmonary nodules on CT. The architecture of the proposed ConvNet consists of 3 convolutional layers with max-pooling operations and rectified linear unit (ReLU) activations, followed by 2 fully connected (dense) layers, and the architecture is carefully tailored for pulmonary nodule classification by considering the problems of over-fitting, receptive field, and imbalanced data. The proposed CAD system achieved a sensitivity of 0.896 and a specificity of 0.878 at the optimal cut-off point of the receiver operating characteristic (ROC) curve, with an area under the curve (AUC) of 0.920. The testing results showed that the proposed ConvNet achieves a 10% higher AUC than the state-of-the-art work based on an unsupervised method. By integrating the proposed highly accurate ConvNet, the proposed CAD system also outperformed other state-of-the-art ConvNets explicitly designed for pulmonary nodule detection or classification.
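The layer arithmetic of such a 3-conv / 2-dense design can be checked with a small shape calculator. The patch size, kernel sizes, and channel count below are illustrative assumptions, not the thesis's actual hyperparameters:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

# Hypothetical 64x64 nodule patch through 3 conv+ReLU+maxpool stages.
size = 64
for _ in range(3):
    size = conv_out(size, kernel=3, pad=1)  # 'same' convolution
    size = pool_out(size)                   # 2x2 pooling halves each side

# With e.g. 64 feature maps, the flattened vector feeding the two
# fully connected layers has 64 * size * size elements.
flat = 64 * size * size
```

Each pooling stage halves the spatial extent (64 -> 32 -> 16 -> 8), which is one way such a design keeps the parameter count of the first dense layer manageable and limits over-fitting on a small medical dataset.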