
    Fast, Simple Calcium Imaging Segmentation with Fully Convolutional Networks

    Full text link
    Calcium imaging is a technique for observing neuron activity as a series of images showing indicator fluorescence over time. Manually segmenting neurons is time-consuming, leading to research on automated calcium imaging segmentation (ACIS). We evaluated several deep learning models for ACIS on the Neurofinder competition datasets and report our best model: U-Net2DS, a fully convolutional network that operates on 2D mean summary images. U-Net2DS requires minimal domain-specific pre/post-processing and parameter adjustment, and predictions are made on full 512×512 images at ≈9K images per minute. It ranks third in the Neurofinder competition (F1 = 0.569) and is the best model to rely exclusively on deep learning. We also demonstrate useful segmentations on data from outside the competition. The model's simplicity, speed, and quality of results make it a practical choice for ACIS and a strong baseline for more complex models in the future. Comment: Accepted to 3rd Workshop on Deep Learning in Medical Image Analysis (http://cs.adelaide.edu.au/~dlmia3/)
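    A minimal sketch (not the authors' code) of the 2D mean summary image that a network like U-Net2DS segments, assuming the calcium-imaging movie is a (time, height, width) NumPy array; the array names and file path are illustrative.

```python
import numpy as np

def mean_summary_image(movie: np.ndarray) -> np.ndarray:
    """Collapse a calcium-imaging movie of shape (T, H, W) to one 2D summary image."""
    summary = movie.mean(axis=0)                    # average fluorescence over time
    lo, hi = summary.min(), summary.max()
    return (summary - lo) / (hi - lo + 1e-8)        # rescale to [0, 1]

# movie = np.load("series.npy")                     # hypothetical (T, 512, 512) stack
# x = mean_summary_image(movie)[None, None]         # (1, 1, 512, 512) input for a 2D FCN
```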

    3DQ: Compact Quantized Neural Networks for Volumetric Whole Brain Segmentation

    Full text link
    Model architectures have been dramatically increasing in size, improving performance at the cost of higher resource requirements. In this paper we propose 3DQ, a ternary quantization method, applied for the first time to 3D Fully Convolutional Neural Networks (F-CNNs), enabling 16x model compression while maintaining performance on par with full-precision models. We extensively evaluate 3DQ on two datasets for the challenging task of whole brain segmentation. Additionally, we showcase our method's ability to generalize to two common 3D architectures, namely 3D U-Net and V-Net. Outperforming a variety of baselines, the proposed method is capable of compressing large 3D models to a few MBytes, alleviating the storage needs in space-critical applications. Comment: Accepted to MICCAI 201
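    An illustrative ternary weight quantization in the spirit described above (a generic threshold-and-scale scheme, not necessarily the exact 3DQ formulation):

```python
import numpy as np

def ternarize(w: np.ndarray, delta_ratio: float = 0.7):
    """Map full-precision weights to the three values {-alpha, 0, +alpha}."""
    delta = delta_ratio * np.abs(w).mean()                   # threshold for zeroing small weights
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0    # per-tensor scaling factor
    return alpha * np.sign(w) * mask                         # ternary weight tensor

# Storing 2-bit codes plus one float scale per tensor, instead of 32-bit floats,
# is what makes roughly 16x compression possible.
```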

    Volumetric Attention for 3D Medical Image Segmentation and Detection

    Full text link
    A volumetric attention (VA) module for 3D medical image segmentation and detection is proposed. Inspired by recent advances in video processing, VA enables 2.5D networks to leverage context information along the z direction and allows the use of pretrained 2D detection models when training data is limited, as is often the case for medical applications. Its integration into Mask R-CNN is shown to enable state-of-the-art performance on the Liver Tumor Segmentation (LiTS) Challenge, outperforming the previous challenge winner by 3.9 points and achieving top performance on the LiTS leaderboard at the time of paper submission. Detection experiments on the DeepLesion dataset also show that adding VA to existing object detectors yields a sensitivity of 69.1 at 0.5 false positives per image, outperforming the best published results by 6.6 points. Comment: Accepted by MICCAI 201
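    A toy sketch of self-attention along the z (slice) axis over pooled per-slice descriptors, to illustrate how a 2.5D network can mix context across slices; this is an illustration only, not the published VA architecture, and the projection matrices would be learned in practice.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def z_self_attention(feats, w_q, w_k, w_v):
    """feats: (D, C), one descriptor per slice; w_q/w_k/w_v: (C, C) projections."""
    q, k, v = feats @ w_q, feats @ w_k, feats @ w_v
    att = softmax(q @ k.T / np.sqrt(feats.shape[1]))   # (D, D) slice-to-slice weights
    return att @ v                                     # context-mixed slice descriptors

# D, C = 16, 64
# feats = np.random.randn(D, C)
# out = z_self_attention(feats, *[0.1 * np.random.randn(C, C) for _ in range(3)])
```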

    Tversky loss function for image segmentation using 3D fully convolutional deep networks

    Full text link
    Fully convolutional deep neural networks have shown excellent potential for fast and accurate image segmentation. One of the main challenges in training these networks is data imbalance, which is particularly problematic in medical imaging applications such as lesion segmentation, where the number of lesion voxels is often much lower than the number of non-lesion voxels. Training with unbalanced data can lead to predictions that are severely biased towards high precision but low recall (sensitivity), which is undesirable, especially in medical applications where false negatives are much less tolerable than false positives. Several methods have been proposed to deal with this problem, including balanced sampling, two-step training, sample re-weighting, and similarity loss functions. In this paper, we propose a generalized loss function based on the Tversky index to address the issue of data imbalance and achieve a much better trade-off between precision and recall when training 3D fully convolutional deep neural networks. Experimental results in multiple sclerosis lesion segmentation on magnetic resonance images show improved F2 score, Dice coefficient, and area under the precision-recall curve on test data. Based on these results, we suggest the Tversky loss function as a generalized framework for effectively training deep neural networks.
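    A soft Tversky loss along the lines described in the abstract (a sketch; the array layout and default weights are illustrative). With alpha = beta = 0.5 it reduces to the Dice loss, and raising beta penalises false negatives more, trading precision for recall:

```python
import numpy as np

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-7):
    """probs: predicted foreground probabilities; target: binary ground truth."""
    p, g = probs.ravel(), target.ravel()
    tp = (p * g).sum()                 # soft true positives
    fp = (p * (1 - g)).sum()           # soft false positives
    fn = ((1 - p) * g).sum()           # soft false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index
```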

    Automatic Brain Tumor Segmentation using Convolutional Neural Networks with Test-Time Augmentation

    Get PDF
    Automatic brain tumor segmentation plays an important role in the diagnosis, surgical planning and treatment assessment of brain tumors. Deep convolutional neural networks (CNNs) have been widely used for this task. Due to the relatively small data sets available for training, data augmentation at training time has been commonly used to improve the performance of CNNs. Recent works also demonstrated the usefulness of augmentation at test time, in addition to training time, for achieving more robust predictions. We investigate how test-time augmentation can improve CNNs' performance for brain tumor segmentation. We used different underpinning network structures and augmented the images by 3D rotation, flipping, scaling and adding random noise at both training and test time. Experiments with the BraTS 2018 training and validation sets show that test-time augmentation helps to improve brain tumor segmentation accuracy and provides uncertainty estimates for the segmentation results. Comment: 12 pages, 3 figures, MICCAI BrainLes 201
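    A minimal sketch of test-time augmentation for a segmentation model: apply invertible transforms, predict, map each prediction back to the original frame, then aggregate. `model` is any callable returning per-voxel probabilities (an assumed name, not the paper's API):

```python
import numpy as np

def predict_with_tta(model, volume, flip_axes=(0, 1, 2)):
    """Average predictions over flipped copies; the spread doubles as uncertainty."""
    preds = [model(volume)]                           # plain forward pass
    for ax in flip_axes:
        pred = model(np.flip(volume, axis=ax))        # predict on the flipped volume
        preds.append(np.flip(pred, axis=ax))          # undo the flip on the output
    stacked = np.stack(preds)
    return stacked.mean(axis=0), stacked.var(axis=0)  # mean prediction, disagreement
```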

    Early growth performances of various seed sources of black (Prunus serotina Ehrh.) and wild cherry (Prunus avium L.) seedlings on low and high elevation sites in the western Black Sea Region of Turkey

    Get PDF
    The growth performances of one-year-old seedlings of various black cherry (BC) and wild cherry (WC) seed sources (SSs) planted on low elevation sites (LES) and high elevation sites (HES) in the western Black Sea Region (BSR) of Turkey were assessed one and five years after planting (YAP). Significant between- and within-species variations were found in seedling growth. On a species basis, WC was superior to BC in seedling groundline diameter and height growth on the LES at one and five YAP, whereas no substantial survival or growth differences were found between the species on the HES at five YAP. Generally, seedlings showed greater average survival on the LES compared with those on the HES. Local WC SSs (Tefen, Yayla and Dirgine) demonstrated better seedling survival and growth on the LES than the other SSs. Unlike the LES results, a collection of BC (Michigan 1 and Ukraine) and WC SSs (Dirgine, Germany, and Tefen) displayed the best seedling growth on the HES over five years. The HES seedlings frequently experienced dieback and forking due to heavy snowfall and wildlife browsing. Selection of the local WC SSs was vital for the LES. However, BC SSs may have potential for planting on the HES, with their harsher environmental conditions. Keywords: Black cherry, provenance test, seedling growth and survival, wild cherry

    Uncertainty quantification in medical image segmentation with normalizing flows

    Full text link
    Medical image segmentation is inherently an ambiguous task due to factors such as partial volumes and variations in anatomical definitions. While in most cases the segmentation uncertainty lies around the borders of structures of interest, there can also be considerable inter-rater differences. The class of conditional variational autoencoders (cVAE) offers a principled approach to inferring distributions over plausible segmentations that are conditioned on input images. Segmentation uncertainty estimated from samples of such distributions can be more informative than pixel-level probability scores. In this work, we propose a novel conditional generative model based on a conditional normalizing flow (cFlow). The basic idea is to increase the expressivity of the cVAE by introducing a cFlow transformation step after the encoder. This yields improved approximations of the latent posterior distribution, allowing the model to capture richer segmentation variations. We show that the quality and diversity of samples obtained from our conditional generative model are thereby enhanced. The performance of our model, which we call cFlow Net, is evaluated on two medical imaging datasets, demonstrating substantial improvements in both qualitative and quantitative measures compared to a recent cVAE-based model. Comment: 12 pages. Accepted to be presented at the 11th International Workshop on Machine Learning in Medical Imaging. Source code will be updated at https://github.com/raghavian/cFlo
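    An illustrative planar normalizing-flow step applied to latent samples, showing the kind of transformation a cFlow-style model can insert after the encoder; the parameters (u, w, b) are learned in practice and this is not the authors' implementation:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar flow step: z' = z + u * tanh(w.z + b), with log|det Jacobian|."""
    a = np.tanh(z @ w + b)                        # (batch,)
    z_new = z + np.outer(a, u)                    # transformed latent samples
    psi = np.outer(1.0 - a ** 2, w)               # derivative of tanh w.r.t. z
    log_det = np.log(np.abs(1.0 + psi @ u) + 1e-8)
    return z_new, log_det

# z = np.random.randn(8, 16)                      # 8 samples from the approximate posterior
# z_k, ld = planar_flow(z, 0.1 * np.random.randn(16), 0.1 * np.random.randn(16), 0.0)
```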

    Learning Shape Priors for Robust Cardiac MR Segmentation from Multi-view Images

    Get PDF
    Cardiac MR image segmentation is essential for the morphological and functional analysis of the heart. Inspired by how experienced clinicians assess cardiac morphology and function across multiple standard views (i.e. long- and short-axis views), we propose a novel approach which learns anatomical shape priors across different 2D standard views and leverages these priors to segment the left ventricular (LV) myocardium from short-axis MR image stacks. The proposed segmentation method has the advantage of being a 2D network while at the same time incorporating spatial context from multiple, complementary views that span a 3D space. Our method achieves accurate and robust segmentation of the myocardium across different short-axis slices (from apex to base), outperforming baseline models (e.g. 2D U-Net, 3D U-Net) while achieving higher data efficiency. Compared to the 2D U-Net, the proposed method reduces the mean Hausdorff distance (mm) from 3.24 to 2.49 on the apical slices, from 2.34 to 2.09 on the middle slices and from 3.62 to 2.76 on the basal slices of the test set, when only 10% of the training data was used.

    Recurrent Neural Networks for Aortic Image Sequence Segmentation with Sparse Annotations

    Get PDF
    Segmentation of image sequences is an important task in medical image analysis, which enables clinicians to assess the anatomy and function of moving organs. However, direct application of a segmentation algorithm to each time frame of a sequence may ignore the temporal continuity inherent in the sequence. In this work, we propose an image sequence segmentation algorithm that combines a fully convolutional network with a recurrent neural network, incorporating both spatial and temporal information into the segmentation task. A key challenge in training this network is that the available manual annotations are temporally sparse, which precludes straightforward end-to-end training. We address this challenge by performing non-rigid label propagation on the annotations and introducing an exponentially weighted loss function for training. Experiments on aortic MR image sequences demonstrate that the proposed method significantly improves both the accuracy and temporal smoothness of segmentation, compared to a baseline method that utilises spatial information only. It achieves an average Dice metric of 0.960 for the ascending aorta and 0.953 for the descending aorta.
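    A sketch of an exponentially weighted sequence loss, where frames whose labels were propagated from a distant annotated frame contribute less to training. This is one plausible reading of the idea, not necessarily the paper's exact formulation; the decay rate and the per-frame loss values are illustrative:

```python
import numpy as np

def weighted_sequence_loss(frame_losses, distances, decay=0.1):
    """frame_losses: per-frame losses; distances: frames since the nearest annotation."""
    weights = np.exp(-decay * np.asarray(distances, dtype=float))  # exponential down-weighting
    weights /= weights.sum()                                       # normalise over the sequence
    return float((weights * np.asarray(frame_losses)).sum())

# loss = weighted_sequence_loss([0.8, 0.6, 0.9], distances=[0, 3, 7])
```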

    Surface agnostic metrics for cortical volume segmentation and regression

    Get PDF
    The cerebral cortex performs higher-order brain functions and is thus implicated in a range of cognitive disorders. Current analysis of cortical variation is typically performed by fitting surface mesh models to the inner and outer cortical boundaries and investigating metrics such as surface area and cortical curvature or thickness. These, however, take a long time to run and are sensitive to motion and to image and surface resolution, which can prohibit their use in clinical settings. In this paper, we instead propose a machine learning solution, training a novel architecture to predict cortical thickness and curvature metrics from T2-weighted MR images while additionally returning metrics of prediction uncertainty. Our proposed model is tested on a clinical cohort (Down Syndrome) for which surface-based modelling often fails. Results suggest that deep convolutional neural networks are a viable option for predicting cortical metrics across a range of brain development stages and pathologies.
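    One common way for a regression network to return prediction uncertainty is to predict a mean and a log-variance per output and train with a Gaussian negative log-likelihood; this is illustrative only, and the paper's exact uncertainty mechanism may differ:

```python
import numpy as np

def gaussian_nll(y_true, mu, log_var):
    """Heteroscedastic regression loss; log_var is the predicted log-variance."""
    precision = np.exp(-log_var)
    return float(np.mean(0.5 * precision * (y_true - mu) ** 2 + 0.5 * log_var))

# y, mu, lv = np.array([2.8]), np.array([2.6]), np.array([-1.0])  # e.g. thickness in mm
# loss = gaussian_nll(y, mu, lv)
```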