
    A Survey on Deep Learning-based Architectures for Semantic Segmentation on 2D images

    Semantic segmentation is the pixel-wise labelling of an image. Since the problem is defined at the pixel level, it is not sufficient to determine image-level class labels; they must also be localised at the original image resolution. Boosted by the extraordinary ability of convolutional neural networks (CNNs) to create semantic, high-level and hierarchical image features, a large number of deep learning-based 2D semantic segmentation approaches have been proposed within the last decade. In this survey, we focus on the recent scientific developments in semantic segmentation, specifically on deep learning-based methods using 2D images. We begin with an analysis of the public image sets and leaderboards for 2D semantic segmentation, with an overview of the techniques employed in performance evaluation. In examining the evolution of the field, we chronologically categorise the approaches into three main periods: the pre- and early deep learning era, the fully convolutional era, and the post-FCN era. We technically analyse the solutions put forward for the fundamental problems of the field, such as fine-grained localisation and scale invariance. Before drawing our conclusions, we present a table of methods from all mentioned eras, with a brief summary of each approach explaining its contribution to the field. We conclude by discussing the current challenges of the field and to what extent they have been solved. Comment: Updated with new studies
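The performance-evaluation techniques the survey reviews typically centre on mean intersection-over-union (mIoU), averaged over classes. A minimal sketch in plain Python (function name and toy data are illustrative, not from the survey):

```python
def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union for pixel-wise label lists.

    pred/target: flat lists of per-pixel class ids.
    Classes absent from both prediction and ground truth are skipped.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:  # only score classes that appear somewhere
            ious.append(inter / union)
    return sum(ious) / len(ious)

# toy 2x4 image, flattened: class 0 = background, class 1 = object
print(mean_iou([0, 0, 1, 1, 0, 1, 1, 0],
               [0, 0, 1, 1, 0, 0, 1, 1], num_classes=2))  # 0.6
```

Because each class contributes equally regardless of pixel count, mIoU penalises models that ignore small or rare classes, which plain pixel accuracy does not.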

    Recurrent Segmentation for Variable Computational Budgets

    State-of-the-art systems for semantic image segmentation use feed-forward pipelines with fixed computational costs. Building an image segmentation system that works across a range of computational budgets is challenging and time-intensive, as new architectures must be designed and trained for every computational setting. To address this problem we develop a recurrent neural network (RNN) that successively improves prediction quality with each iteration. Importantly, the RNN may be deployed across a range of computational budgets by merely running the model for a variable number of iterations. We find that this architecture is uniquely suited for efficiently segmenting videos. By exploiting the segmentation of past frames, the RNN can perform video segmentation at similar quality but reduced computational cost compared to state-of-the-art image segmentation methods. When applied to static images in the PASCAL VOC 2012 and Cityscapes segmentation datasets, the RNN traces out a speed-accuracy curve that saturates near the performance of state-of-the-art segmentation methods.
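The anytime behaviour described above can be sketched as a loop that keeps refining a prediction until the iteration budget runs out. This toy version (not the paper's RNN) smooths a noisy 1D binary mask by a 3-neighbourhood majority vote each iteration:

```python
def refine_until_budget(mask, budget):
    """Iteratively refine a binary mask; stop after `budget` steps.

    Each step replaces every pixel with the majority vote of its
    3-neighbourhood (edges padded by replication), mimicking a model
    that improves its prediction every iteration and can be cut off
    at any computational budget.
    """
    for _ in range(budget):
        padded = [mask[0]] + mask + [mask[-1]]
        mask = [1 if padded[i - 1] + padded[i] + padded[i + 1] >= 2 else 0
                for i in range(1, len(padded) - 1)]
    return mask

noisy = [1, 1, 0, 1, 1, 0, 0, 1, 0, 0]
print(refine_until_budget(noisy, budget=1))  # [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
```

The key property is that the loop body is the same at every step, so a single trained model serves every budget; the feed-forward baselines the abstract contrasts with would need a separate architecture per budget.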

    A token-mixer architecture for CAD-RADS classification of coronary stenosis on multiplanar reconstruction CT images

    Background and objective: In patients with suspected Coronary Artery Disease (CAD), the severity of stenosis needs to be assessed for precise clinical management. An automatic deep learning-based algorithm to classify coronary stenosis lesions according to the Coronary Artery Disease Reporting and Data System (CAD-RADS) in multiplanar reconstruction images acquired with Coronary Computed Tomography Angiography (CCTA) is proposed. Methods: In this retrospective study, 288 patients with suspected CAD who underwent CCTA scans were included. To model long-range semantic information, which is needed to identify and classify stenoses with a challenging appearance, we adopted a token-mixer architecture (ConvMixer), which can learn structural relationships over the whole coronary artery. ConvMixer consists of a patch embedding layer followed by repeated convolutional blocks, enabling the algorithm to learn long-range dependencies between pixels. To visually assess ConvMixer performance, Gradient-Weighted Class Activation Mapping (Grad-CAM) analysis was used. Results: Experimental results using 5-fold cross-validation showed that our ConvMixer can classify significant coronary artery stenosis (i.e., stenosis with luminal narrowing ≥50%) with accuracy and sensitivity of 87% and 90%, respectively. For CAD-RADS 0 vs. 1–2 vs. 3–4 vs. 5 classification, ConvMixer achieved accuracy and sensitivity of 72% and 75%, respectively. Additional experiments showed that ConvMixer achieved a better trade-off between performance and complexity compared to pyramid-shaped convolutional neural networks. Conclusions: Our algorithm might provide clinicians with decision support, potentially reducing the interobserver variability for coronary artery stenosis evaluation.
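The ConvMixer layout described above has simple shape arithmetic: a patch embedding with patch size p turns an H×W image into an (H/p)×(W/p) grid of d-dimensional tokens, and each subsequent depthwise-plus-pointwise block (with padding) preserves that shape, so depth can be increased freely. A bookkeeping sketch (the sizes below are illustrative, not the paper's configuration):

```python
def convmixer_token_grid(h, w, patch_size, dim, depth):
    """Track tensor shapes through a ConvMixer-style network.

    Patch embedding: strided conv with kernel = stride = patch_size,
    so the spatial grid shrinks by patch_size in each dimension.
    Each of the `depth` mixer blocks (depthwise conv + pointwise conv,
    both padded) keeps the (dim, h', w') shape unchanged.
    Returns the shape after embedding and after every block.
    """
    assert h % patch_size == 0 and w % patch_size == 0
    shape = (dim, h // patch_size, w // patch_size)
    return [shape] * (depth + 1)

shapes = convmixer_token_grid(h=224, w=224, patch_size=7, dim=256, depth=8)
print(shapes[0], len(shapes))  # (256, 32, 32) 9
```

Because every block sees the full token grid and the depthwise kernels are padded rather than strided, stacking blocks widens the receptive field over the whole artery without ever downsampling, which is the "long-range dependency" property the abstract relies on.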

    Spinal cord gray matter segmentation using deep dilated convolutions

    Gray matter (GM) tissue changes have been associated with a wide range of neurological disorders and were also recently found to be relevant as a biomarker for disability in amyotrophic lateral sclerosis. The ability to automatically segment the GM is, therefore, an important task for modern studies of the spinal cord. In this work, we devise a modern, simple and end-to-end fully automated human spinal cord gray matter segmentation method using Deep Learning, which works on both in vivo and ex vivo MRI acquisitions. We evaluate our method against six independently developed methods on a GM segmentation challenge and report state-of-the-art results in 8 out of 10 evaluation metrics, as well as a major reduction in network parameters compared to traditional medical imaging architectures such as U-Nets. Comment: 13 pages, 8 figures
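The appeal of dilated convolutions in a segmentation network like this one is that stacking them grows the receptive field without pooling, so the full-resolution output needed for pixel-wise labelling is preserved. The receptive field of a stride-1 stack follows a simple recurrence; a sketch (the dilation schedules below are illustrative, not the paper's):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 dilated convolutions.

    Each layer with dilation d adds (kernel_size - 1) * d pixels:
        rf_0 = 1;  rf_i = rf_{i-1} + (kernel_size - 1) * d_i
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

print(receptive_field(3, [1, 1, 1]))     # plain 3x3 stack: 7
print(receptive_field(3, [1, 2, 4, 8]))  # exponential dilations: 31
```

With exponentially increasing dilations the receptive field grows geometrically in depth while the parameter count grows only linearly, which is how a compact network can still see enough spatial context to separate gray from white matter.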