    Stacked fully convolutional networks with multi-channel learning: application to medical image segmentation

    The automated segmentation of regions of interest (ROIs) in medical imaging is a fundamental requirement for deriving the high-level semantics used in image analysis for clinical decision support systems. Traditional segmentation approaches, such as region-based methods, depend heavily on hand-crafted features and the a priori knowledge of the user. As such, these methods are difficult to adopt within a clinical environment. Recently, methods based on fully convolutional networks (FCNs) have achieved great success in the segmentation of general images. FCNs leverage a large labeled dataset to hierarchically learn the features that best correspond to the shallow appearance as well as the deep semantics of the images. However, when applied to medical images, FCNs usually produce coarse ROI detection and poor boundary definitions, primarily due to the limited amount of labeled training data and weak constraints on label agreement among neighboring similar pixels. In this paper, we propose a new stacked FCN architecture with multi-channel learning (SFCN-ML). We embed the FCN in a stacked architecture to learn the foreground ROI features and background non-ROI features separately, and then integrate these different channels to produce the final segmentation result. In contrast to traditional FCN methods, our SFCN-ML architecture enables the visual attributes and semantics derived from both the fore- and background channels to be iteratively learned and inferred. We conducted extensive experiments on three public datasets with a variety of visual challenges. Our results show that our SFCN-ML is more effective and robust than a routine FCN and its variants, and other state-of-the-art methods.
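The channel-integration step described in the abstract can be illustrated with a minimal numpy sketch. This assumes the two stacked FCN branches have already produced per-pixel foreground and background score maps (the branches themselves, and the paper's learned integration, are omitted); the fusion shown here is a simple softmax over the two channels, not the authors' exact method.

```python
import numpy as np

def integrate_channels(fg_scores, bg_scores):
    """Fuse per-pixel foreground and background score maps into a binary
    segmentation: a pixel is labeled ROI when the foreground channel is
    more confident than the background channel.
    (Illustrative stand-in for the paper's learned integration step.)
    """
    stacked = np.stack([bg_scores, fg_scores])            # (2, H, W)
    stacked = stacked - stacked.max(axis=0)               # numeric stability
    probs = np.exp(stacked) / np.exp(stacked).sum(axis=0)  # softmax over channels
    return (probs[1] > probs[0]).astype(np.uint8)         # 1 = foreground ROI

# Toy example: hypothetical branch outputs for a bright square on a dark field.
fg = np.zeros((8, 8)); fg[2:6, 2:6] = 3.0   # hypothetical foreground logits
bg = np.ones((8, 8))                         # hypothetical background logits
mask = integrate_channels(fg, bg)            # 4x4 block of ones in the center
```

In the real architecture these two score maps would themselves be outputs of stacked FCN stages, refined iteratively rather than fused once.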

    Differentiation between Pancreatic Ductal Adenocarcinoma and Normal Pancreatic Tissue for Treatment Response Assessment using Multi-Scale Texture Analysis of CT Images

    Background: Pancreatic ductal adenocarcinoma (PDAC) is the most prevalent type of pancreatic cancer and has a high mortality rate; its staging depends heavily on the extent of involvement between the tumor and the surrounding vessels, making their delineation important for treatment response assessment in PDAC. Objective: This study aims to detect and visualize the tumor region and the surrounding vessels in PDAC CT scans since, unlike tumors in other abdominal organs, PDAC is highly difficult to detect clearly. Material and Methods: This retrospective study consists of three stages: 1) a patch-based algorithm for differentiating between the tumor region and healthy tissue using multi-scale texture analysis along with an L1-SVM (Support Vector Machine) classifier, 2) a voting-based approach, built on the standard logistic function, to mitigate false detections, and 3) 3D visualization of the tumor and the surrounding vessels using the ITK-SNAP software. Results: The results demonstrate that multi-scale texture analysis strikes a balance between recall and precision in differentiating tumor from healthy tissue, with an overall accuracy of 0.78±0.12 and a sensitivity of 0.90±0.09 in PDAC. Conclusion: Multi-scale texture analysis using statistical and wavelet-based features along with an L1-SVM can be employed to differentiate between healthy and pancreatic tumor tissue. In addition, 3D visualization of the tumor region and surrounding vessels can facilitate the assessment of treatment response in PDAC. However, the 3D visualization software must be further developed for integration with clinical applications.
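The second stage, voting built on the standard logistic function, can be sketched as follows. This is a hedged illustration, not the paper's implementation: it assumes the L1-SVM has already emitted a decision value per patch, maps those values through the logistic function, and keeps a region only when the mean vote of its overlapping patches clears a threshold.

```python
import numpy as np

def logistic(x):
    """Standard logistic function, mapping SVM decision values to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def vote_patches(decision_values, threshold=0.5):
    """Voting-based false-positive suppression (illustrative): convert the
    decision values of the overlapping patches covering one candidate region
    to logistic probabilities, and accept the region as tumor only if the
    mean vote exceeds `threshold` (both values are hypothetical here)."""
    votes = logistic(np.asarray(decision_values, dtype=float))
    return bool(votes.mean() > threshold)

# A region covered by mostly confident tumor patches is kept...
keep = vote_patches([2.1, 1.7, 0.4, 3.0])    # True
# ...while an isolated weak detection among healthy patches is discarded.
drop = vote_patches([1.2, -2.0, -1.5, -0.8])  # False
```

Averaging logistic probabilities rather than raw sign votes lets a few very confident patches outweigh borderline ones, which is the usual motivation for this kind of soft voting.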