1,296 research outputs found

    User Interaction in Semi-Automatic Segmentation of Organs at Risk: a Case Study in Radiotherapy

    Accurate segmentation of organs at risk is an important step in radiotherapy planning. Because manual segmentation is a tedious procedure and prone to inter- and intra-observer variability, there is growing interest in automated segmentation methods. However, automatic methods frequently fail to provide satisfactory results, and post-processing corrections are often needed. Semi-automatic segmentation methods are designed to overcome these problems by combining physicians’ expertise and computers’ potential. This study evaluates two semi-automatic segmentation methods with different types of user interaction, named the “strokes” and the “contour” methods, to provide insights into the role and impact of human-computer interaction. Two physicians participated in the experiment. In total, 42 case studies were carried out on five different types of organs at risk. For each case study, both the human-computer interaction process and the quality of the segmentation results were measured subjectively and objectively. Furthermore, different measures of the process and the results were correlated. A total of 36 quantifiable and ten non-quantifiable correlations were identified for each type of interaction. Among those pairs of measures, 20 of the contour method and 22 of the strokes method were strongly or moderately correlated, either directly or inversely. Based on those correlated measures, it is concluded that: (1) in the design of semi-automatic segmentation methods, user interactions need to be less cognitively challenging; (2) based on the observed workflows and preferences of physicians, there is a need for flexibility in the interface design; (3) the correlated measures provide insights that can be used to improve user interaction design.
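    A minimal sketch, assuming hypothetically that editing time is taken as the process measure and the Dice overlap with a reference contour as the quality measure, of the kind of process/result correlation analysis described above; all numbers and helper names below are illustrative, not the study's data or code.

```python
# Synthetic example: correlate an interaction measure (editing time per case)
# with an objective quality measure (Dice overlap). All numbers are made up.
import numpy as np
from scipy.stats import spearmanr

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Example Dice computation on two toy 2-D masks (e.g. semi-automatic vs. reference).
auto = np.zeros((64, 64), bool); auto[10:40, 10:40] = True
ref = np.zeros((64, 64), bool); ref[15:45, 12:42] = True
print(f"Dice = {dice(auto, ref):.2f}")

# Toy per-case measurements: editing time (seconds) and the Dice score of the
# resulting contour against a reference segmentation.
rng = np.random.default_rng(0)
interaction_time = rng.uniform(30, 300, size=10)
dice_scores = np.clip(0.95 - interaction_time / 1000 + rng.normal(0, 0.02, 10), 0, 1)

rho, p = spearmanr(interaction_time, dice_scores)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```

    Spearman's rank correlation is used here because the process and result measures need not be linearly related.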

    Dynamically balanced online random forests for interactive scribble-based segmentation

    Interactive scribble-and-learning-based segmentation is attractive for its good performance and reduced number of user interactions. Scribbles for foreground and background are often imbalanced, and with the arrival of new scribbles the imbalance ratio may change considerably. Failing to deal with imbalanced training data and a changing imbalance ratio may lead to decreased sensitivity and accuracy of the segmentation. We propose a generic Dynamically Balanced Online Random Forest (DyBa ORF) to deal with these problems, combining a dynamically balanced online Bagging method with a tree growing and shrinking strategy to update the random forests. We validated DyBa ORF on UCI machine learning data sets and applied it to two different clinical applications: 2D segmentation of the placenta from fetal MRI and of adult lungs from radiographic images. Experiments show that it outperforms traditional ORF in dealing with imbalanced data with a changing imbalance ratio, while maintaining a comparable accuracy and a higher efficiency compared with its offline counterpart. Our results demonstrate that DyBa ORF is more suitable than existing ORF for learning-based interactive image segmentation.
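    The sketch below illustrates the class-balanced online bagging idea underlying this kind of method, assuming Oza-style Poisson resampling with a class-dependent rate; it is not the authors' DyBa ORF (the tree growing and shrinking strategy is omitted, and scikit-learn trees are simply refit on the resampled buffer).

```python
# Sketch of class-balanced online bagging: each incoming scribble sample is
# replicated Poisson(lambda_c) times, with a larger rate for the minority class.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

class BalancedOnlineBagging:
    def __init__(self, n_trees=10, max_depth=8):
        self.trees = [DecisionTreeClassifier(max_depth=max_depth) for _ in range(n_trees)]
        self.X_parts, self.y_parts = [], []
        self.counts = {0: 0, 1: 0}  # background / foreground scribble pixel counts

    def update(self, X_new, y_new):
        """Absorb a new batch of scribble samples and refit each tree."""
        self.X_parts.append(X_new)
        self.y_parts.append(y_new)
        for c in (0, 1):
            self.counts[c] += int(np.sum(y_new == c))
        X = np.concatenate(self.X_parts)
        y = np.concatenate(self.y_parts)
        # Minority class gets a larger Poisson rate, so its samples are
        # replicated more often in each per-tree bootstrap.
        n_max = max(self.counts.values())
        lam = {c: n_max / max(self.counts[c], 1) for c in (0, 1)}
        for tree in self.trees:
            copies = rng.poisson([lam[int(c)] for c in y])
            idx = np.repeat(np.arange(len(y)), copies)
            if len(idx) == 0 or len(np.unique(y[idx])) < 2:
                continue  # need both classes present to fit a binary classifier
            tree.fit(X[idx], y[idx])

    def predict_proba_fg(self, X):
        """Average foreground probability over the ensemble."""
        return np.mean([t.predict_proba(X)[:, 1] for t in self.trees], axis=0)

# Toy usage with imbalanced 2-D features (e.g. intensity and gradient magnitude).
X_bg = rng.normal(0.0, 1.0, (200, 2))
X_fg = rng.normal(2.0, 1.0, (20, 2))
clf = BalancedOnlineBagging()
clf.update(np.vstack([X_bg, X_fg]), np.array([0] * 200 + [1] * 20))
print(clf.predict_proba_fg(rng.normal(2.0, 1.0, (5, 2))))
```

    Because the Poisson rate is recomputed from the running class counts at every update, the effective balance adapts as new scribbles change the imbalance ratio.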

    Active Contours and Image Segmentation: The Current State Of the Art

    Image segmentation is a fundamental task in image analysis, responsible for partitioning an image into multiple sub-regions based on a desired feature. Active contours have been widely used as attractive image segmentation methods because they always produce sub-regions with continuous boundaries, whereas kernel-based edge detection methods, e.g. Sobel edge detectors, often produce discontinuous boundaries. The use of level set theory has provided more flexibility and convenience in the implementation of active contours. However, traditional edge-based active contour models are applicable only to relatively simple images whose sub-regions are uniform and free of internal edges. In this paper, we briefly review the taxonomy and current state of the art of image segmentation and the use of active contours.
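    As an illustration of a level-set active contour, the following sketch runs scikit-image's morphological Chan-Vese implementation, a region-based model that does not rely on image gradients and therefore copes better with sub-regions containing internal edges; the test image and parameter values are placeholders.

```python
# Sketch of a region-based, level-set active contour using scikit-image's
# morphological Chan-Vese implementation. Parameter values are illustrative.
import numpy as np
from skimage import data, img_as_float
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

image = img_as_float(data.camera())              # built-in grayscale test image
init = checkerboard_level_set(image.shape, 6)    # initial level set

# 50 iterations, passed positionally because the keyword name differs across
# scikit-image versions; "smoothing" controls how strongly the curve is regularized.
mask = morphological_chan_vese(image, 50, init_level_set=init, smoothing=3)

print(mask.shape, mask.dtype)  # binary mask with the same size as the image
```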

    Extreme clicking for efficient object annotation

    Manually annotating object bounding boxes is central to building computer vision datasets, and it is very time consuming (annotating ILSVRC [53] took 35s per high-quality box [62]). It involves clicking on imaginary corners of a tight box around the object. This is difficult, as these corners are often outside the actual object and several adjustments are required to obtain a tight box. We propose extreme clicking instead: we ask the annotator to click on four physical points on the object: the top-, bottom-, left-, and right-most points. This task is more natural, and these points are easy to find. We crowd-source extreme point annotations for PASCAL VOC 2007 and 2012 and show that (1) annotation time is only 7s per box, 5x faster than the traditional way of drawing boxes [62]; (2) the quality of the boxes is as good as the original ground truth drawn the traditional way; (3) detectors trained on our annotations are as accurate as those trained on the original ground truth. Moreover, our extreme clicking strategy not only yields box coordinates, but also four accurate boundary points. We show (4) how to incorporate them into GrabCut to obtain more accurate segmentations than those delivered when initializing it from bounding boxes; (5) semantic segmentation models trained on these segmentations outperform those trained on segmentations derived from bounding boxes. (Comment: ICCV 2017)
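    The sketch below shows one way the four extreme clicks could be turned into a bounding box and used to seed GrabCut, in the spirit of point (4); the image, click coordinates, and refinement step are illustrative placeholders, and the paper's boundary-based initialization is not reproduced.

```python
# Sketch: derive a GrabCut rectangle from four extreme clicks, then mark the
# clicked boundary points as definite foreground and refine in mask mode.
import numpy as np
import cv2

# Synthetic test image: a bright object on a dark, slightly noisy background.
rng = np.random.default_rng(0)
img = np.full((256, 256, 3), 30, np.uint8)
cv2.ellipse(img, (130, 125), (70, 85), 0, 0, 360, (200, 180, 160), -1)
img = cv2.add(img, rng.integers(0, 25, img.shape, dtype=np.uint8))

# Hypothetical extreme clicks (x, y): top-, bottom-, left- and right-most object points.
extremes = [(130, 40), (130, 210), (60, 125), (200, 125)]

xs = [p[0] for p in extremes]
ys = [p[1] for p in extremes]
x0, y0, x1, y1 = min(xs), min(ys), max(xs), max(ys)
rect = (x0, y0, x1 - x0, y1 - y0)            # GrabCut rectangle: (x, y, w, h)

mask = np.zeros(img.shape[:2], np.uint8)
bgd = np.zeros((1, 65), np.float64)
fgd = np.zeros((1, 65), np.float64)
cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

# The extreme clicks lie on the object boundary, so mark a small neighbourhood
# around each as definite foreground and run GrabCut again in mask mode.
for (x, y) in extremes:
    cv2.circle(mask, (x, y), 3, cv2.GC_FGD, -1)
cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)

segmentation = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
print(segmentation.sum(), "foreground pixels")
```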