Learning image context for segmentation of the prostate in CT-guided radiotherapy

Abstract

Accurate segmentation of the prostate is key to the success of external beam radiotherapy for prostate cancer. However, accurate prostate segmentation in computed tomography (CT) images remains challenging, mainly due to three factors: (1) low image contrast between the prostate and its surrounding tissues, (2) unpredictable prostate motion across different treatment days, and (3) large variations in the intensities and shapes of the bladder and rectum around the prostate. In this paper, an online-learning, patient-specific classification method based on location-adaptive image context is presented to address these issues and achieve precise segmentation of the prostate in CT images. Specifically, two sets of location-adaptive classifiers are placed along the two coordinate directions of a patient's planning image space and trained with the planning image and the previously segmented treatment images of the same patient to jointly segment the prostate in a new treatment image of that patient. In particular, each location-adaptive classifier, which itself consists of a set of sequential sub-classifiers, is recursively trained with both static image appearance features and iteratively updated image context features (extracted at different scales and orientations) for better identification of each prostate region. The proposed learning-based prostate segmentation method has been extensively evaluated on 161 images of 11 patients, each with more than 9 daily treatment 3D CT images. Our method achieves a mean Dice value of 0.908 and a mean ± SD average surface distance (ASD) of 1.40 ± 0.57 mm. Its performance is also compared with three other prostate segmentation methods, and the proposed method achieves the best segmentation accuracy among all methods under comparison.
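To make the recursive training idea concrete, below is a minimal Python sketch of a sequential sub-classifier (auto-context style) loop: each stage is trained on static appearance features concatenated with context features sampled from the previous stage's probability map. This is not the authors' implementation; the scikit-learn RandomForestClassifier, the extract_context_features helper, its offsets parameter, and the toy 2D data are all illustrative assumptions, whereas the paper uses its own classifiers with 3D multi-scale, multi-orientation context features.

```python
# Sketch of sequential (auto-context) sub-classifier training.
# NOT the authors' code; helper names and data are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_context_features(prob_map, offsets=(1, 2, 4, 8)):
    """Hypothetical context features: sample the current probability map
    at several offsets (scales) along each axis around every voxel."""
    feats = []
    for d in offsets:
        for axis in (0, 1):
            feats.append(np.roll(prob_map, d, axis=axis))
            feats.append(np.roll(prob_map, -d, axis=axis))
    return np.stack(feats, axis=-1).reshape(prob_map.size, -1)

def train_sequential_subclassifiers(appearance, labels, shape, n_stages=3):
    """Recursively train sub-classifiers on static appearance features
    plus context features from the previous stage's probability map."""
    prob_map = np.full(shape, 0.5)  # uninformative initial context
    stages = []
    for _ in range(n_stages):
        X = np.hstack([appearance, extract_context_features(prob_map)])
        clf = RandomForestClassifier(n_estimators=50).fit(X, labels)
        # Updated probability map becomes the context for the next stage.
        prob_map = clf.predict_proba(X)[:, 1].reshape(shape)
        stages.append(clf)
    return stages

# Toy 2D slice: random appearance features and a square "prostate" mask.
rng = np.random.default_rng(0)
shape = (32, 32)
appearance = rng.normal(size=(shape[0] * shape[1], 10))
mask = np.zeros(shape, dtype=int)
mask[10:22, 10:22] = 1
stages = train_sequential_subclassifiers(appearance, mask.ravel(), shape)
```

At test time, the trained stages would be applied in the same order, with each stage's predicted probability map feeding the context features of the next; the location-adaptive aspect of the method corresponds to training a separate such sequence per image location.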