Segmenting TRUS Video Sequences Using Local Shape Statistics

Automatic segmentation of the prostate in transrectal ultrasound (TRUS) may improve the fusion of TRUS with magnetic resonance imaging (MRI) for TRUS/MRI-guided prostate biopsy and local therapy. Segmenting the prostate in TRUS images is very challenging, especially at the base and apex of the gland, because of the large shape variation and low signal-to-noise ratio there. To segment the whole prostate from 2D TRUS video sequences, this paper presents a new model-based algorithm that uses both global population-based and adaptive local shape statistics to guide segmentation. By adaptively learning shape statistics in a local neighborhood during the segmentation process, the algorithm effectively captures both patient-specific shape statistics and the large shape variations in the base and apex regions. After incorporating the learned shape statistics into a deformable model, the proposed method accurately segments the entire gland, with significantly improved performance at the base and apex. The method segments TRUS video in a fully automatic fashion. In our experiments, 19 video sequences with 3064 frames in total, acquired from 19 different patients undergoing prostate cancer biopsy, were used for validation. Segmenting one frame took about 200 ms on a Core 2 1.86 GHz PC. The average mean absolute distance (MAD) error was 1.65 ± 0.47 mm for the proposed method, compared with 2.50 ± 0.81 mm for independent frame segmentation and 2.01 ± 0.63 mm for frame-to-frame propagation of segmentation results. Furthermore, relative to these two baselines, the proposed method reduced the MAD error by 49.4% and 18.9%, respectively, in the base, and by 55.6% and 17.7% in the apex.
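
The abstract describes, but does not detail, how local shape statistics are learned online and combined with the global population model. The sketch below is one plausible realization, assuming landmark-based shape vectors and a standard Active Shape Model-style PCA constraint; the class name, the window and local_weight parameters, and the linear blending rule are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def pca_stats(shapes):
    """Mean shape and principal modes from a (num_shapes, 2*num_landmarks) array."""
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered shapes yields the principal variation modes directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / max(len(shapes) - 1, 1)
    return mean, vt, variances

def constrain_shape(shape, mean, modes, variances, n_modes=5, beta=3.0):
    """Project a shape onto the leading modes and clamp each coefficient to
    +/- beta standard deviations (the usual ASM plausibility constraint)."""
    k = min(n_modes, len(variances))
    b = modes[:k] @ (shape - mean)
    limit = beta * np.sqrt(variances[:k])
    b = np.clip(b, -limit, limit)
    return mean + modes[:k].T @ b

class AdaptiveLocalShapeModel:
    """Hypothetical sketch: blend global population statistics with
    statistics learned from the most recently segmented frames."""

    def __init__(self, global_shapes, window=15, local_weight=0.5):
        # global_shapes: (num_training_shapes, 2*num_landmarks) array.
        self.g_mean, self.g_modes, self.g_var = pca_stats(global_shapes)
        self.window = window            # assumed size of the local neighborhood
        self.local_weight = local_weight  # assumed local/global blending factor
        self.recent = []                # sliding window of accepted segmentations

    def regularize(self, raw_shape):
        """Constrain a raw deformable-model estimate by the shape statistics."""
        shape = constrain_shape(raw_shape, self.g_mean, self.g_modes, self.g_var)
        if len(self.recent) >= 3:  # need a few frames before local PCA is useful
            l_mean, l_modes, l_var = pca_stats(np.stack(self.recent))
            local = constrain_shape(raw_shape, l_mean, l_modes, l_var)
            w = self.local_weight
            shape = (1.0 - w) * shape + w * local
        return shape

    def accept(self, shape):
        """Add a finished segmentation to the local neighborhood."""
        self.recent.append(shape)
        if len(self.recent) > self.window:
            self.recent.pop(0)
```

The sliding window is what makes the statistics "local" in time: early in a sequence the model falls back on the global population prior, and as frames accumulate the local PCA increasingly reflects the current patient's gland shape, which is how the abstract's patient-specific adaptation could be obtained.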
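For reference, the evaluation metric quoted above, the mean absolute distance (MAD) between a segmented contour and a ground-truth contour, can be computed as below. The symmetric form is a common convention; whether the paper symmetrizes the distance is not stated, so that choice is an assumption here.

```python
import numpy as np

def mean_absolute_distance(contour_a, contour_b):
    """Symmetric mean absolute distance between two contours given as
    (N, 2) and (M, 2) arrays of (x, y) points in the same units."""
    # Pairwise Euclidean distances between all points of the two contours.
    d = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=-1)
    # Average each point's distance to the nearest point on the other
    # contour, symmetrized over both directions.
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```

If the contours are in pixel coordinates, the result must be multiplied by the pixel spacing to obtain millimetres, matching the 1.65 ± 0.47 mm figure reported above.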