
    A novel flexible framework with automatic feature correspondence optimization for nonrigid registration in radiotherapy

    Technical improvements in planning, dose delivery, and verification of patient positioning have substantially widened the therapeutic window for radiation treatment of cancer. However, changes in patient anatomy during treatment limit the exploitation of these new techniques. To further improve radiation treatments, anatomical changes need to be modeled and accounted for. Non-rigid registration can be used for this purpose. This paper describes the design, implementation, and validation of a new framework for non-rigid registration for radiotherapy applications. The core of this framework is an improved version of the Thin Plate Splines Robust Point Matching (TPS-RPM) algorithm. The TPS-RPM algorithm estimates a global correspondence and a transformation between the points that represent organs of interest in two image sets. However, the algorithm does not allow for the inclusion of prior knowledge on the correspondence of a subset of points and can therefore lead to anatomically inconsistent solutions. In this paper, TPS-RPM was improved by employing a novel correspondence filter that supports simultaneous registration of multiple structures. The improved method allows for coherent organ registration and for the inclusion of user-defined landmarks, lines, and surfaces inside and outside of structures of interest. A procedure to generate control points from segmented organs is described. The framework parameters r and λ, which control the number of points and the non-rigidness of the transformation respectively, were optimized for three sites with different degrees of deformation: head and neck, prostate, and cervix, using two cases per site. For the head and neck cases, the salivary glands were manually contoured on CT scans; for the prostate cases, the prostate and the vesicles; and for the cervix cases, the cervix-uterus, the bladder, and the rectum. The transformation error obtained with the best set of parameters was below 1 mm for all studied cases. The lengths of the deformation vectors were on average (± 1 standard deviation) 5.8 ± 2.5 and 2.6 ± 1.1 mm for the head and neck cases, 7.2 ± 4.5 and 8.6 ± 1.9 mm for the prostate cases, and 19.0 ± 11.6 and 14.5 ± 9.3 mm for the cervix cases. Distinguishable anatomical features were identified for each case and used to validate the registration by calculating residual distances after transformation: 1.5 ± 0.8, 2.3 ± 1.0, and 6.3 ± 2.9 mm for the head and neck, prostate, and cervix sites, respectively. Finally, we demonstrated how the inclusion of these anatomical features in the registration process reduced the residual distances to 0.8 ± 0.5, 0.6 ± 0.5, and 1.3 ± 0.7 mm, respectively. The inclusion of additional anatomical features produced more anatomically coherent transformations without compromising the transformation error. We conclude that the presented non-rigid registration framework is a powerful tool to simultaneously register multiple segmented organs of very different complexity.
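The alternating structure of TPS-RPM (a soft correspondence estimate followed by a transformation update, under an annealed temperature) can be illustrated with a minimal sketch. This is not the paper's implementation: it replaces the thin-plate-spline solve with a pure translation update and operates on small made-up 2-D point sets, but it shows the soft-assign / update / anneal loop that the abstract describes.

```python
import math

def soft_assign(src, dst, temperature):
    """Soft correspondence: each source point distributes weight over all
    destination points via a Gaussian kernel (one annealing step of RPM)."""
    corr = []
    for sx, sy in src:
        w = [math.exp(-((sx - dx) ** 2 + (sy - dy) ** 2) / temperature)
             for dx, dy in dst]
        total = sum(w) or 1.0
        corr.append([wi / total for wi in w])
    return corr

def estimate_translation(src, dst, corr):
    """Weighted least-squares update of a pure translation (a stand-in for
    the TPS solve used in the full algorithm)."""
    tx = ty = 0.0
    for (sx, sy), row in zip(src, corr):
        ex = sum(w * dx for w, (dx, _) in zip(row, dst))
        ey = sum(w * dy for w, (_, dy) in zip(row, dst))
        tx += ex - sx
        ty += ey - sy
    n = len(src)
    return tx / n, ty / n

def rpm_translation(src, dst, t_init=16.0, t_final=0.25, rate=0.5):
    """Alternate soft correspondence and transform estimation while
    annealing the temperature, as in TPS-RPM."""
    tx = ty = 0.0
    t = t_init
    while t >= t_final:
        moved = [(x + tx, y + ty) for x, y in src]
        corr = soft_assign(moved, dst, t)
        dtx, dty = estimate_translation(moved, dst, corr)
        tx, ty = tx + dtx, ty + dty
        t *= rate
    return tx, ty
```

At high temperature the correspondences are fuzzy and the update roughly matches centroids; as the temperature anneals, correspondences sharpen and the estimate refines toward a one-to-one matching.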

    Collaborative regression-based anatomical landmark detection

    Anatomical landmark detection plays an important role in medical image analysis, e.g., for registration, segmentation, and quantitative analysis. Among the various existing methods for landmark detection, regression-based methods have recently drawn much attention due to their robustness and efficiency. In such methods, landmarks are localized through voting from all image voxels, which is completely different from classification-based methods that use voxel-wise classification to detect landmarks. Despite their robustness, the accuracy of regression-based landmark detection methods is often limited due to 1) inclusion of uninformative image voxels in the voting procedure, and 2) lack of effective ways to incorporate inter-landmark spatial dependency into the detection step. In this paper, we propose a collaborative landmark detection framework to address these limitations. The concept of collaboration is reflected in two aspects. 1) Multi-resolution collaboration. A multi-resolution strategy is proposed to hierarchically localize landmarks by gradually excluding uninformative votes from faraway voxels. Moreover, for the informative voxels near the landmark, a spherical sampling strategy is also designed in the training stage to improve their prediction accuracy. 2) Inter-landmark collaboration. A confidence-based landmark detection strategy is proposed to improve the detection accuracy of “difficult-to-detect” landmarks by using spatial guidance from “easy-to-detect” landmarks. To evaluate our method, we conducted extensive experiments on three datasets for detecting prostate landmarks and head & neck landmarks in computed tomography (CT) images, and also dental landmarks in cone beam computed tomography (CBCT) images. The results show the effectiveness of our collaborative landmark detection framework in improving landmark detection accuracy, compared to other state-of-the-art methods.
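The voting idea, and the coarse-to-fine exclusion of uninformative faraway voxels, can be sketched in a few lines. This is an illustrative toy rather than the paper's method: the `votes` below are hand-made tuples (voxel position plus that voxel's predicted landmark position), standing in for the output of a trained regressor, and the multi-resolution hierarchy is reduced to two shrinking radii.

```python
def vote_landmark(votes, radius=None, center=None):
    """Aggregate voxel votes (predicted landmark positions) by averaging,
    optionally keeping only votes cast by voxels within `radius` of a
    previous estimate `center`."""
    kept = votes
    if radius is not None and center is not None:
        kept = [(vx, vy, px, py) for vx, vy, px, py in votes
                if (vx - center[0]) ** 2 + (vy - center[1]) ** 2 <= radius ** 2]
        if not kept:          # fall back rather than divide by zero
            kept = votes
    xs = [px for _, _, px, _ in kept]
    ys = [py for _, _, _, py in kept]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def hierarchical_detect(votes, radii=(8.0, 3.0)):
    """Coarse-to-fine localization: start from all votes, then progressively
    discard votes cast by voxels far from the current estimate."""
    est = vote_landmark(votes)
    for r in radii:
        est = vote_landmark(votes, radius=r, center=est)
    return est
```

In the toy example below, a single faraway voxel with a bad prediction skews the initial all-voxel average, and the shrinking radius removes its influence in the refinement passes.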

    Incremental Learning with Selective Memory (ILSM): Towards Fast Prostate Localization for Image Guided Radiotherapy

    Image-guided radiotherapy (IGRT) requires fast and accurate localization of the prostate in 3-D treatment images, which is challenging due to low tissue contrast and large anatomical variation across patients. On the other hand, the IGRT workflow involves collecting a series of computed tomography (CT) images from the same patient under treatment. These images contain valuable patient-specific information yet are often neglected by previous works. In this paper, we propose a novel learning framework, namely incremental learning with selective memory (ILSM), to effectively learn the patient-specific appearance characteristics from these patient-specific images. Specifically, starting with a population-based discriminative appearance model, ILSM aims to “personalize” the model to fit patient-specific appearance characteristics. The model is personalized in two steps: backward pruning, which discards obsolete population-based knowledge, and forward learning, which incorporates patient-specific characteristics. By effectively combining the patient-specific characteristics with the general population statistics, the incrementally learned appearance model can localize the prostate of a specific patient much more accurately. This work has three contributions: 1) the proposed incremental learning framework can capture patient-specific characteristics more effectively than traditional learning schemes, such as pure patient-specific learning, population-based learning, and mixture learning with patient-specific and population data; 2) this learning framework does not make any parametric model assumption, hence allowing the adoption of any discriminative classifier; and 3) using ILSM, we can localize the prostate in treatment CTs accurately (DSC ∼0.89) and fast (∼4 s), which satisfies the real-world clinical requirements of IGRT.
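The two personalization steps (backward pruning, then forward learning) can be sketched with a deliberately simple stand-in model. This is not the paper's discriminative appearance model: here each "classifier" is just a 1-D threshold with a sign, and patient samples are (feature, label) pairs, but the prune-then-extend logic mirrors the description above.

```python
def accuracy(clf, data):
    """Fraction of (feature, label) pairs a 1-D threshold classifier
    (threshold, sign) gets right; it predicts 1 when sign*(x - thr) > 0."""
    thr, sign = clf
    correct = sum(1 for x, y in data
                  if (1 if sign * (x - thr) > 0 else 0) == y)
    return correct / len(data)

def backward_prune(population_clfs, patient_data, min_acc=0.7):
    """Backward pruning: discard population classifiers whose knowledge is
    obsolete for this patient (low accuracy on patient-specific samples)."""
    return [c for c in population_clfs if accuracy(c, patient_data) >= min_acc]

def forward_learn(patient_data, thresholds, min_acc=0.9):
    """Forward learning: keep candidate patient-specific classifiers that
    separate the patient's own samples well."""
    cands = [(t, s) for t in thresholds for s in (1, -1)]
    return [c for c in cands if accuracy(c, patient_data) >= min_acc]

def ilsm(population_clfs, patient_data, thresholds):
    """Incremental learning with selective memory: prune, then extend."""
    kept = backward_prune(population_clfs, patient_data)
    return kept + forward_learn(patient_data, thresholds)
```

The resulting model keeps only the population knowledge that still fits the patient, plus new classifiers fit directly to the patient's treatment images, which is the combination the abstract credits for the accuracy gain.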