
    Image-to-Graph Convolutional Network for 2D/3D Deformable Model Registration of Low-Contrast Organs

    Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, e.g., in image-guided radiotherapy and surgical guidance. We propose an image-to-graph convolutional network that achieves deformable registration of a three-dimensional (3D) organ mesh from a low-contrast two-dimensional (2D) projection image. This framework enables simultaneous training of two types of transformation: from the 2D projection image to a displacement map, and from the sampled per-vertex feature to a 3D displacement that satisfies the geometrical constraint of the mesh structure. Assuming application to radiation therapy, the 2D/3D deformable registration performance is verified for multiple abdominal organs that have not been targeted to date, i.e., the liver, stomach, duodenum, and kidney, and for pancreatic cancer. The experimental results show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from digitally reconstructed radiographs with clinically acceptable accuracy.
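The per-vertex sampling step described above — reading features from the predicted 2D map at each projected mesh vertex — is typically done by bilinear interpolation at continuous pixel coordinates. The following NumPy sketch is illustrative only, not the authors' implementation; the function name, array shapes, and (x, y) ordering are assumptions.

```python
import numpy as np

def bilinear_sample(feature_map, points):
    """Sample per-vertex features from a 2D map at continuous (x, y) points.

    feature_map: (H, W, C) array, e.g. a displacement map predicted
                 from the projection image.
    points: (N, 2) array of projected vertex coordinates in pixels.
    Returns an (N, C) array of bilinearly interpolated features.
    """
    h, w, _ = feature_map.shape
    x = np.clip(points[:, 0], 0, w - 1)
    y = np.clip(points[:, 1], 0, h - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.minimum(x0 + 1, w - 1), np.minimum(y0 + 1, h - 1)
    wx, wy = (x - x0)[:, None], (y - y0)[:, None]
    # Interpolate along x on the top and bottom rows, then along y.
    top = feature_map[y0, x0] * (1 - wx) + feature_map[y0, x1] * wx
    bot = feature_map[y1, x0] * (1 - wx) + feature_map[y1, x1] * wx
    return top * (1 - wy) + bot * wy
```

A second network head would then map each sampled feature vector to a 3D vertex displacement under the mesh's geometric constraints.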

    Real-time intrafraction motion monitoring in external beam radiotherapy

    © 2019 Institute of Physics and Engineering in Medicine. Radiotherapy (RT) aims to deliver a spatially conformal dose of radiation to tumours while maximizing the dose sparing to healthy tissues. However, the internal patient anatomy is constantly moving due to respiratory, cardiac, gastrointestinal and urinary activity. The long-term goal of the RT community to 'see what we treat, as we treat' and to act on this information instantaneously has resulted in rapid technological innovation. Specialized treatment machines, such as robotic or gimbal-steered linear accelerators (linacs) with in-room imaging suites, have been developed specifically for real-time treatment adaptation. Additional equipment, such as stereoscopic kilovoltage (kV) imaging, ultrasound transducers and electromagnetic transponders, has been developed for intrafraction motion monitoring on conventional linacs. Magnetic resonance imaging (MRI) has been integrated with cobalt treatment units and more recently with linacs. In addition to hardware innovation, software development has played a substantial role in the development of motion monitoring methods based on respiratory motion surrogates and planar kV or megavoltage (MV) imaging that is available on standard-equipped linacs. In this paper, we review and compare the different intrafraction motion monitoring methods proposed in the literature and demonstrated in real-time on clinical data, as well as their possible future developments. We then discuss general considerations on validation and quality assurance for clinical implementation. Besides photon RT, particle therapy is increasingly used to treat moving targets. However, transferring motion monitoring technologies from linacs to particle beam lines presents substantial challenges. Lessons learned from the implementation of real-time intrafraction monitoring for photon RT will be used as a basis to discuss the implementation of these methods for particle RT.
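Surrogate-based monitoring of the kind reviewed here typically rests on a correlation model linking an external breathing signal (e.g. a marker-block amplitude) to the internal target position measured intermittently by imaging. A minimal per-axis linear least-squares sketch follows; the linear form, function names, and variable shapes are assumptions for illustration — clinical models are usually richer (phase, velocity, or polynomial terms) and are refit as the correlation drifts.

```python
import numpy as np

def fit_surrogate_model(surrogate, target):
    """Fit a per-axis linear model: target ≈ a * surrogate + b.

    surrogate: (T,) external breathing signal samples.
    target: (T, 3) internal target positions (mm) from concurrent imaging.
    Returns (a, b), each of shape (3,), one coefficient pair per axis.
    """
    # Design matrix with an intercept column; solve in the least-squares sense.
    A = np.stack([surrogate, np.ones_like(surrogate)], axis=1)  # (T, 2)
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return coef[0], coef[1]

def predict_target(surrogate_value, a, b):
    """Estimate the 3D internal target position from one surrogate sample."""
    return a * surrogate_value + b
```

Between imaging events, `predict_target` provides a real-time position estimate from the continuously measured surrogate alone.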

    CNN-based real-time 2D-3D deformable registration from a single X-ray projection

    Purpose: The purpose of this paper is to present a method for real-time 2D-3D non-rigid registration using a single fluoroscopic image. Such a method can find applications in surgery, interventional radiology and radiotherapy. By estimating a three-dimensional displacement field from a 2D X-ray image, anatomical structures segmented in the preoperative scan can be projected onto the 2D image, thus providing a mixed reality view. Methods: A dataset composed of displacement fields and 2D projections of the anatomy is generated from the preoperative scan. From this dataset, a neural network is trained to recover the unknown 3D displacement field from a single projection image. Results: Our method is validated on lung 4D CT data at different stages of the lung deformation. The training is performed on a 3D CT using random (non-domain-specific) diffeomorphic deformations, to which perturbations mimicking the pose uncertainty are added. The model achieves a mean TRE over a series of landmarks ranging from 2.3 to 5.5 mm depending on the amplitude of deformation. Conclusion: In this paper, a CNN-based method for real-time 2D-3D non-rigid registration is presented. This method is able to cope with pose estimation uncertainties, making it applicable to actual clinical scenarios, such as lung surgery, where the C-arm pose is planned before the intervention.
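The mean target registration error (TRE) quoted above is the average Euclidean distance between corresponding landmarks after the estimated displacement field is applied. A minimal sketch (function name assumed):

```python
import numpy as np

def target_registration_error(predicted, reference):
    """Mean TRE: average Euclidean distance between paired landmarks.

    predicted, reference: (N, 3) landmark coordinates in mm, where
    `predicted` holds the landmarks warped by the estimated field and
    `reference` the ground-truth positions (e.g. from 4D CT).
    """
    return float(np.mean(np.linalg.norm(predicted - reference, axis=1)))
```

Reporting TRE per deformation amplitude, as in the abstract, just means grouping landmark pairs by respiratory phase before averaging.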

    Artificial General Intelligence for Radiation Oncology

    The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts, and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data, to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology, including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care, with the key insight being AGI's ability to exploit multimodal clinical data at scale.

    Medical physics challenges in clinical MR-guided radiotherapy

    The integration of magnetic resonance imaging (MRI) for guidance in external beam radiotherapy has been the focus of significant research and development efforts in recent years. The current availability of linear accelerators with an embedded MRI unit, providing volumetric imaging at excellent soft-tissue contrast, is expected to provide novel possibilities in the implementation of image-guided adaptive radiotherapy (IGART) protocols. This study reviews open medical physics issues in MR-guided radiotherapy (MRgRT) implementation, with a focus on current approaches and on the potential for innovation in IGART.

    Daily imaging in MRgRT provides the ability to visualize the static anatomy, to capture internal tumor motion, and to extract quantitative image features for treatment verification and monitoring. These capabilities enable treatment adaptation, with potential benefits in terms of personalized medicine. The use of online MRI requires dedicated efforts to perform accurate dose measurements and calculations, owing to the presence of magnetic fields. Likewise, MRgRT requires dedicated quality assurance (QA) protocols for safe clinical implementation.

    Reacting to anatomical changes in MRgRT, as visualized on daily images, demands treatment adaptation concepts with stringent requirements for fast and accurate validation before the treatment fraction can be delivered. This entails specific challenges in terms of treatment workflow optimization, QA, and verification of the expected delivered dose while the patient is in the treatment position. These challenges require specialized medical physics developments aimed at fully exploiting MRI capabilities. In turn, the use of MRgRT allows for higher confidence in tumor targeting and organ-at-risk (OAR) sparing.

    The systematic use of MRgRT brings the possibility of leveraging IGART methods for the optimization of tumor targeting and quantitative treatment verification. Although several challenges exist, the intrinsic benefits of MRgRT will provide a deeper understanding of dose delivery effects on an individual basis, with the potential for further treatment personalization.

    Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review

    Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Recently, artificial intelligence (AI) has demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdominal, and pelvic tumor motion management/tracking in radiotherapy and provides a literature summary on the topic. We also discuss the limitations of these algorithms and propose potential improvements.
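A useful classical baseline against which the AI trackers surveyed here are often compared is template matching on planar kV/MV frames via normalized cross-correlation (NCC) inside a local search window. The following is a hypothetical brute-force NumPy sketch; the function names and the exhaustive search are illustrative only, and real-time systems use far more efficient matchers or learned features.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def track_template(frame, template, prev_xy, search=10):
    """Find the template near its previous (x, y) position in a new frame.

    Scores every candidate window within ±search pixels of prev_xy and
    returns the best (x, y) top-left corner and its NCC score.
    """
    th, tw = template.shape
    best, best_xy = -2.0, prev_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = prev_xy[1] + dy, prev_xy[0] + dx
            if y < 0 or x < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue  # candidate window falls outside the frame
            score = ncc(frame[y:y + th, x:x + tw], template)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```

The limitations the review discusses for AI trackers (training bias, transparency) trade off here against this baseline's own weaknesses: sensitivity to contrast changes, deformation, and out-of-plane motion.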

    Statistical deformation reconstruction using multi-organ shape features for pancreatic cancer localization

    Respiratory motion and the associated deformations of abdominal organs and tumors are essential information in clinical applications. However, inter- and intra-patient multi-organ deformations are complex and have not been statistically formulated, whereas single-organ deformations have been widely studied. In this paper, we introduce a multi-organ deformation library and its application to deformation reconstruction based on the shape features of multiple abdominal organs. Statistical multi-organ motion/deformation models of the stomach, liver, left and right kidneys, and duodenum were generated by shape matching their region labels defined on four-dimensional computed tomography images. A total of 250 volumes were measured from 25 pancreatic cancer patients. This paper also proposes per-region-based deformation learning using a non-linear kernel model to predict the displacement of pancreatic cancer for adaptive radiotherapy. The experimental results show that the proposed concept estimates deformations better than general per-patient-based learning models and achieves a clinically acceptable estimation error, with a mean distance of 1.2 ± 0.7 mm and a Hausdorff distance of 4.2 ± 2.3 mm throughout the respiratory motion.
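The reported errors pair a mean distance with a Hausdorff distance, which captures the worst-case boundary discrepancy. For point-sampled surfaces both can be sketched with brute-force pairwise distances, as below; the function names are assumptions, and for large point sets an optimized routine such as SciPy's `scipy.spatial.distance.directed_hausdorff` would be preferable.

```python
import numpy as np

def hausdorff_distance(p, q):
    """Symmetric Hausdorff distance between point sets p (N, 3) and q (M, 3):
    the largest distance from any point in one set to its nearest
    neighbor in the other set."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)  # (N, M)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

def mean_surface_distance(p, q):
    """Symmetrized mean nearest-neighbor distance between two point sets."""
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return float((d.min(axis=1).mean() + d.min(axis=0).mean()) / 2)
```

Because the Hausdorff distance is driven by the single worst point, it is the stricter of the two metrics, which is why it is roughly three times the mean distance in the figures above.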