20 research outputs found

    Intelligent image-driven motion modelling for adaptive radiotherapy

    Internal anatomical motion (e.g. respiration-induced motion) confounds the precise delivery of radiation to target volumes during external beam radiotherapy. Precision is, however, critical to ensure prescribed radiation doses are delivered to the target (tumour) while surrounding healthy tissues are spared. If the motion itself can be accurately estimated, the treatment plan and/or delivery can be adapted to compensate. Current methods for motion estimation rely either on invasive implanted fiducial markers, on imperfect surrogate models based, for example, on external optical measurements or breathing traces, or on expensive and rare systems such as in-treatment MRI. These limitations of invasiveness, imperfect modelling, and high cost underscore the need for more efficient and accessible approaches to estimating motion during radiation treatment. This research, in contrast, aims to achieve accurate motion prediction using only relatively low-quality but almost universally available planar X-ray imaging. This is challenging since such images have poor soft-tissue contrast and provide only 2D projections through the anatomy. We hypothesise, however, that with strong priors in the form of learnt models of anatomical motion and image appearance, these images can provide sufficient information for accurate 3D motion reconstruction. We initially proposed an end-to-end graph neural network (GNN) architecture that learns mesh regression from a patient-specific template organ geometry and deep features extracted from kV images at arbitrary projection angles. However, this approach proved time-consuming to train. As an alternative, a second framework was proposed, based on a self-attention convolutional neural network (CNN) architecture. This model learns mappings between deep semantic, angle-dependent X-ray image features and the corresponding latent representations encoding deformed point clouds of the patient's organ geometry. Both frameworks underwent quantitative testing on synthetic respiratory motion scenarios and qualitative assessment on in-treatment images obtained over a full scan series for liver cancer patients. For the first framework, the overall mean prediction errors on the synthetic motion test datasets were 0.16±0.13 mm, 0.18±0.19 mm, 0.22±0.34 mm, and 0.12±0.11 mm, with mean peak prediction errors of 1.39 mm, 1.99 mm, 3.29 mm, and 1.16 mm. For the second framework, the overall mean prediction errors on the synthetic motion test datasets were 0.065±0.04 mm, 0.088±0.06 mm, 0.084±0.04 mm, and 0.059±0.04 mm, with mean peak prediction errors of 0.29 mm, 0.39 mm, 0.30 mm, and 0.25 mm.
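
    To make the second framework's idea concrete, the sketch below shows one plausible way to wire a self-attention CNN that maps a single planar kV image and its projection angle to a latent deformation code, which is then decoded into per-point displacements of a patient-specific template point cloud. This is an illustrative sketch, not the authors' implementation: the class name XRayToDeformation, all layer sizes, the (sin, cos) angle embedding, and the 2048-point template are assumptions made only for demonstration.

    import torch
    import torch.nn as nn


    class XRayToDeformation(nn.Module):
        """Map one kV projection image plus its angle to a 3D displacement field on a point cloud."""

        def __init__(self, latent_dim=64, n_points=2048):
            super().__init__()
            # CNN backbone: deep semantic features from the 2D projection image.
            self.backbone = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            )
            # Self-attention over spatial feature tokens plus one projection-angle token.
            self.angle_embed = nn.Linear(2, 128)  # embeds (sin, cos) of the projection angle
            self.attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)
            self.to_latent = nn.Linear(128, latent_dim)
            # Decoder: latent deformation code -> per-point 3D displacements.
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, n_points * 3),
            )
            self.n_points = n_points

        def forward(self, image, angle_rad):
            feats = self.backbone(image)                    # (B, 128, H', W')
            tokens = feats.flatten(2).transpose(1, 2)       # (B, H'*W', 128)
            ang = self.angle_embed(
                torch.stack([angle_rad.sin(), angle_rad.cos()], dim=-1)
            ).unsqueeze(1)                                  # (B, 1, 128)
            tokens = torch.cat([ang, tokens], dim=1)
            attended, _ = self.attn(tokens, tokens, tokens)
            latent = self.to_latent(attended[:, 0])         # summarise via the angle token
            return self.decoder(latent).view(-1, self.n_points, 3)


    # Usage: predict displacements to add to the patient-specific template point cloud.
    model = XRayToDeformation()
    image = torch.randn(1, 1, 128, 128)   # stand-in for a planar kV projection
    angle = torch.tensor([0.7])           # projection angle in radians
    displacements = model(image, angle)   # shape (1, 2048, 3)

    Trained against known deformations of the template (for example, synthetic respiratory motion as described in the abstract above), such a model could then be queried with in-treatment kV images acquired at arbitrary projection angles.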

    Applications of Deep Learning to Differential Equation Models in Oncology

    The integration of quantitative tools in biology and medicine has led to many groundbreaking advances in recent history, with many more promising discoveries on the horizon. Conventional mathematical models, particularly differential equation-based models, have had great success in various biological applications, including modelling bacterial growth, disease propagation, and tumour spread. However, these approaches are limited by their reliance on known parameter values, initial conditions, and boundary conditions, which can restrict their applicability. Furthermore, their forms are directly tied to mechanistic phenomena, which makes these models highly explainable but also requires a comprehensive understanding of the underlying dynamics before the system can be modelled. Machine learning models, on the other hand, typically require less prior knowledge of the system but need a significant amount of data for training. Although machine learning models can be more flexible, they tend to be black boxes and are therefore difficult to interpret. Hybrid models, which combine conventional and machine learning approaches, have the potential to achieve the best of both worlds: they can provide explainable outcomes while relying on minimal assumptions or data. An example of this is physics-informed neural networks, a novel deep learning approach that incorporates information from partial differential equations into the optimization of a neural network. This hybrid approach offers significant potential in contexts where differential equation models are known but data are scarce or challenging to work with. Precision oncology is one such field. This thesis employs hybrid conventional/machine learning models to address problems in cancer medicine, specifically aiming to advance personalized medicine approaches. It contains three projects. In the first, a hybrid approach is used to make patient-specific characterizations of brain tumours from medical imaging data. In the second, a hybrid approach is employed to create subject-specific projections of drug-carrying cancer nanoparticle accumulation and intratumoral interstitial fluid pressure. In the final project, a hybrid approach is used to optimize radiation therapy scheduling for tumours with heterogeneous cell populations and cancer stem cells. Overall, this thesis showcases several examples of how quantitative tools, particularly those combining conventional and machine learning approaches, can be employed to tackle challenges in oncology. It further supports the notion that the continued integration of quantitative tools in medicine is a key strategy in addressing problems and open questions in healthcare.
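
    As a concrete illustration of the physics-informed neural network idea described above, the sketch below trains a small network to fit sparse observations while penalising the residual of a 1D Fisher-KPP reaction-diffusion equation, u_t = D u_xx + rho u (1 - u), a form commonly used to model tumour growth. This is a minimal sketch under stated assumptions, not the thesis code: the network size, the values of D and rho, the collocation points, and the synthetic "observations" are all placeholders.

    import torch
    import torch.nn as nn

    D, RHO = 0.1, 1.0  # assumed diffusion and proliferation rates

    # Small fully connected network approximating u(x, t).
    net = nn.Sequential(
        nn.Linear(2, 32), nn.Tanh(),
        nn.Linear(32, 32), nn.Tanh(),
        nn.Linear(32, 1),
    )

    def pde_residual(x, t):
        """Residual u_t - D*u_xx - RHO*u*(1-u) at collocation points, computed via autograd."""
        x = x.requires_grad_(True)
        t = t.requires_grad_(True)
        u = net(torch.cat([x, t], dim=1))
        u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
        u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
        return u_t - D * u_xx - RHO * u * (1.0 - u)

    # Sparse "observations" (stand-ins for imaging-derived data) and PDE collocation points.
    x_obs, t_obs = torch.rand(50, 1), torch.rand(50, 1)
    u_obs = torch.exp(-10 * (x_obs - 0.5) ** 2) * (1 - t_obs)
    x_col, t_col = torch.rand(500, 1), torch.rand(500, 1)

    optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
    for step in range(1000):
        optimizer.zero_grad()
        data_loss = ((net(torch.cat([x_obs, t_obs], dim=1)) - u_obs) ** 2).mean()
        physics_loss = (pde_residual(x_col, t_col) ** 2).mean()
        loss = data_loss + physics_loss  # hybrid objective: fit the data and respect the PDE
        loss.backward()
        optimizer.step()

    In an inverse-problem setting, D and RHO could themselves be wrapped in nn.Parameter so that the same loop estimates them from data, one plausible route to the kind of patient-specific characterisation described above.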

    Preface


    Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries

    This two-volume set, LNCS 12962 and 12963, constitutes the thoroughly refereed proceedings of the 7th International MICCAI Brainlesion Workshop, BrainLes 2021, as well as the RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge, the Federated Tumor Segmentation (FeTS) Challenge, the Cross-Modality Domain Adaptation (CrossMoDA) Challenge, and the challenge on Quantification of Uncertainties in Biomedical Image Quantification (QUBIQ). These were held jointly with the 24th Medical Image Computing and Computer Assisted Intervention Conference, MICCAI 2021, in September 2021. Due to the COVID-19 pandemic, the conference was held virtually. The 91 revised papers presented in these volumes were selected from 151 submissions. This is an open access book.

    Book of abstracts
