    Inter-fractional Respiratory Motion Modelling from Abdominal Ultrasound: A Feasibility Study

    Motion management strategies are crucial for radiotherapy of mobile tumours in order to ensure proper target coverage, spare organs at risk, and prevent interplay effects. We present a feasibility study for an inter-fractional, patient-specific motion model targeted at active beam scanning proton therapy. The model is designed to predict dense lung motion information from 2D abdominal ultrasound images. In a pretreatment phase, simultaneous ultrasound and magnetic resonance imaging are used to build a regression model. During dose delivery, abdominal ultrasound imaging serves as a surrogate for lung motion prediction. We investigated the performance of the motion model on five volunteer datasets. In two cases, the ultrasound probe was replaced after the volunteer had stood up between two imaging sessions. The overall mean prediction error is 2.9 mm, and 3.4 mm after repositioning, and therefore within a clinically acceptable range. These results suggest that the ultrasound-based regression model is a promising approach for inter-fractional motion management in radiotherapy.
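    The abstract does not include code; as a rough illustration of the surrogate-regression idea, the sketch below maps per-frame ultrasound features to a dense lung motion field with a standard dimensionality-reduced regressor. All names, shapes, and the toy data are hypothetical placeholders, not the study's actual pipeline.

```python
# Hedged sketch: regress dense lung motion from ultrasound-derived
# surrogate features, as in a pretreatment US/MRI calibration phase.
# Shapes and the linear toy relation below are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_frames, n_feat, n_dvf = 200, 32, 3000
us_features = rng.normal(size=(n_frames, n_feat))             # 2D US features
lung_motion = us_features @ rng.normal(size=(n_feat, n_dvf))  # paired MRI motion

# Pretreatment phase: fit the regression model on paired data
model = make_pipeline(PCA(n_components=10), Ridge(alpha=1.0))
model.fit(us_features, lung_motion)

# Delivery phase: ultrasound alone predicts dense lung motion
new_us = rng.normal(size=(1, n_feat))
predicted_motion = model.predict(new_us)
print(predicted_motion.shape)  # (1, 3000)
```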

    Learning a Generative Motion Model from Image Sequences based on a Latent Motion Matrix

    We propose to learn a probabilistic motion model from a sequence of images for spatio-temporal registration. Our model encodes motion in a low-dimensional probabilistic space - the motion matrix - which enables various motion analysis tasks such as simulation and interpolation of realistic motion patterns, allowing for faster data acquisition and data augmentation. More precisely, the motion matrix allows the recovered motion to be transported from one subject to another, simulating, for example, a pathological motion in a healthy subject without the need for inter-subject registration. The method is based on a conditional latent variable model that is trained using amortized variational inference. This unsupervised generative model follows a novel multivariate Gaussian process prior and is applied within a temporal convolutional network, which leads to a diffeomorphic motion model. Temporal consistency and generalizability are further improved by applying a temporal dropout training scheme. Applied to cardiac cine-MRI sequences, we show improved registration accuracy and spatio-temporally smoother deformations compared to three state-of-the-art registration algorithms. In addition, we demonstrate the model's applicability for motion analysis, simulation, and super-resolution by an improved motion reconstruction from sequences with missing frames compared to linear and cubic interpolation.
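    To make the "motion matrix" idea concrete, the toy sketch below samples a temporally smooth latent trajectory from a Gaussian process prior over time, the structural ingredient that yields smooth, interpolatable motion. It reproduces only the prior, not the trained conditional model, and all dimensions are invented.

```python
# Toy latent "motion matrix": each latent dimension follows a GP over
# time, so sampled trajectories (and decoded deformations) vary smoothly.
import numpy as np

T, d = 25, 8                                  # frames, latent dimensions
t = np.linspace(0.0, 1.0, T)

# Squared-exponential kernel over time (+ jitter for numerical stability)
ls = 0.15
K = np.exp(-0.5 * (t[:, None] - t[None, :])**2 / ls**2) + 1e-6 * np.eye(T)

L = np.linalg.cholesky(K)
z = L @ np.random.default_rng(0).normal(size=(T, d))  # motion matrix (T x d)

# Interpolating a missing frame in latent space stays on the learnt
# motion manifold (a crude stand-in for GP-conditioned inference):
z_missing = 0.5 * (z[9] + z[11])
print(z.shape, z_missing.shape)
```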

    A motion model-guided 4D dose reconstruction for pencil beam scanned proton therapy.

    Objective. 4D dose reconstruction in proton therapy with pencil beam scanning (PBS) typically relies on a single pre-treatment 4DCT (p4DCT). However, breathing motion during the fractionated treatment can vary considerably in both amplitude and frequency. We present a novel 4D dose reconstruction method combining delivery log files with patient-specific motion models, to account for the dosimetric effect of intra- and inter-fractional breathing variability. Approach. Correlation between an external breathing surrogate and anatomical deformations of the p4DCT is established using principal component analysis. Using motion trajectories of a surface marker acquired during dose delivery by an optical tracking system, deformable motion fields are retrospectively reconstructed and used to generate time-resolved synthetic 4DCTs ('5DCTs') by warping a reference CT. For three abdominal/thoracic patients, treated with respiratory gating and rescanning, example fraction doses were reconstructed using the resulting 5DCTs and delivery log files. The motion model was validated beforehand using leave-one-out cross-validation (LOOCV) with subsequent 4D dose evaluations. Moreover, besides fractional motion, fractional anatomical changes were incorporated as proof of concept. Main results. For motion model validation, comparing 4D dose distributions for the original 4DCT with the LOOCV prediction resulted in 3%/3 mm gamma pass rates above 96.2%. Prospective gating simulations on the p4DCT can overestimate the target dose coverage V95% by up to 2.1% compared to 4D dose reconstruction based on observed surrogate trajectories. Nevertheless, for the studied clinical cases treated with respiratory gating and rescanning, an acceptable target coverage was maintained, with V95% remaining above 98.8% for all studied fractions. For these gated treatments, larger dosimetric differences occurred due to CT changes than due to breathing variations. Significance. To gain a better estimate of the delivered dose, a retrospective 4D dose reconstruction workflow based on motion data acquired during PBS proton treatments was implemented and validated, thus considering both intra- and inter-fractional motion and anatomy changes.
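    As a hedged sketch of the correlation model behind the '5DCT' generation, the snippet below builds a PCA basis from the deformation fields of the p4DCT phases and linearly maps a two-dimensional surrogate (e.g. marker amplitude and its derivative) to PCA scores. The data are synthetic stand-ins, not the published model.

```python
# Sketch: PCA-based surrogate-to-deformation model (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

n_phases, n_dvf = 10, 5000
dvfs = rng.normal(size=(n_phases, n_dvf))     # flattened p4DCT deformations

# PCA via SVD on mean-centred deformation fields
mean_dvf = dvfs.mean(axis=0)
U, S, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
n_comp = 2
scores = U[:, :n_comp] * S[:n_comp]           # per-phase PCA scores

# Correlate the surrogate (e.g. marker amplitude + derivative) with scores
surrogate = rng.normal(size=(n_phases, 2))
reg = LinearRegression().fit(surrogate, scores)

# Delivery: reconstruct a deformation for any observed surrogate sample;
# warping the reference CT with it gives one time point of a '5DCT'.
s_new = rng.normal(size=(1, 2))
dvf_new = mean_dvf + reg.predict(s_new) @ Vt[:n_comp]
print(dvf_new.shape)  # (1, 5000)
```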

    Holographic microscopy of complex fluids

    This thesis explores the application of in-line digital holographic microscopy (DHM) and deep learning for the non-invasive study of cells, micro-objects, and pollen lipids in complex environments. First, I explore in-line DHM combined with model-based analysis as a potential tool for the study and characterization of pollenkitt, a natural bio-adhesive. The adhesive nature of pollenkitt allows it to attach to various floral and insect surfaces, hence favouring pollination. We use a model-based inference technique that relies on Lorenz-Mie scattering theory to non-invasively estimate minute changes in the refractive index of pollenkitt particles due to variations in local temperature and pollen ageing. Second, I show how advances in deep learning can be used to improve hologram analysis and expand the applications of holography. A low signal-to-noise ratio (SNR) and 2π phase ambiguities often make it difficult to accurately estimate cellular properties from noisy holograms. I show that conditional generative adversarial networks (cGANs), a type of deep learning model, can successfully unwrap noisy wrapped phase maps and retrieve continuous phase values with high accuracy. Our method outperforms current gold standards for phase unwrapping and is robust to decreasing SNR. We applied this approach to noisy experimental holograms of human leukemia cells and simulated noisy holograms of test objects, successfully extracting crucial quantitative features. One of the most challenging bottlenecks in imaging is imaging through scattering media: acquired digital holograms of objects of interest can be adversely affected by undesired scattering, making downstream analysis impossible. I investigated the potential of cGANs to alleviate the impact of adverse scattering. We conducted light scattering simulations to show that cGANs can be trained with a small dataset and effectively establish a statistical mapping between the input and output images. This allows for quantitative feature extraction for studying colloidal Brownian dynamics, localising diffraction-limited impulses, and object retrieval. I also derived an analytical expression for the first- and second-order speckle intensity autocorrelation functions based on scalar diffraction theory. I found that, despite randomisation, these autocorrelation functions are related to each other and do not depend on the physical properties of the scattering layer, forming the basis of our ability to retrieve objects.
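    The 2π ambiguity mentioned above is easy to demonstrate. The snippet below wraps a smooth synthetic phase map and recovers it with a classical unwrapper, i.e. the kind of baseline the cGAN approach is compared against, not the thesis's own method.

```python
# The 2-pi ambiguity: smooth phase is only measured modulo 2*pi.
import numpy as np
from skimage.restoration import unwrap_phase

# Smooth synthetic phase exceeding 2*pi (cf. a cell's optical thickness)
y, x = np.mgrid[0:128, 0:128]
true_phase = 0.002 * ((x - 64)**2 + (y - 64)**2)

wrapped = np.angle(np.exp(1j * true_phase))   # folded into (-pi, pi]
unwrapped = unwrap_phase(wrapped)             # continuous estimate

# Match up to a global 2*pi*k offset, then check the residual error
offset = np.round((true_phase - unwrapped).mean() / (2 * np.pi)) * 2 * np.pi
print(np.abs(unwrapped + offset - true_phase).max())  # ~0 without noise
```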

    Artificial Intelligence and Deep Learning for Advancing PET Image Reconstruction: State-of-the-Art and Future Directions

    Positron emission tomography (PET) is vital for diagnosing diseases and monitoring treatments. Conventional image reconstruction (IR) techniques like filtered backprojection and iterative algorithms are powerful but face limitations. PET IR can be seen as an image-to-image translation task. Artificial intelligence (AI) and deep learning (DL) using multilayer neural networks enable a new approach to this computer vision task. This review aims to provide mutual understanding for nuclear medicine professionals and AI researchers. We outline the fundamentals of PET imaging as well as the state of the art in AI-based PET IR with its typical algorithms and DL architectures. Advances improve resolution and contrast recovery, reduce noise, and remove artifacts via inferred attenuation and scatter correction, sinogram inpainting, denoising, and super-resolution refinement. Kernel priors support list-mode reconstruction, motion correction, and parametric imaging. Hybrid approaches combine AI with conventional IR. Challenges of AI-assisted PET IR include the availability of training data, cross-scanner compatibility, and the risk of hallucinated lesions. The need for rigorous evaluations, including quantitative phantom validation and visual comparison of diagnostic accuracy against conventional IR, is highlighted along with regulatory issues. The first approved AI-based applications are clinically available, and their impact is foreseeable. Emerging trends, such as the integration of multimodal imaging and the use of data from previous imaging visits, highlight future potential. Continued collaborative research promises significant improvements in image quality, quantitative accuracy, and diagnostic performance, ultimately leading to the integration of AI-based IR into routine PET imaging protocols.
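    For readers on the AI side, the conventional baseline that learned PET IR augments can be summarised in a few lines: forward-project a toy emission object, add Poisson count noise, and reconstruct with filtered backprojection. This is a generic illustration, not any specific method from the review.

```python
# Conventional FBP baseline (toy data); DL methods replace or augment
# steps of this pipeline (denoising, inpainting, learned priors).
import numpy as np
from skimage.transform import radon, iradon

rng = np.random.default_rng(2)

image = np.zeros((128, 128))                  # toy emission object
image[48:80, 40:90] = 1.0
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sinogram = radon(image, theta=theta)

# Poisson noise models PET's photon-count statistics
noisy_sinogram = rng.poisson(sinogram.clip(0) * 20.0) / 20.0

fbp = iradon(noisy_sinogram, theta=theta, filter_name='ramp')
print(fbp.shape)  # (128, 128)
```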

    Understanding and overcoming head motion in ultra-high field Magnetic Resonance Imaging with parallel radio-frequency transmission

    Ultra-high field (UHF) magnetic resonance imaging (MRI) offers a higher signal-to-noise ratio than most clinical systems, but clinical uptake of UHF MRI remains low, partly due to artificial signal contrast variations and a higher risk of undesirable tissue heating at UHF. Parallel RF transmission (pTx) is capable of overcoming both issues, but the implications of patient motion for signal and safety (i.e., specific absorption rate; SAR) when pTx is used are currently not well understood. The work in this thesis aims to better characterise these effects, and presents novel approaches to help overcome them. The study chapters present investigations into, firstly, the effects of motion on signal quality when different RF pulse types are used (Chapter 5); secondly, the effects of pTx coil dimensions on motion-related SAR changes (Chapter 6); and thirdly, the inter-subject variability of motion-related SAR changes in pTx (Chapter 7). Following this, two methods are outlined which aim to reduce the sensitivity of the signal to head motion in pTx. These comprise, firstly, a method which uses composite B1+ maps for pTx pulse design (Chapter 8), and secondly, a deep learning framework which can estimate B1+ maps following head motion (Chapter 9). Finally, the generalisability of the latter approach across coil models is explored (Chapter 10). Simulated data were used for all of the work presented. Electromagnetic field distributions were generated using Sim4Life (Zurich MedTech, Zurich, Switzerland). RF pulse design and evaluation were conducted in MATLAB (The MathWorks Inc., Natick, MA). Findings indicate that systematic differences in signal and SAR behaviour arise when different RF pulse types and different pTx coils are used, respectively. On the other hand, differences were observed in the SAR sensitivity to motion across different virtual body models, but these were not clearly systematic. These findings indicate that new approaches are needed in order to guarantee good image quality and safety for pTx in cases of subject motion. The two proposed methods reduced the impact of motion on simulated signal profiles.
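    As a hedged sketch of the static shimming step that underlies pTx pulse design, the snippet below solves a regularised least-squares problem for complex channel weights so that the combined field approximates a uniform target. The field matrix is synthetic; designing on composite B1+ maps (as in Chapter 8) would amount to stacking such matrices from several head positions.

```python
# Regularised RF shimming sketch: find complex channel weights w so the
# combined B1+ field A @ w is close to a uniform target (synthetic A).
import numpy as np

rng = np.random.default_rng(3)

n_vox, n_ch = 500, 8
A = rng.normal(size=(n_vox, n_ch)) + 1j * rng.normal(size=(n_vox, n_ch))
target = np.ones(n_vox, dtype=complex)        # uniform flip-angle target

# Tikhonov regularisation limits RF power, a crude proxy for SAR control
lam = 0.1
w = np.linalg.solve(A.conj().T @ A + lam * np.eye(n_ch), A.conj().T @ target)

achieved = A @ w
print(np.std(np.abs(achieved)) / np.mean(np.abs(achieved)))  # inhomogeneity
```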

    Automated quality control by application of machine learning techniques for quantitative liver MRI

    Quantitative magnetic resonance imaging (qMRI) and multi-parametric MRI are being increasingly used to diagnose and monitor liver diseases such as non-alcoholic fatty liver disease (NAFLD). These acquisitions are considerably more complicated than traditional T1-weighted and T2-weighted MRI scans and are also more prone to image quality issues and artefacts. In order for the output of the qMRI scans to be usable, they must undergo a rigorous and often lengthy quality control (QC). This manual QC is subjective and prone to human error. Additionally, the development of new qMRI techniques leads to new quality issues. This thesis focuses on the development and implementation of automated QC processes for liver qMRI scans that are, where possible, tag-free, so that the process can be adapted to different imaging techniques. These automated QC processes were implemented using a variety of machine learning (ML) and deep learning (DL) approaches. These methods, developed on T1 mapping in UK Biobank, were designed to output metrics from the MRI scans that could be used to identify a specific quality issue, as in chapter 3, or to give a more general indication of the image quality, as in chapter 4. Furthermore, it was hypothesised that the introduction of associated meta-data, such as patient factors and scanning parameters, into these deep learning models would increase overall performance. This was explored in chapter 5. Finally, in order to assess the utility of our developed algorithms beyond T1 mapping in UK Biobank, we tested them in two settings. Pilot study one assessed the utility of the model for T1 mapping in a separate study (CoverScan). Pilot study two assessed the utility of the model for a different qMRI acquisition: proton density fat fraction (PDFF) acquisitions from UK Biobank.
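    A minimal sketch of the tag-free QC idea is given below: each scan is reduced to a few acquisition-agnostic quality metrics, which then feed a standard classifier. The features, labels, and data are synthetic placeholders, not the thesis's actual metrics.

```python
# Sketch: hand-crafted quality metrics + classifier for automated QC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def quality_features(img):
    """Simple acquisition-agnostic metrics (illustrative only)."""
    fg = img[img > img.mean()]
    return [img.mean(), img.std(),
            fg.mean() / (img.std() + 1e-9),        # crude SNR proxy
            np.abs(np.diff(img, axis=0)).mean()]   # edge/artefact proxy

scans = rng.normal(loc=100.0, scale=20.0, size=(200, 64, 64))
X = np.array([quality_features(s) for s in scans])
y = rng.integers(0, 2, size=200)                   # pass/fail QC labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())     # ~0.5 on random labels
```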

    Intelligent image-driven motion modelling for adaptive radiotherapy

    Internal anatomical motion (e.g. respiration-induced motion) confounds the precise delivery of radiation to target volumes during external beam radiotherapy. Precision is, however, critical to ensure prescribed radiation doses are delivered to the target (tumour) while surrounding healthy tissues are preserved from damage. If the motion itself can be accurately estimated, the treatment plan and/or delivery can be adapted to compensate. Current methods for motion estimation rely either on invasive implanted fiducial markers, on imperfect surrogate models based, for example, on external optical measurements or breathing traces, or on expensive and rare systems like in-treatment MRI. These methods have limitations such as invasiveness, imperfect modelling, or high costs, underscoring the need for more efficient and accessible approaches to accurately estimate motion during radiation treatment. This research, in contrast, aims to achieve accurate motion prediction using only relatively low-quality, but almost universally available, planar X-ray imaging. This is challenging since such images have poor soft-tissue contrast and provide only 2D projections through the anatomy. Our hypothesis, however, is that with strong priors in the form of learnt models for anatomical motion and image appearance, these images can provide sufficient information for accurate 3D motion reconstruction. We initially proposed an end-to-end graph neural network (GNN) architecture aimed at learning mesh regression using a patient-specific template organ geometry and deep features extracted from kV images at arbitrary projection angles. However, this approach proved time-consuming to train. As an alternative, a second framework was proposed, based on a self-attention convolutional neural network (CNN) architecture. This model focuses on learning mappings between deep semantic angle-dependent X-ray image features and the corresponding encoded deformation latent representations of deformed point clouds of the patient's organ geometry. Both frameworks underwent quantitative testing on synthetic respiratory motion scenarios and qualitative assessment on in-treatment images obtained over a full scan series for liver cancer patients. For the first framework, the overall mean prediction errors on synthetic motion test datasets were 0.16±0.13 mm, 0.18±0.19 mm, 0.22±0.34 mm, and 0.12±0.11 mm, with mean peak prediction errors of 1.39 mm, 1.99 mm, 3.29 mm, and 1.16 mm. For the second framework, the overall mean prediction errors on synthetic motion test datasets were 0.065±0.04 mm, 0.088±0.06 mm, 0.084±0.04 mm, and 0.059±0.04 mm, with mean peak prediction errors of 0.29 mm, 0.39 mm, 0.30 mm, and 0.25 mm.
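    To illustrate the second framework's structure, the sketch below wires a small CNN with a self-attention layer that maps a 2D kV projection to a latent code; in training, such a code would be regressed against the encoded deformation of the organ point cloud. The architecture, sizes, and names are hypothetical, not the published model.

```python
# Hypothetical self-attention CNN: kV projection -> deformation latent.
import torch
import torch.nn as nn

class AttnEncoder(nn.Module):
    def __init__(self, latent_dim=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.attn = nn.MultiheadAttention(embed_dim=32, num_heads=4,
                                          batch_first=True)
        self.head = nn.Linear(32, latent_dim)

    def forward(self, x):                       # x: (B, 1, H, W)
        f = self.conv(x)                        # (B, 32, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)   # (B, N, 32)
        a, _ = self.attn(tokens, tokens, tokens)
        return self.head(a.mean(dim=1))         # (B, latent_dim)

enc = AttnEncoder()
kv_image = torch.randn(2, 1, 64, 64)            # batch of kV projections
z = enc(kv_image)                               # deformation latent codes
print(z.shape)                                  # torch.Size([2, 16])

# Training would regress z against encodings of the ground-truth deformed
# point clouds, e.g. loss = nn.MSELoss()(z, z_target).
```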