
    Multi-modality cardiac image computing: a survey

    Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It combines complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, covering the computing methods, validation strategies, related clinical workflows and future perspectives. For the computing methodologies, we focus in particular on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have wide potential applicability in the clinic, for example in trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. Work also remains in defining how the well-developed techniques fit into clinical workflows and how much additional, relevant information they introduce. These problems are likely to remain an active field of research, with the questions still to be answered in the future.
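    Of the three computing tasks the survey emphasises, registration is the one that most directly addresses the inter-modality misalignment mentioned above. The snippet below is only a minimal sketch (not taken from the paper) of rigid multi-modal registration using SimpleITK's Mattes mutual-information metric; the file names and parameter values are placeholders.

        import SimpleITK as sitk

        # Load the two modalities (placeholder file names).
        fixed = sitk.ReadImage("cardiac_ct.nii.gz", sitk.sitkFloat32)
        moving = sitk.ReadImage("cardiac_mr.nii.gz", sitk.sitkFloat32)

        reg = sitk.ImageRegistrationMethod()
        # Mutual information is the usual similarity metric across modalities.
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
        reg.SetMetricSamplingStrategy(reg.RANDOM)
        reg.SetMetricSamplingPercentage(0.1)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
        reg.SetOptimizerScalesFromPhysicalShift()

        # Initialise with a centred rigid (6-DOF) transform.
        initial = sitk.CenteredTransformInitializer(
            fixed, moving, sitk.Euler3DTransform(),
            sitk.CenteredTransformInitializerFilter.GEOMETRY)
        reg.SetInitialTransform(initial, inPlace=False)

        transform = reg.Execute(fixed, moving)
        # Resample the moving modality into the fixed modality's space.
        aligned = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)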

    Dynamic Analysis of X-ray Angiography for Image-Guided Coronary Interventions

    Percutaneous coronary intervention (PCI) is a minimally-invasive procedure for treating patients with coronary artery disease. PCI is typically performed with image guidance using X-ray angiograms (XA) in which coronary arter

    Optical flow-based vascular respiratory motion compensation

    This paper develops a new vascular respiratory motion compensation algorithm, Motion-Related Compensation (MRC), which compensates for vascular respiratory motion by exploiting the correlation between invisible vascular motion and visible non-vascular motion. Robot-assisted vascular intervention can significantly reduce the radiation exposure of surgeons. In robot-assisted image-guided intervention, blood vessels are constantly moving and deforming due to respiration, and they are invisible in the X-ray images unless contrast agents are injected. The vascular respiratory motion compensation technique predicts 2D vascular roadmaps in live X-ray images. When blood vessels are visible after contrast agent injection, vascular respiratory motion compensation is conducted based on the sparse Lucas-Kanade feature tracker. An MRC model is trained to learn the correlation between vascular and non-vascular motions. During the intervention, the positions of the invisible blood vessels are predicted from the visible tissues using the trained MRC model. Moreover, a Gaussian-based outlier filter is adopted for refinement. Experiments on in-vivo data sets show that the proposed method can perform vascular respiratory motion compensation in 0.032 s, with an average error of 1.086 mm. Our real-time and accurate vascular respiratory motion compensation approach contributes to modern vascular intervention and surgical robots. Comment: This manuscript has been accepted by IEEE Robotics and Automation Letters.
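    The visible-tissue motion that drives the prediction is tracked with a sparse Lucas-Kanade optical-flow tracker and then cleaned with a Gaussian-style outlier filter. The following is only an illustrative sketch of those two ingredients using OpenCV, not the authors' MRC model; the function name and parameter values are assumptions.

        import cv2
        import numpy as np

        def track_tissue_motion(prev_frame, next_frame):
            """prev_frame, next_frame: consecutive fluoroscopy frames as 8-bit
            grayscale arrays. Returns matched point pairs after outlier removal."""
            # Detect salient (non-vascular) features in the previous frame.
            pts = cv2.goodFeaturesToTrack(prev_frame, maxCorners=200,
                                          qualityLevel=0.01, minDistance=7)
            # Sparse Lucas-Kanade optical flow into the next frame.
            new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
                prev_frame, next_frame, pts, None, winSize=(21, 21), maxLevel=3)
            ok = status.flatten() == 1
            old = pts.reshape(-1, 2)[ok]
            new = new_pts.reshape(-1, 2)[ok]
            # Gaussian-style outlier filter: reject displacements more than
            # three standard deviations from the mean displacement.
            d = new - old
            mu, sigma = d.mean(axis=0), d.std(axis=0) + 1e-6
            keep = np.all(np.abs(d - mu) < 3 * sigma, axis=1)
            return old[keep], new[keep]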

    Fast catheter segmentation and tracking based on x-ray fluoroscopic and echocardiographic modalities for catheter-based cardiac minimally invasive interventions

    X-ray fluoroscopy and echocardiography (ultrasound, US) are two imaging modalities that are widely used in cardiac catheterization. For these modalities, a fast, accurate and stable algorithm for the detection and tracking of catheters is required to allow clinicians to observe the catheter location in real time. Currently, X-ray fluoroscopy is routinely used as the standard modality in catheter ablation interventions. However, it lacks the ability to visualize soft tissue and uses harmful radiation. US does not have these limitations but often contains acoustic artifacts and has a small field of view, which makes the detection and tracking of the catheter in US very challenging. The first contribution in this thesis is a framework which combines a Kalman filter and discrete optimization for multiple catheter segmentation and tracking in X-ray images. The Kalman filter is used to identify the whole catheter from a single point detected on the catheter in the first frame of a sequence of X-ray images. An energy-based formulation is developed that can be used to track the catheters in the following frames, and we also propose a discrete optimization for minimizing the energy function in each frame of the X-ray image sequence. Our approach is robust to tangential motion of the catheter and combines tubular and salient feature measurements into a single robust and efficient framework. The second contribution is an algorithm for catheter extraction in 3D ultrasound images based on (a) the registration between the X-ray and ultrasound images and (b) the segmentation of the catheter in the X-ray images. The search space for catheter extraction in the ultrasound images is constrained to lie on or close to a curved surface in the ultrasound volume; this curved surface corresponds to the back-projection of the extracted catheter from the X-ray image into the ultrasound volume. Blob-like features are detected in the US images and organized in a graphical model, and the extracted catheter is modelled as the optimal path in this graphical model. Both contributions allow the use of ultrasound imaging for the improved visualization of soft tissue. However, X-ray imaging is still required for each ultrasound frame, and the amount of X-ray exposure has not been reduced. The final contribution in this thesis is a system that can track the catheter in ultrasound volumes automatically without the need for X-ray imaging during the tracking. Instead, X-ray imaging is only required for system initialization and for recovery from tracking failures. This allows a significant reduction in the amount of X-ray exposure for patients and clinicians. Open Access
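    The first contribution pairs per-frame catheter detections with a Kalman filter before the energy-based discrete optimization. As an illustration only, not the thesis implementation, a minimal constant-velocity Kalman filter for a single tracked 2D catheter point might look like the following; the noise covariances are arbitrary tuning values.

        import numpy as np

        # State: [x, y, vx, vy]; measurement: a detected 2D catheter point [x, y].
        dt = 1.0
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1,  0],
                      [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
        H = np.array([[1, 0, 0, 0],
                      [0, 1, 0, 0]], dtype=float)    # only position is observed
        Q = np.eye(4) * 1e-2                         # process noise (tuning value)
        R = np.eye(2) * 1.0                          # measurement noise (tuning value)

        def kalman_step(x, P, z):
            """One predict/update cycle given the previous state x, covariance P,
            and the catheter point z detected in the current X-ray frame."""
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            innovation = z - H @ x_pred
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ innovation
            P_new = (np.eye(4) - K @ H) @ P_pred
            return x_new, P_new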

    Simulation of a new respiratory phase sorting method for 4D-imaging using optical surface information towards precision radiotherapy

    Background: Respiratory signal detection is critical for 4-dimensional (4D) imaging. This study proposes and evaluates a novel phase sorting method using optical surface imaging (OSI), aiming to improve the precision of radiotherapy. Method: Based on the 4D Extended Cardiac-Torso (XCAT) digital phantom, OSI in point cloud format was generated from the body segmentation, and image projections were simulated using the geometries of a Varian 4D kV cone-beam CT (CBCT). Respiratory signals were extracted from the segmented diaphragm image (reference method) and from the OSI, where a Gaussian Mixture Model and Principal Component Analysis (PCA) were used for image registration and dimension reduction, respectively. Breathing frequencies were compared using the Fast Fourier Transform. Consistency of 4D CBCT images reconstructed using the Maximum Likelihood Expectation Maximization algorithm was also evaluated quantitatively, where higher consistency is indicated by a lower Root-Mean-Square Error (RMSE), a Structural Similarity Index (SSIM) closer to 1, and a larger Peak Signal-to-Noise Ratio (PSNR). Results: High consistency of breathing frequencies was observed between the diaphragm-based (0.232 Hz) and OSI-based (0.251 Hz) signals, with a slight discrepancy of 0.019 Hz. Using the end-of-expiration (EOE) and end-of-inspiration (EOI) phases as examples, the mean ± 1 SD values over the 80 transverse, 100 coronal and 120 sagittal planes were 0.967, 0.972, 0.974 (SSIM); 1.657 ± 0.368, 1.464 ± 0.104, 1.479 ± 0.297 (RMSE); and 40.501 ± 1.737, 41.532 ± 1.464, 41.553 ± 1.910 (PSNR) for EOE; and 0.969, 0.973, 0.973 (SSIM); 1.686 ± 0.278, 1.422 ± 0.089, 1.489 ± 0.238 (RMSE); and 40.535 ± 1.539, 41.605 ± 0.534, 41.401 ± 1.496 (PSNR) for EOI. Conclusions: This work proposed and evaluated a novel respiratory phase sorting approach for 4D imaging using optical surface signals, which can potentially be applied to precision radiotherapy. Its potential advantages are that it is non-ionizing, non-invasive, non-contact, and more compatible with various anatomical regions and treatment/imaging systems.
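    As a rough illustration of the surrogate-signal extraction described above, not the study's implementation, PCA can reduce each surface point cloud to a one-dimensional respiratory signal, and an FFT then gives the dominant breathing frequency. The sketch below assumes the surface points are already in frame-to-frame correspondence, whereas the study uses Gaussian Mixture Model registration to establish it.

        import numpy as np

        def breathing_frequency(surface_frames, fps):
            """surface_frames: (T, N, 3) array of per-frame surface point clouds
            with point correspondence across the T frames; fps: frames per second.
            Returns the dominant breathing frequency in Hz."""
            T = surface_frames.shape[0]
            X = surface_frames.reshape(T, -1)
            X = X - X.mean(axis=0, keepdims=True)
            # First principal component of the frame-wise surface shapes serves
            # as the 1D respiratory surrogate signal.
            _, _, Vt = np.linalg.svd(X, full_matrices=False)
            signal = X @ Vt[0]
            # Dominant non-DC frequency of the surrogate signal.
            spectrum = np.abs(np.fft.rfft(signal))
            freqs = np.fft.rfftfreq(T, d=1.0 / fps)
            return freqs[1:][np.argmax(spectrum[1:])]

    In a phase-sorting workflow, the same surrogate signal would then be binned into respiratory phases before reconstruction.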

    Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review

    Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Recently, artificial intelligence (AI) has demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking in radiotherapy and provides a literature summary on the topic. We also discuss the limitations of these algorithms and propose potential improvements. Comment: 36 pages, 5 figures, 4 tables.

    Intelligent image-driven motion modelling for adaptive radiotherapy

    Internal anatomical motion (e.g. respiration-induced motion) confounds the precise delivery of radiation to target volumes during external beam radiotherapy. Precision is, however, critical to ensure prescribed radiation doses are delivered to the target (tumour) while surrounding healthy tissues are spared from damage. If the motion itself can be accurately estimated, the treatment plan and/or delivery can be adapted to compensate. Current methods for motion estimation rely either on invasive implanted fiducial markers, on imperfect surrogate models based, for example, on external optical measurements or breathing traces, or on expensive and rare systems such as in-treatment MRI. These methods have limitations such as invasiveness, imperfect modelling, or high costs, underscoring the need for more efficient and accessible approaches to accurately estimate motion during radiation treatment. This research, in contrast, aims to achieve accurate motion prediction using only relatively low-quality, but almost universally available, planar X-ray imaging. This is challenging since such images have poor soft-tissue contrast and provide only 2D projections through the anatomy. However, our hypothesis is that, with strong priors in the form of learnt models of anatomical motion and image appearance, these images can provide sufficient information for accurate 3D motion reconstruction. We initially proposed an end-to-end graph neural network (GNN) architecture aimed at learning mesh regression using a patient-specific template organ geometry and deep features extracted from kV images at arbitrary projection angles. However, this approach proved time-consuming during training. As an alternative, a second framework was proposed, based on a self-attention convolutional neural network (CNN) architecture. This model focuses on learning mappings between deep semantic, angle-dependent X-ray image features and the corresponding encoded deformation latent representations of deformed point clouds of the patient's organ geometry. Both frameworks underwent quantitative testing on synthetic respiratory motion scenarios and qualitative assessment on in-treatment images obtained over a full scan series for liver cancer patients. For the first framework, the overall mean prediction errors on the synthetic motion test datasets were 0.16 ± 0.13 mm, 0.18 ± 0.19 mm, 0.22 ± 0.34 mm, and 0.12 ± 0.11 mm, with mean peak prediction errors of 1.39 mm, 1.99 mm, 3.29 mm, and 1.16 mm. For the second framework, the overall mean prediction errors on the synthetic motion test datasets were 0.065 ± 0.04 mm, 0.088 ± 0.06 mm, 0.084 ± 0.04 mm, and 0.059 ± 0.04 mm, with mean peak prediction errors of 0.29 mm, 0.39 mm, 0.30 mm, and 0.25 mm.
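    To make the second framework's idea concrete, a heavily simplified sketch is given below: a small CNN encodes a single kV projection into a low-dimensional deformation code, which a decoder expands into per-vertex displacements of a template organ point cloud. This is an assumption-laden illustration in PyTorch, not the thesis architecture; it omits the self-attention layers, the projection-angle conditioning, and the learnt deformation latent space, and the layer sizes and names are made up.

        import torch
        import torch.nn as nn

        class ProjectionToDeformation(nn.Module):
            """Maps one kV projection image to per-vertex displacements of a
            template organ point cloud via a latent deformation code."""
            def __init__(self, latent_dim=32, n_points=2048):
                super().__init__()
                self.n_points = n_points
                self.encoder = nn.Sequential(
                    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                    nn.Linear(64, latent_dim),
                )
                self.decoder = nn.Sequential(
                    nn.Linear(latent_dim, 256), nn.ReLU(),
                    nn.Linear(256, n_points * 3),
                )

            def forward(self, xray):               # xray: (B, 1, H, W)
                z = self.encoder(xray)             # latent deformation code (B, latent_dim)
                disp = self.decoder(z)             # flattened displacements (B, n_points*3)
                return disp.view(-1, self.n_points, 3)

        # Example: one 256x256 projection -> displacements for 2048 template points.
        model = ProjectionToDeformation()
        out = model(torch.randn(1, 1, 256, 256))   # shape (1, 2048, 3)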