729 research outputs found

    Artificial Intelligence-based Motion Tracking in Cancer Radiotherapy: A Review

    Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Recently, artificial intelligence (AI) has demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking in radiotherapy and provides a literature summary on the topic. We also discuss the limitations of these algorithms and propose potential improvements.
    Comment: 36 pages, 5 figures, 4 tables

    Dynamic Analysis of X-ray Angiography for Image-Guided Coronary Interventions

    Percutaneous coronary intervention (PCI) is a minimally-invasive procedure for treating patients with coronary artery disease. PCI is typically performed with image guidance using X-ray angiograms (XA) in which coronary arter

    3D shape instantiation for intra-operative navigation from a single 2D projection

    Unlike traditional open surgery where surgeons can see the operation area clearly, in robot-assisted Minimally Invasive Surgery (MIS), a surgeon’s view of the region of interest is usually limited. Currently, 2D images from fluoroscopy, Magnetic Resonance Imaging (MRI), endoscopy or ultrasound are used for intra-operative guidance as real-time 3D volumetric acquisition is not always possible due to the acquisition speed or exposure constraints. 3D reconstruction, however, is key to navigation in complex in vivo geometries and can help resolve this issue. Novel 3D shape instantiation schemes are developed in this thesis, which can reconstruct the high-resolution 3D shape of a target from limited 2D views, especially a single 2D projection or slice. To achieve a complete and automatic 3D shape instantiation pipeline, segmentation schemes based on deep learning are also investigated. These include normalization schemes for training U-Nets and network architecture design of Atrous Convolutional Neural Networks (ACNNs). For U-Net normalization, four popular normalization methods are reviewed, then Instance-Layer Normalization (ILN) is proposed. It uses a sigmoid function to linearly weight the feature map after instance normalization and layer normalization, and cascades group normalization after the weighted feature map. Detailed validation results demonstrate the potential practical advantages of the proposed ILN for effective and robust segmentation of different anatomies. For network architecture design in training Deep Convolutional Neural Networks (DCNNs), the newly proposed ACNN is compared to the traditional U-Net, where max-pooling and deconvolutional layers are essential. Only convolutional layers are used in the proposed ACNN with different atrous rates, and it has been shown that the method is able to provide a fully-covered receptive field with a minimum number of atrous convolutional layers.
ACNN enhances the robustness and generalizability of the analysis scheme by cascading multiple atrous blocks. Validation results have shown the proposed method achieves comparable results to the U-Net in terms of medical image segmentation, whilst reducing the trainable parameters, thus improving the convergence and real-time instantiation speed. For 3D shape instantiation of soft and deforming organs during MIS, Sparse Principal Component Analysis (SPCA) has been used to analyse a 3D Statistical Shape Model (SSM) and to determine the most informative scan plane. Synchronized 2D images are then scanned at the most informative scan plane and are expressed in a 2D SSM. Kernel Partial Least Squares Regression (KPLSR) has been applied to learn the relationship between the 2D and 3D SSM. It has been shown that the KPLSR-learned model developed in this thesis is able to predict the intra-operative 3D target shape from a single 2D projection or slice, thus permitting real-time 3D navigation. Validation results have shown the intrinsic accuracy achieved and the potential clinical value of the technique. The proposed 3D shape instantiation scheme is further applied to intra-operative stent graft deployment for the robot-assisted treatment of aortic aneurysms. Mathematical modelling is first used to simulate the stent graft characteristics. This is then followed by the Robust Perspective-n-Point (RPnP) method to instantiate the 3D pose of fiducial markers on the graft. Here, an Equally-weighted Focal U-Net is proposed with a cross-entropy and an additional focal loss function. Detailed validation has been performed on patient-specific stent grafts with an accuracy between 1 and 3 mm. Finally, the relative merits and potential pitfalls of all the methods developed in this thesis are discussed, followed by potential future research directions and additional challenges that need to be tackled.
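The ILN scheme described in this abstract, a sigmoid-gated blend of instance-normalized and layer-normalized feature maps, can be sketched numerically. The following NumPy sketch is illustrative only: `rho` stands in for the network's learnable gating parameter, and the group normalization cascaded after the weighted map is omitted for brevity.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Normalize each (sample, channel) feature map over its spatial axes.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def layer_norm(x, eps=1e-5):
    # Normalize each sample over channel and spatial axes jointly.
    mu = x.mean(axis=(1, 2, 3), keepdims=True)
    var = x.var(axis=(1, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_layer_norm(x, rho=0.0):
    # Sigmoid-gated linear blend of the two normalized maps; rho is a
    # scalar standing in for the learnable gate of the actual network.
    g = 1.0 / (1.0 + np.exp(-rho))
    return g * instance_norm(x) + (1.0 - g) * layer_norm(x)

# Toy batch: 2 samples, 4 channels, 8x8 spatial resolution.
x = np.random.default_rng(0).normal(size=(2, 4, 8, 8))
y = instance_layer_norm(x, rho=0.5)
```

As the gate saturates, the blend reduces to pure instance or pure layer normalization, which is one way to sanity-check the implementation.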

    Automatic image analysis of C-arm Computed Tomography images for ankle joint surgeries

    Open reduction and internal fixation is a standard procedure in ankle surgery for treating a fractured fibula. Since fibula fractures are often accompanied by an injury of the syndesmosis complex, it is essential to restore the correct pose of the fibula relative to the adjoining tibia for the ligaments to heal. Otherwise, the patient might experience instability of the ankle, leading to arthritis, ankle pain, and ultimately revision surgery. Incorrect positioning, referred to as malreduction, of the fibula is assumed to be one of the major causes of unsuccessful ankle surgery. 3D C-arm imaging is the current standard procedure for revealing malreduction of fractures in the operating room. However, intra-operative visual inspection of the reduction result is complicated by high inter-individual variation of the ankle anatomy and relies largely on the subjective experience of the surgeon. A contralateral side comparison with the patient’s uninjured ankle is recommended but has not been integrated into clinical routine due to the high level of radiation exposure it incurs. This thesis presents the first approach towards a computer-assisted intra-operative contralateral side comparison of the ankle joint. The focus of this thesis was the design, development and validation of a software-based prototype for a fully automatic intra-operative assistance system for orthopedic surgeons. The implementation does not require an additional 3D C-arm scan of the uninjured ankle, thus reducing time consumption and cumulative radiation dose. A 3D statistical shape model (SSM) is used to reconstruct a 3D surface model of the uninjured ankle from three 2D fluoroscopic projections. To this end, a 3D SSM segmentation is performed on the 3D image of the injured ankle to gain prior knowledge of the ankle. A 3D convolutional neural network (CNN) based initialization method was developed and its outcome was incorporated into the SSM adaption step.
Segmentation quality was shown to be improved in terms of accuracy and robustness compared to the purely intensity-based SSM. This overcomes the limitations of previously proposed methods, namely inaccuracy due to metal artifacts and the lack of device-to-patient orientation of the C-arm. A 2D CNN is employed to extract semantic knowledge from all fluoroscopic projection images. This step of the pipeline both creates features for the subsequent reconstruction and helps to pre-initialize the 3D SSM without user interaction. A 2D-3D multi-bone reconstruction method has been developed which uses distance maps of the 2D features for fast and accurate correspondence optimization and SSM adaption. This is the central component of the workflow: it is the first time a bone reconstruction method has been applied to the complex ankle joint, and the first reconstruction method to use CNN-based segmentations as features. The reconstructed 3D SSM of the uninjured ankle can be back-projected and visualized in a workflow-oriented manner to provide a clear visualization of the region of interest, which is essential for the evaluation of the reduction result. The surgeon can thus directly compare an overlay of the contralateral ankle with the injured ankle. The developed methods were evaluated individually using data sets acquired during a cadaver study and representative clinical data acquired during fibular reduction. A hierarchical evaluation was designed to assess the inaccuracies of the system at different levels and to identify major sources of error. The overall evaluation, performed on eleven challenging clinical datasets acquired for manual contralateral side comparison, showed that the system is capable of accurately reconstructing 3D surface models of the uninjured ankle solely from three projection images.
A mean Hausdorff distance of 1.72 mm was measured when comparing the reconstruction result to the ground-truth segmentation, nearly achieving the required clinical accuracy of 1-2 mm. The overall error of the pipeline was mainly attributed to inaccuracies in the 2D CNN segmentation. The consistency of these results requires further validation on a larger dataset. The workflow proposed in this thesis establishes the first approach to enable automatic computer-assisted contralateral side comparison in ankle surgery. The feasibility of the proposed approach was proven on a limited number of clinical cases and has already yielded good results. The next important step is to alleviate the identified bottlenecks by providing more training data in order to further improve the accuracy. In conclusion, the new approach presented offers the chance to guide the surgeon during the reduction process, improve the surgical outcome while avoiding additional radiation exposure, and reduce the number of revision surgeries in the long term.
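The Hausdorff-style distances used for this evaluation can be illustrated with a minimal sketch. Assuming surfaces are represented as sampled vertex sets, a brute-force NumPy version of the classic (max) Hausdorff distance and a symmetric mean surface distance (a common "mean Hausdorff"-style metric) might look like this:

```python
import numpy as np

def directed_distances(a, b):
    # For each point in a, the distance to its nearest neighbour in b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1)

def hausdorff(a, b):
    # Classic symmetric Hausdorff distance: the worst nearest-neighbour
    # distance in either direction.
    return max(directed_distances(a, b).max(), directed_distances(b, a).max())

def mean_surface_distance(a, b):
    # Symmetric average of nearest-neighbour distances; less sensitive
    # to single outlier vertices than the max form.
    return 0.5 * (directed_distances(a, b).mean() + directed_distances(b, a).mean())
```

Real surface comparisons would use spatial indexing (e.g. a k-d tree) rather than the quadratic pairwise matrix, but the metric definitions are the same.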

    3D vessel reconstruction based on intra-operative intravascular ultrasound for robotic autonomous catheter navigation

    In recent years, robotic technology has improved instrument navigation precision and accuracy and helped decrease the complexity of minimally invasive surgery. Still, the inherently restricted access to the patient's anatomy severely complicates many procedures. Owing to the limited view of the surgical scene, interventionists frequently depend on external technologies for visual guidance, usually employing ionizing radiation. In the case of endovascular procedures, fluoroscopy is the common imaging modality used for visualization. This modality is based on X-rays and only offers a two-dimensional (2D) view of the surgical scene. Having a real-time, up-to-date understanding of the environment surrounding the surgical instruments within the vasculature, without depending on ionizing radiation, would not only be very helpful for interventionists but also paramount for the navigation of an intraluminal robot. Therefore, the aim of this thesis is to develop an algorithm able to perform intra-operative, real-time three-dimensional (3D) vessel reconstruction. The algorithm is divided into two parts: reconstruction and merging. The first part obtains the 3D reconstruction of a section of the vessel, and the second combines the different reconstructed sections. A mesh of a real vessel is used to calculate the fitting errors of the reconstructed vessel, which are very small.
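The reconstruction part of such an algorithm, building a 3D vessel section from per-frame intravascular ultrasound (IVUS) lumen measurements and catheter poses, can be sketched as follows. This is a hypothetical simplification of the approach described above: `ring_points` and `reconstruct_section` are illustrative names, a single radius per frame stands in for a full lumen contour, and catheter poses are assumed known.

```python
import numpy as np

def ring_points(center, tangent, radius, n=16):
    # Points of one lumen cross-section: a circle of the given radius,
    # centred on the catheter tip and perpendicular to its tangent.
    tangent = tangent / np.linalg.norm(tangent)
    # Build an orthonormal basis (u, v) of the plane normal to the tangent.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(tangent @ helper) > 0.9:
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(tangent, helper)
    u /= np.linalg.norm(u)
    v = np.cross(tangent, u)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    return center + radius * (np.outer(np.cos(t), u) + np.outer(np.sin(t), v))

def reconstruct_section(centers, tangents, radii, n=16):
    # Stack consecutive cross-sections into one point cloud describing
    # a section of the vessel wall; the merging stage would then join
    # overlapping sections into a single surface.
    return np.vstack([ring_points(c, tg, r, n)
                      for c, tg, r in zip(centers, tangents, radii)])
```

The fitting error against a ground-truth mesh would then be a nearest-neighbour surface distance between this point cloud and the real vessel mesh.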

    Validation of the multi-segment foot model with bi-planar fluoroscopy

    A multi-segment foot model (MSFM) is a useful tool for measuring foot joint kinematics, although soft-tissue artefact is often present. Quantifying this error is needed to evaluate the accuracy of the model. This study validated the MSFM against bi-planar radiostereometric analysis (RSA) fluoroscopy. Heel-strike, mid-stance, and toe-off events during the stance phase were compared between motion capture and fluoroscopy. Rise/drop of the medial longitudinal arch showed a significant difference (p < 0.05) during toe-off, but no significant difference during heel-strike or mid-stance. Hindfoot supination/pronation and internal/external rotation, and forefoot supination/pronation motions showed no significant difference between the two techniques. The lack of significant difference allows the MSFM to be used as a sufficiently accurate technique for measuring foot joint motions.
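The statistical comparison described above, paired measurements from the two techniques tested at the p < 0.05 level, can be sketched with SciPy's paired t-test. The numbers below are purely illustrative, not data from the study:

```python
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical paired angle measurements (degrees) of medial longitudinal
# arch rise/drop at toe-off, one pair per trial: marker-based MSFM vs
# bi-planar fluoroscopy. Values are made up for illustration.
msfm   = np.array([4.1, 3.8, 4.6, 5.0, 4.3, 4.8, 3.9, 4.4])
fluoro = np.array([3.2, 3.0, 3.7, 4.1, 3.3, 3.9, 3.1, 3.5])

# Paired test, since both techniques measure the same trials.
t_stat, p_value = ttest_rel(msfm, fluoro)
significant = p_value < 0.05  # the criterion used in the study
```

A paired (rather than independent) test is the right choice here because each trial yields one measurement from each technique on the same motion.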

    3D Shape Reconstruction of Knee Bones from Low Radiation X-ray Images Using Deep Learning

    Understanding the bone kinematics of the human knee during dynamic motions is necessary to evaluate pathological conditions, to design knee prostheses, orthoses, and surgical treatments such as knee arthroplasty, and to assess the biofidelity of computational models. Kinematics of the human knee has been reported in the literature using either in vitro or in vivo methodologies. The in vivo methodology is widely preferred due to its biomechanical accuracy. However, it is challenging to obtain kinematic data in vivo due to limitations in existing methods. One such method is X-ray fluoroscopy imaging, which allows for the non-invasive quantification of bone kinematics. Among fluoroscopy imaging methods, single-plane fluoroscopy (SF) is the preferred tool to study the in vivo kinematics of the knee joint due to its procedural simplicity and low radiation exposure. Evaluation of the three-dimensional (3D) kinematics from the SF imagery is possible only if prior knowledge of the shape of the knee bones is available. The standard technique for acquiring the knee shape is to segment either Magnetic Resonance (MR) images, which are expensive to procure, or Computed Tomography (CT) images, which expose the subjects to a heavy dose of ionizing radiation. Additionally, both segmentation procedures are time-consuming and labour-intensive. An alternative technique that is rarely used is to reconstruct the knee shape from the SF images. It is less expensive than MR imaging, exposes the subjects to relatively lower radiation than CT imaging, and, since the kinematic study and the shape reconstruction can be carried out using the same device, it can save a considerable amount of time for researchers and subjects.
However, due to low exposure levels, SF images are often characterized by a low signal-to-noise ratio, making it difficult to extract the information required to reconstruct the shape accurately. In comparison to conventional X-ray images, SF images are of lower quality and contain less detail. Additionally, existing methods for reconstructing the shape of the knee remain generally inconvenient since they need a highly controlled system: images must be captured from a calibrated device, care must be taken while positioning the subject's knee in the X-ray field to ensure image consistency, and user intervention and expert knowledge are required for 3D reconstruction. In an attempt to simplify the existing process, this thesis proposes a new methodology to reconstruct the 3D shape of the knee bones from multiple uncalibrated SF images using deep learning. During image acquisition using the SF, the subjects in this approach can freely rotate their leg (in a fully extended, knee-locked position), resulting in several images captured in arbitrary poses. Relevant features are extracted from these images using a novel feature extraction technique before being fed to a custom-built Convolutional Neural Network (CNN). The network, without further optimization, directly outputs a meshed 3D surface model of the subject's knee joint. The whole procedure can be completed in a few minutes. The robust feature extraction technique can effectively extract relevant information from a range of image qualities. When tested on eight unseen sets of SF images with known true geometry, the network reconstructed knee shape models with a shape error (RMSE) of 1.91 ± 0.30 mm for the femur, 2.3 ± 0.36 mm for the tibia, and 3.3 ± 0.53 mm for the patella. The error was calculated after rigidly aligning (scale, rotation, and translation) each of the reconstructed shape models with the corresponding known true geometry (obtained through MRI segmentation).
Based on a previous study that examined the influence of reconstructed shape accuracy on the precision of the evaluation of tibiofemoral kinematics, the shape accuracy of the proposed methodology might be adequate to precisely track the bone kinematics, although further investigation is required.
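The alignment step described above (scale, rotation, and translation before computing the shape RMSE) corresponds to a similarity-transform registration in the Umeyama/Kabsch style. A minimal NumPy sketch, assuming corresponding vertices between the reconstructed and ground-truth meshes:

```python
import numpy as np

def similarity_align_rmse(src, dst):
    # Umeyama-style similarity alignment (scale, rotation, translation)
    # of corresponding vertex sets, followed by the RMSE of the
    # residuals, i.e. the kind of shape-error metric reported above.
    mu_s, mu_d = src.mean(0), dst.mean(0)
    s, d = src - mu_s, dst - mu_d
    # Optimal rotation via SVD of the cross-covariance matrix,
    # with a reflection guard on the determinant.
    U, S, Vt = np.linalg.svd(d.T @ s)
    sign = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, sign])
    R = U @ D @ Vt
    # Optimal isotropic scale.
    scale = (S * np.diag(D)).sum() / (s ** 2).sum()
    aligned = scale * s @ R.T + mu_d
    return np.sqrt(((aligned - dst) ** 2).sum(axis=1).mean())
```

With exact correspondences and a noiseless similarity transform between the two vertex sets, the residual RMSE is (numerically) zero, which makes the routine easy to verify.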

    ADVANCED MOTION MODELS FOR RIGID AND DEFORMABLE REGISTRATION IN IMAGE-GUIDED INTERVENTIONS

    Image-guided surgery (IGS) has been a major area of interest in recent decades that continues to transform surgical interventions and enable safer, less invasive procedures. In the preoperative context, diagnostic imaging, including computed tomography (CT) and magnetic resonance (MR) imaging, offers a basis for surgical planning (e.g., definition of the target, adjacent anatomy, and the surgical path or trajectory to the target). At the intraoperative stage, such preoperative images and the associated planning information are registered to intraoperative coordinates via a navigation system to enable visualization of (tracked) instrumentation relative to preoperative images. A major limitation of such an approach is that motions during surgery, either rigid motions of bones manipulated during orthopaedic surgery or brain soft-tissue deformation in neurosurgery, are not captured, diminishing the accuracy of navigation systems. This dissertation seeks to use intraoperative images (e.g., x-ray fluoroscopy and cone-beam CT) to provide more up-to-date anatomical context that properly reflects the state of the patient during interventions to improve the performance of IGS. Advanced motion models for inter-modality image registration are developed to improve the accuracy of both preoperative planning and intraoperative guidance for applications in orthopaedic pelvic trauma surgery and minimally invasive intracranial neurosurgery. Image registration algorithms are developed with increasing complexity of motion that can be accommodated (single-body rigid, multi-body rigid, and deformable) and increasing complexity of registration models (statistical models, physics-based models, and deep learning-based models).
For orthopaedic pelvic trauma surgery, the dissertation includes work encompassing: (i) a series of statistical models to capture shape and pose variations of one or more pelvic bones, together with an atlas of trajectory annotations; (ii) frameworks for automatic segmentation via registration of the statistical models to preoperative CT and for planning of fixation trajectories and dislocation/fracture reduction; and (iii) 3D-2D guidance using intraoperative fluoroscopy. For intracranial neurosurgery, the dissertation includes three inter-modality deformable registration methods using physics-based Demons and deep learning models for CT-guided and CBCT-guided procedures.

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment. However, this method can be challenging due to respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate as they are based on the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm which can be used during External Beam Radiation Therapy (EBRT). An accelerated lung biomechanical model can be used during EBRT only if its boundary conditions (BCs) are defined in a way that they can be updated in real-time. As such, we have developed a lung finite element (FE) model in conjunction with a Neural Network (NN) based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, the chest surface motion was obtained by tracking the motion of the ribcage in 4D CT images. This was performed to simulate surface motion data that can be acquired using optical tracking systems. Finally, two feedforward NNs were developed, one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm, 2) diaphragm motion modelling using Principal Component Analysis (PCA), 3) developing the lung FE model, and 4) using two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76 ± 0.04 to 0.91 ± 0.01, which is favorable.
As such, real-time lung tumor tracking during EBRT using the proposed algorithm is feasible. Hence, further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
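The Dice similarity coefficient used above to compare actual and simulated tumor volumes has a simple definition, 2|A∩B| / (|A| + |B|), over binary voxel masks. A minimal NumPy sketch:

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary volumes
    # (e.g. actual vs simulated tumor masks): 2|A∩B| / (|A| + |B|).
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A value of 1 indicates perfect overlap and 0 indicates none, so the 0.76-0.91 range reported above reflects substantial agreement between simulated and actual tumor positions.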

    IGRT and motion management during lung SBRT delivery.

    Patient motion can cause misalignment of the tumour and toxicity to healthy lung tissue during lung stereotactic body radiation therapy (SBRT). Any deviation from the reference setup can cause the target to be missed and have acute toxic effects on the patient, with consequences for their quality of life and survival outcomes. Correction for motion, either immediately prior to treatment or intra-treatment, can be realized with image-guided radiation therapy (IGRT) and motion management devices. The use of these techniques has demonstrated the feasibility of integrating complex technology with clinical linear accelerators to provide a higher standard of care for patients and increase their quality of life.