85 research outputs found

    Multi-Surface Simplex Spine Segmentation for Spine Surgery Simulation and Planning

    This research proposes a knowledge-based multi-surface simplex deformable model for the segmentation of both healthy and pathological lumbar spine data. It aims to provide a more accurate and robust segmentation scheme for identifying intervertebral disc pathologies to assist with spine surgery planning. A robust technique that combines multi-surface and shape-statistics-aware variants of the deformable simplex model is presented. Statistical shape variation within the dataset is captured by principal component analysis and incorporated during the segmentation process to refine results. Where shape statistics hinder detection of the pathological region, user assistance is allowed to disable the prior shape influence during deformation. Results have been validated against user-assisted expert segmentation.
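
The PCA shape-statistics step described above can be sketched as follows; the landmark representation, function names, and the mode-clamping rule are illustrative assumptions, not the thesis's implementation:

```python
import numpy as np

def build_shape_model(shapes, var_keep=0.95):
    """Build a PCA shape model from pre-aligned landmark sets.

    shapes: (n_samples, n_points * dim) array of aligned shapes.
    Returns the mean shape, principal modes, and per-mode std devs.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # SVD of the centered data yields the principal modes of variation.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (len(shapes) - 1)
    n_modes = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1)
    return mean, vt[:n_modes], np.sqrt(var[:n_modes])

def constrain_shape(shape, mean, modes, stds, n_sigma=3.0):
    """Project a shape onto the model and clamp mode weights to +/- n_sigma."""
    b = modes @ (shape - mean)
    b = np.clip(b, -n_sigma * stds, n_sigma * stds)
    return mean + modes.T @ b
```

During deformation, `constrain_shape` plays the role of the statistical prior: clamping the mode weights keeps the evolving surface statistically plausible, and skipping the call corresponds to the user-assisted option of disabling the prior near a pathology.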

    New Image Processing Methods for Ultrasound Musculoskeletal Applications

    In the past few years, ultrasound (US) imaging modalities have received increasing interest as diagnostic tools for orthopedic applications. The goal of many of these novel ultrasonic methods is to create three-dimensional (3D) bone visualizations non-invasively, safely, and with high accuracy and spatial resolution. Accurate bone segmentation and 3D reconstruction methods would help correctly interpret complex bone morphology and facilitate quantitative analysis. However, in vivo ultrasound images of bones may have poor quality due to uncontrollable motion, high ultrasonic attenuation and the presence of imaging artifacts, which can affect the quality of the bone segmentation and reconstruction results. In this study, we investigate novel ultrasonic processing methods that can significantly improve bone visualization, segmentation and 3D reconstruction in ultrasound volumetric data acquired in in vivo applications. Specifically, we investigate new elastography-based, Doppler-based and statistical shape model-based methods that can be applied to ultrasound bone imaging, with the overall goal of obtaining fast yet accurate 3D bone reconstructions. This study is composed of three projects, which all have the potential to contribute significantly to this goal. The first project deals with the fast and accurate implementation of correlation-based elastography and poroelastography techniques for real-time assessment of the mechanical properties of musculoskeletal tissues. The rationale behind this project is that, in the future, elastography-based features can be used to reduce false positives in ultrasonic bone segmentation methods, based on the differences between the mechanical properties of soft tissues and those of hard tissues.
In this study, a hybrid computation model is designed, implemented and tested to achieve real-time performance without compromising elastographic image quality. In the second project, a Power Doppler-based signal enhancement method is designed and tested with the intent of increasing the contrast between soft tissue and bone while suppressing the contrast between soft tissue and connective tissue, which is often a cause of false positives in ultrasonic bone segmentation problems. Both in-vitro and in-vivo experiments are performed to statistically analyze the performance of this method. In the third project, a statistical shape model-based bone surface segmentation method is proposed and investigated. This method uses statistical models to determine whether a curve detected in a segmented ultrasound image belongs to a bone surface. Both in-vitro and in-vivo experiments are performed to statistically analyze the performance of this method. I conclude this dissertation with a discussion of possible future work in the field of ultrasound bone imaging and assessment.
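
Correlation-based elastography of the kind described in the first project rests on window-wise displacement estimation between pre- and post-compression RF signals. A minimal 1-D sketch, where the window size and search range are illustrative choices rather than the dissertation's parameters:

```python
import numpy as np

def axial_displacement(rf_pre, rf_post, win=64, step=32, search=16):
    """Estimate axial displacement between pre-/post-compression RF lines
    by maximizing normalized cross-correlation over a small search range.
    A minimal per-line sketch; real-time pipelines vectorize this and
    run it on the GPU."""
    shifts = []
    for start in range(0, len(rf_pre) - win - search, step):
        ref = rf_pre[start:start + win]
        best, best_lag = -2.0, 0
        for lag in range(-search, search + 1):
            lo = start + lag
            if lo < 0:
                continue
            cand = rf_post[lo:lo + win]
            num = np.dot(ref - ref.mean(), cand - cand.mean())
            denom = np.sqrt(((ref - ref.mean())**2).sum()
                            * ((cand - cand.mean())**2).sum())
            ncc = num / denom if denom > 0 else 0.0
            if ncc > best:
                best, best_lag = ncc, lag
        shifts.append(best_lag)
    return np.array(shifts)
```

The strain image is then obtained by differentiating the displacement estimates along depth; the real-time constraint mentioned above comes from having to repeat this search for every window of every A-line at frame rate.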

    Development of ultrasound to measure deformation of functional spinal units in cervical spine

    Neck pain is a pervasive problem in the general population, especially among those working in vibrating environments, e.g. military troops and truck drivers. Previous studies showed that neck pain is strongly associated with degeneration of the intervertebral disc, which is commonly caused by repetitive loading in the workplace. Currently, there is no existing method to measure the in-vivo displacement and loading condition of the cervical spine in the field. Therefore, little is known about the alteration of cervical spine functionality and biomechanics in dynamic environments. In this thesis, a portable ultrasound system was explored as a tool to measure vertebral motion and functional spinal unit deformation. It is hypothesized that time sequences of ultrasound imaging signals can be used to characterize the deformation of cervical spine functional spinal units in response to applied displacements and loading. Specifically, a multi-frame tracking algorithm is developed to measure the dynamic movement of vertebrae, which is validated in ex-vivo models. The planar kinematics of the functional spinal units is derived from a dual ultrasound system, which applies two ultrasound systems to image the C-spine anteriorly and posteriorly. The kinematics is reconstructed from the results of the multi-frame movement tracking algorithm and a method to co-register ultrasound vertebra images to MRI scans. Using the dual ultrasound, it is shown that the dynamic deformation of the functional spinal unit is affected by the biomechanical properties of the intervertebral disc ex-vivo and by different applied loading during activities in-vivo. It is concluded that ultrasound is capable of measuring functional spinal unit motion, which allows rapid in-vivo evaluation of the C-spine in dynamic environments where X-ray, CT or MRI cannot be used.
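
Reconstructing planar kinematics from tracked vertebral landmarks typically reduces to estimating a rigid transform per frame. A sketch using the standard least-squares (Kabsch) solution, assumed here for illustration rather than taken from the thesis:

```python
import numpy as np

def rigid_2d(src, dst):
    """Least-squares planar rigid transform (rotation + translation)
    mapping tracked landmarks src onto dst via the Kabsch method.
    src, dst: (n_points, 2) arrays. Successive frame-to-frame poses
    of two vertebrae give the functional spinal unit's planar motion."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    h = (src - sc).T @ (dst - dc)          # cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against reflections
    r = vt.T @ np.diag([1.0, d]) @ u.T
    t = dc - r @ sc
    return r, t
```

Applying this per ultrasound frame to the multi-frame tracking output yields the rotation and translation history of each vertebra, from which relative (unit-level) deformation can be derived.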

    A biomechanical approach for real-time tracking of lung tumors during External Beam Radiation Therapy (EBRT)

    Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment. However, this method can be challenging due to respiratory motion. Motion modeling is a popular method for respiratory motion compensation, and biomechanics-based motion models are believed to be more robust and accurate, as they are based on the physics of motion. In this study, we aim to develop a biomechanics-based lung tumor tracking algorithm that can be used during External Beam Radiation Therapy (EBRT). An accelerated lung biomechanical model can be used during EBRT only if its boundary conditions (BCs) are defined in a way that they can be updated in real-time. As such, we have developed a lung finite element (FE) model in conjunction with a Neural Network (NN)-based method for predicting the BCs of the lung model from chest surface motion data. To develop the lung FE model for tumor motion prediction, thoracic 4D CT images of lung cancer patients were processed to capture the lung and diaphragm geometry, trans-pulmonary pressure, and diaphragm motion. Next, the chest surface motion was obtained by tracking the motion of the ribcage in 4D CT images. This was performed to simulate surface motion data that can be acquired using optical tracking systems. Finally, two feedforward NNs were developed, one for estimating the trans-pulmonary pressure and another for estimating the diaphragm motion from chest surface motion data. The algorithm development consists of four steps: 1) automatic segmentation of the lungs and diaphragm, 2) diaphragm motion modelling using Principal Component Analysis (PCA), 3) developing the lung FE model, and 4) using two NNs to estimate the trans-pulmonary pressure values and diaphragm motion from chest surface motion data. The results indicate that the Dice similarity coefficient between actual and simulated tumor volumes ranges from 0.76±0.04 to 0.91±0.01, which is favorable.
As such, real-time lung tumor tracking during EBRT using the proposed algorithm is feasible. Hence, further clinical studies involving lung cancer patients to assess the algorithm's performance are justified.
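
A feedforward NN mapping chest-surface motion to boundary-condition values can be illustrated with a minimal two-layer network. The architecture, layer sizes, and training loop below are assumptions for illustration, not the authors' model:

```python
import numpy as np

class TinyMLP:
    """Minimal feedforward network (one tanh hidden layer) of the kind
    that could map chest-surface displacements to BC values such as
    trans-pulmonary pressure or diaphragm motion."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((n_in, n_hidden)) * 0.5
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.standard_normal((n_hidden, n_out)) * 0.5
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        self.h = np.tanh(x @ self.w1 + self.b1)
        return self.h @ self.w2 + self.b2

    def train_step(self, x, y, lr=0.05):
        """One full-batch gradient-descent step on the mean squared error."""
        pred = self.forward(x)
        err = pred - y
        gw2 = self.h.T @ err / len(x)
        gb2 = err.mean(axis=0)
        dh = (err @ self.w2.T) * (1 - self.h**2)   # backprop through tanh
        gw1 = x.T @ dh / len(x)
        gb1 = dh.mean(axis=0)
        self.w1 -= lr * gw1; self.b1 -= lr * gb1
        self.w2 -= lr * gw2; self.b2 -= lr * gb2
        return float((err**2).mean())
```

Once trained offline on 4D-CT-derived pairs, the forward pass is a handful of matrix products, which is what makes per-frame BC updates feasible during treatment.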

    Ultrasound and Photoacoustic Techniques for Surgical Guidance Inside and Around the Spine

    Technological advances in image guidance have made a significant impact on surgical standards, allowing for safer and less invasive procedures. Ultrasound and photoacoustic imaging are promising options for surgical guidance given their real-time capabilities without the use of ionizing radiation. However, challenges to improving the feasibility of ultrasound- and photoacoustic-based surgical guidance persist in the presence of bone. In this thesis, we address four challenges surrounding the implementation of ultrasound- and photoacoustic-based surgical guidance in clinical scenarios inside and around the spine. First, we introduce a novel regularized implementation of short-lag spatial coherence (SLSC) beamforming, named locally-weighted short-lag spatial coherence (LW-SLSC). LW-SLSC improves the segmentation of bony structures in ultrasound images, thus reducing the hardware and software cost of registering pre- and intra-operative volumes. Second, we describe a contour analysis framework to characterize and differentiate photoacoustic signals originating from cancellous and cortical bone, which is critical for safe navigation of surgical tools through small bony cavities such as the pedicle. This analysis is also useful for localizing tool tips within the pedicle. Third, we developed a GPU approach to SLSC beamforming to improve the signal-to-noise ratio of photoacoustic targets using low laser energies, thus improving the performance of robotic visual servoing of tool tips and enabling miniaturization of laser systems in the operating room. Finally, we developed a novel acoustic-based atlas method to identify photoacoustic contrast agents and discriminate them from tissue using only two laser wavelengths. This approach significantly reduces acquisition times in comparison to conventional spectral unmixing techniques.
These four contributions support the transition of a combined ultrasound- and photoacoustic-based image-guidance system towards more challenging scenarios of surgical navigation. Focusing on bone structures inside and surrounding the spine, the newly combined systems and techniques demonstrated herein feature robust, accurate, real-time capabilities to register to preoperative images, localize surgical tool tips, and characterize biomarkers. These contributions strengthen the range of possibilities for spinal and transthoracic ultrasound and photoacoustic navigation, broaden the scope of this field, and shorten the road to clinical implementation in the operating room.
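
SLSC beamforming, the starting point for the LW-SLSC method above, scores each pixel by the average normalized correlation between nearby receive channels at short element lags. A single-pixel sketch (the locally-weighted regularization that distinguishes LW-SLSC is not shown):

```python
import numpy as np

def slsc_value(channels, max_lag=5):
    """Short-lag spatial coherence for one pixel.

    channels: (n_elements, n_samples) array of time-delayed channel
    signals within a small axial kernel. Returns the sum over lags
    m = 1..max_lag of the average normalized cross-correlation between
    channel pairs separated by m elements."""
    n = channels.shape[0]
    total = 0.0
    for m in range(1, max_lag + 1):
        corr = 0.0
        for i in range(n - m):
            a, b = channels[i], channels[i + m]
            denom = np.sqrt((a @ a) * (b @ b))
            if denom > 0:
                corr += (a @ b) / denom
        total += corr / (n - m)
    return total
```

Coherent echoes (e.g. from a bone surface) score near `max_lag`, while diffuse clutter scores near zero, which is why SLSC images suppress the clutter that plagues conventional delay-and-sum bone imaging.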

    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site, real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging, and much of the rich temporal information present in them is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas: first, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery; second, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools with integrated ultrasound imaging were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add anatomical context that aids in navigation and in interpreting the on-site video. Other procedures can be improved by extracting hidden temporal information from intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue for finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real-time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and to generate ventilation maps from free-breathing magnetic resonance imaging.
A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed which processed full-resolution stereoscopic video on the da Vinci Surgical System.
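
The Eulerian video magnification step mentioned above amplifies subtle periodic intensity changes by temporal band-pass filtering each pixel. A minimal single-scale sketch, assuming an FFT band-pass over the cardiac band; a full implementation filters a spatial pyramid and typically uses streaming IIR filters for real-time operation:

```python
import numpy as np

def magnify_pulsation(frames, fps, f_lo=0.8, f_hi=2.0, alpha=10.0):
    """Eulerian-style amplification of subtle periodic intensity changes.

    frames: (n_frames, h, w) float array. A temporal FFT band-pass keeps
    frequencies in [f_lo, f_hi] Hz (roughly the cardiac band), scales
    that component by alpha, and adds it back to the input video."""
    n = frames.shape[0]
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    spec = np.fft.rfft(frames, axis=0)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    filtered = spec * band[:, None, None]        # zero out-of-band bins
    pulsation = np.fft.irfft(filtered, n=n, axis=0)
    return frames + alpha * pulsation
```

A pulsating vessel whose intensity varies by a fraction of a gray level thus becomes visibly throbbing in the output, which is the cue used to localize the basilar or prostatic artery.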

    Automatic Analysis of Ultrasound Images of the Spine

    Ultrasound (US) imaging is a medical imaging modality that is often used to visualize soft tissues in the human body in various clinical applications. This technique has several important advantages, in particular its low cost, portability, and the fact that it is radiation-free. However, the content of US images is rather complex and can be hard to interpret, even for an expert. Furthermore, the quality of US images depends on the positioning of the probe during acquisition. When imaging bone surfaces, these two disadvantages are accentuated.
Indeed, the acoustic waves are entirely reflected by these hard structures, creating bright surfaces with acoustic shadows below them, which makes the interpretation of such images even more challenging. In the case of a vertebra, the surface of the spinous process is so small that its appearance in US images strongly depends on the orientation and position of the probe. Moreover, it can be difficult to determine the boundary of the acoustic shadow created by the bone structure, given the complicated shape of the vertebra. Nevertheless, in the clinical monitoring of scoliosis, using US images of the spine instead of X-rays could greatly reduce the cumulative radiation received by patients. In recent years, several methods using US imaging have been developed to evaluate scoliosis or to adjust the brace in the treatment of adolescent idiopathic scoliosis (AIS). These methods require good-quality images and manual segmentation of the image content. In this project, we propose a framework for the automatic analysis of US images of the spine (vertebrae) that utilizes an image formation model and an automatic segmentation of the regions of interest. First, we developed an automatic segmentation method to detect the spinous process and the acoustic shadow in US images, aimed at helping the end user interpret the images. This method uses a feature extraction and selection process to determine the most relevant set of features. The aim of the classification task is to discriminate three regions: spinous process, acoustic shadow and other tissues. An LDA classifier is used to assign each image pixel to one of the three regions. Finally, we apply a regularization step that exploits several properties of vertebrae. Using a database of 107 US images (those of acceptable quality among 181 acquired), we obtained a classification rate of 84% for the spinous process and 92% for the acoustic shadow.
In addition, the centroid of the automatically segmented spinous process was located, on average, 0.38 mm from that of the ground truth, as provided by a manual labelling validated by a radiologist. We also compared the automatically and manually segmented regions and obtained DICE similarity coefficients of 0.72 and 0.88 for the spinous process and acoustic shadow, respectively.
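
The LDA pixel-classification stage can be sketched as fitting per-class linear discriminants under a shared covariance; the feature vectors, regularization term, and helper names below are illustrative:

```python
import numpy as np

def fit_lda(features, labels):
    """Fit a linear discriminant classifier of the kind used to label
    each pixel as spinous process, acoustic shadow, or other tissue.

    features: (n_pixels, n_features); labels: integer class per pixel.
    Returns per-class linear weights and biases assuming a pooled
    (shared) covariance across classes."""
    classes = np.unique(labels)
    n, d = features.shape
    means, priors = [], []
    pooled = np.zeros((d, d))
    for c in classes:
        xc = features[labels == c]
        mu = xc.mean(axis=0)
        means.append(mu)
        priors.append(len(xc) / n)
        pooled += (xc - mu).T @ (xc - mu)
    pooled /= (n - len(classes))
    inv = np.linalg.inv(pooled + 1e-6 * np.eye(d))  # small ridge for stability
    w = np.array([inv @ mu for mu in means])
    b = np.array([-0.5 * mu @ inv @ mu + np.log(p)
                  for mu, p in zip(means, priors)])
    return classes, w, b

def predict_lda(model, features):
    classes, w, b = model
    return classes[np.argmax(features @ w.T + b, axis=1)]
```

Each pixel gets the class with the highest linear discriminant score; the regularization step described above then cleans up this per-pixel labelling using vertebral shape properties.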

    An Automated, Deep Learning Approach to Systematically and Sequentially Derive Three-Dimensional Knee Kinematics Directly from Two-Dimensional Fluoroscopic Video

    Total knee arthroplasty (TKA), also known as total knee replacement, is a surgical procedure to replace damaged parts of the knee joint with artificial components. It aims to relieve pain and improve knee function. TKA can improve knee kinematics and reduce pain, but it may also cause altered joint mechanics and complications. Proper patient selection, implant design, and surgical technique are important for successful outcomes. Kinematics analysis plays a vital role in TKA by evaluating knee joint movement and mechanics. It helps assess surgery success, guides implant and technique selection, informs implant design improvements, detects problems early, and improves patient outcomes. However, evaluating the kinematics of patients using conventional approaches presents significant challenges. The reliance on 3D CAD models limits applicability, as not all patients have access to such models. Moreover, the manual and time-consuming nature of the process makes it impractical for timely evaluations. Furthermore, the evaluation is confined to laboratory settings, limiting its feasibility in various locations. This study aims to address these limitations by introducing a new methodology for analyzing in vivo 3D kinematics using an automated deep learning approach. The proposed methodology involves several steps, starting with image segmentation of the femur and tibia using a robust deep learning approach. Subsequently, 3D reconstruction of the implants is performed, followed by automated registration. Finally, efficient knee kinematics modeling is conducted. The final kinematics results showed potential for reducing workload and increasing efficiency. The algorithms demonstrated high speed and accuracy, which could enable real-time TKA kinematics analysis in the operating room or clinical settings. 
Unlike previous studies, which relied on sponsorships and limited patient samples, this algorithm allows the analysis of any patient, anywhere, at any time, accommodating larger subject populations and complete fluoroscopic sequences. Although further improvements can be made, the study showcases the potential of machine learning to expand access to TKA analysis tools and advance biomedical engineering applications.
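
Segmentation quality in pipelines like this one is commonly scored with the Dice similarity coefficient, the overlap metric reported in several abstracts above:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0
```

Comparing an automatic femur or tibia mask against a manual reference with this function gives the kind of per-structure score used to validate the segmentation step.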

    Simulation Approaches to X-ray C-Arm-based Interventions

    Mobile C-Arm systems have enabled interventional spine procedures, such as facet joint injections, to be performed minimally invasively under X-ray or fluoroscopy guidance. The downside to these procedures is the radiation exposure to which the patient and medical staff are subject, which can vary greatly depending on the procedure as well as the skill and experience of the team. Standard training for these procedures involves the use of a physical C-Arm with real X-rays, either on cadavers or via an apprenticeship-based program. Many guidance systems have been proposed in the literature which aim to reduce intraoperative radiation exposure by supplementing the X-ray images with digitally reconstructed radiographs (DRRs). These systems have shown promising results in the lab but have proven difficult to integrate into the clinical workflow due to costly equipment, safety protocols, and difficulties in maintaining patient registration. Another approach for reducing radiation exposure is to provide better hands-on training for C-Arm positioning through a pre-operative simulator. Such simulators have been proposed in the literature but still require access to a physical C-Arm or costly tracking equipment. With the goal of providing hands-on, accessible training for C-Arm positioning tasks, we have developed a miniature 3D-printed C-Arm simulator using accelerometer-based tracking. The system comprises a software application that interfaces with the accelerometers and provides a real-time DRR display based on the position of the C-Arm source. We conducted a user study, consisting of control and experimental groups, to evaluate the efficacy of the system as a training tool. The experimental group achieved significantly lower procedure times and higher positioning accuracy than the control group. The system was evaluated positively for its use in medical education via a 5-point Likert-scale questionnaire.
C-Arm positioning is a highly visual task, requiring a spatial mapping from the 2D fluoroscopic image to the 3D C-Arm and patient. Because limited physical interaction is required, this task is well suited to training in Virtual Reality (VR), eliminating the need for a physical C-Arm. To this end, we extended the system presented in chapter 2 to an entirely virtual approach. We implemented the system as a 3DSlicer module and conducted a pilot study for preliminary evaluation. The reception was overall positive, with users expressing enthusiasm towards training in VR, but also highlighting limitations and potential areas of improvement of the system.
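
The two building blocks of an accelerometer-tracked DRR display can be illustrated with small sketches: gravity-referenced tilt estimation from a static accelerometer reading, and a parallel-ray DRR from a CT volume. Both are simplifications (a real simulator casts perspective rays posed by the tracked source, and the HU-to-attenuation mapping below is a common textbook approximation, not this system's calibration):

```python
import numpy as np

def tilt_from_accelerometer(ax, ay, az):
    """Estimate C-Arm roll and pitch (degrees) from a static
    accelerometer reading, using gravity as the reference direction."""
    roll = np.degrees(np.arctan2(ay, az))
    pitch = np.degrees(np.arctan2(-ax, np.hypot(ay, az)))
    return roll, pitch

def simple_drr(volume, mu_water=0.2):
    """Digitally reconstructed radiograph by parallel-ray integration
    along axis 0. volume holds CT values in HU; they are mapped to
    linear attenuation before Beer-Lambert integration."""
    mu = mu_water * (1.0 + volume / 1000.0)   # HU -> attenuation (approx.)
    mu = np.clip(mu, 0.0, None)
    path = mu.sum(axis=0)                      # per-pixel line integrals
    return np.exp(-path)                       # transmitted intensity
```

In the simulator, each new accelerometer reading re-poses the virtual source, the volume is resampled along the new ray directions, and the DRR is re-rendered, giving the trainee instant fluoroscopy-like feedback without radiation.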