
    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have dramatically improved image accuracy, providing useful information previously unavailable to clinicians. In the surgical context, intraoperative imaging provides crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. All of these requisites must be satisfied to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.

    Multi-Modality Breast MRI Segmentation Using nn-UNet for Preoperative Planning of Robotic Surgery Navigation

    Segmentation of the chest region and breast tissues is essential for surgery planning and navigation. This paper proposes the foundation for preoperative segmentation based on two cascaded deep neural network (DNN) architectures built on the state-of-the-art nnU-Net. Additionally, this study introduces a polyvinyl alcohol cryogel (PVA-C) breast phantom, built from the automated DNN segmentations, enabling experiments with a navigation system for robotic breast surgery. Multi-modality breast MRI datasets of T2W and STIR images were acquired from 10 patients. Segmentation was evaluated using the Dice Similarity Coefficient (DSC), segmentation accuracy, sensitivity, and specificity. First, single-class labeling was used to segment the breast region. This was then employed as input for three-class labeling to segment fat, fibroglandular tissue (FGT), and tumorous lesions. The first architecture achieved a DSC of 0.95, while the second achieved 0.95, 0.83, and 0.41 for the fat, FGT, and tumor classes, respectively.
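The Dice Similarity Coefficient used to evaluate these segmentations has a compact definition: twice the overlap of the two masks divided by their total size. A minimal NumPy sketch (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND truth| / (|pred| + |truth|), in [0, 1].
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * intersection / total if total > 0 else 1.0
```

A class-wise score (fat, FGT, tumor) is obtained by computing the coefficient once per label mask.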

    Registration accuracy of the optical navigation system for image-guided surgery

    Abstract. During the last decades, image-guided surgery has become a widely used approach in medical operations, providing a new opportunity to perform surgical procedures with higher accuracy and reliability than before. In image-guided surgery, a navigation system is used to track the instrument's location and orientation during the operation. These navigation systems can track the instrument in several ways, the most common of which are optical, mechanical, and electromagnetic tracking. Navigation systems are used primarily in surgical operations in the head and spine area. For this reason, it is essential to know the registration accuracy, and thus the navigational accuracy, of the navigation system, and how different registration methods might affect them. In this research, the registration accuracy of an optical navigation system is investigated using a head phantom: the coordinate values of holes in its surface are measured during navigation after different registration scenarios. Reference points are determined from computed tomography images of the head phantom. The absolute differences between the measured points and the corresponding reference points are calculated, and the results are illustrated using bar graphs and three-dimensional point clouds; MATLAB is used to analyze and present the results. The results show that registration accuracy, and thus also navigation accuracy, is primarily affected by how the first three registration points are determined for the navigation system at the beginning of registration. This should be considered in future applications where the navigation system is used in image-guided surgery.
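The error analysis described in this abstract, absolute differences between navigated measurements and CT-derived reference points, reduces to per-point Euclidean distances plus summary statistics. A hypothetical NumPy sketch (the thesis itself used MATLAB; function and variable names here are illustrative):

```python
import numpy as np

def registration_errors(measured, reference):
    """Per-point Euclidean distances between measured and reference
    coordinates (both N x 3 arrays), plus mean and maximum error.
    """
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Row-wise Euclidean norm of the coordinate differences.
    errors = np.linalg.norm(measured - reference, axis=1)
    return errors, errors.mean(), errors.max()
```

The per-point error vector is what would feed the bar graphs, and the raw point sets the 3D point clouds, described in the abstract.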

    Image-guided liver surgery: intraoperative projection of computed tomography images utilizing tracked ultrasound

    Abstract. Background: Ultrasound (US) is the most commonly used form of image guidance during liver surgery. However, the use of navigation systems that incorporate instrument tracking and three-dimensional visualization of preoperative tomography is increasing. This report describes an initial experience using an image-guidance system with navigated US. Methods: An image-guidance system was used in a total of 50 open liver procedures to aid in localization and targeting of liver lesions. An optical tracking system was employed to localize surgical instruments. Customized hardware and calibration of the US transducer were required. The results of three procedures are highlighted in order to illustrate specific navigation techniques that proved useful in the broader patient cohort. Results: Over a 7-month span, the navigation system assisted in completing 21 (42%) of the procedures, and tracked US alone provided additional information required to perform resection or ablation in six procedures (12%). Average registration time during the three illustrative procedures was <1 min. Average set-up time was approximately 5 min per procedure. Conclusions: The Explorer™ Liver guidance system represents novel technology that continues to evolve. This initial experience indicates that image guidance is valuable in certain procedures, specifically in cases in which difficult anatomy, or tumour location or echogenicity, limit the usefulness of traditional guidance methods.

    Neurosurgical Ultrasound Pose Estimation Using Image-Based Registration and Sensor Fusion - A Feasibility Study

    Modern neurosurgical procedures often rely on computer-assisted real-time guidance using multiple medical imaging modalities. State-of-the-art commercial products enable the fusion of pre-operative with intra-operative images (e.g., magnetic resonance [MR] with ultrasound [US] images), as well as the on-screen visualization of procedures in progress. In so doing, US images can be employed as a template to which pre-operative images can be registered, to correct for anatomical changes, to provide live-image feedback, and consequently to improve confidence when making resection margin decisions near eloquent regions during tumour surgery. In spite of the potential for tracked ultrasound to improve many neurosurgical procedures, it is not widely used. State-of-the-art systems are handicapped by optical tracking's need for consistent line-of-sight, the need to keep tracked rigid bodies clean and rigidly fixed, and the requirement for a calibration workflow. The goal of this work is to improve the value offered by co-registered ultrasound images without the workflow drawbacks of conventional systems. The novel work in this thesis includes: the exploration and development of a GPU-enabled 2D-3D multi-modal registration algorithm based on the existing LC2 metric; and the use of this registration algorithm in the context of a sensor- and image-fusion algorithm. The work presented here is a motivating step in a vision towards a heterogeneous tracking framework for image-guided interventions where the knowledge from intraoperative imaging, pre-operative imaging, and (potentially disjoint) wireless sensors in the surgical field is seamlessly integrated for the benefit of the surgeon. The technology described in this thesis, inspired by advances in robot localization, demonstrates how inaccurate pose data from disjoint sources can produce a localization system greater than the sum of its parts.
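Inverse-variance weighting is one classical way to combine inaccurate estimates from disjoint sources so that the fused result is better than either input, in the spirit of the sensor-fusion idea above. This one-dimensional sketch is illustrative only and is not the thesis's LC2-based algorithm:

```python
def fuse_estimates(x1, var1, x2, var2):
    """Inverse-variance weighted fusion of two independent scalar
    estimates of the same quantity. The fused variance is smaller
    than either input variance, i.e. fusion never hurts on average.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var
```

The same principle generalizes to full 6-DOF poses (e.g. via a Kalman filter), which is how "a localization system greater than the sum of its parts" can emerge from individually noisy trackers.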

    Image-guidance in endoscopic pituitary surgery: an in-silico study of errors involved in tracker-based techniques

    Background: Endoscopic endonasal surgery is an established minimally invasive technique for resecting pituitary adenomas. However, understanding orientation and identifying critical neurovascular structures in this anatomically dense region can be challenging. In clinical practice, commercial navigation systems use a tracked pointer for guidance. Augmented Reality (AR) is an emerging technology used for surgical guidance. It can be tracker based or vision based, but neither is widely used in pituitary surgery. Methods: This pre-clinical study aims to assess the accuracy of tracker-based navigation systems, including those that allow for AR. Two setups were used to conduct simulations: (1) the standard pointer setup, tracked by an infrared camera; and (2) the endoscope setup that allows for AR, using reflective markers on the end of the endoscope, tracked by infrared cameras. The error sources were estimated by calculating the Euclidean distance between a point's true location and its location after passing through the noisy system. A phantom study was then conducted to verify the in-silico simulation results and show a working example of image-based navigation errors in current methodologies. Results: The errors of the tracked pointer and tracked endoscope simulations were 1.7 and 2.5 mm, respectively. The phantom study showed errors of 2.14 and 3.21 mm for the tracked pointer and tracked endoscope setups, respectively. Discussion: In pituitary surgery, precise identification of neighboring structures is crucial for success. However, our simulations reveal that the errors of tracked approaches were too large to meet the fine error margins required for pituitary surgery. In order to achieve the required accuracy, we would need much more accurate tracking, better calibration, and improved registration techniques.
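The simulation methodology, perturbing a point with tracking noise and measuring the Euclidean distance to its true location, can be illustrated with a small Monte Carlo sketch. This assumes isotropic Gaussian noise and is not the study's actual simulation code:

```python
import numpy as np

def simulated_tre(point, noise_sigma, n_trials=10000, seed=0):
    """Monte Carlo estimate of target registration error: perturb a
    3D point with isotropic Gaussian noise of standard deviation
    noise_sigma and average the Euclidean distance to the truth.
    """
    rng = np.random.default_rng(seed)
    # n_trials noisy observations of the same point.
    noisy = point + rng.normal(0.0, noise_sigma, size=(n_trials, 3))
    return np.linalg.norm(noisy - point, axis=1).mean()
```

A full simulation of the pointer or endoscope setup would chain several such noisy transforms (marker jitter, calibration error, registration error) before measuring the end-to-end distance, which is why the reported errors exceed any single noise source.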

    Optimization of craniosynostosis surgery: virtual planning, intraoperative 3D photography and surgical navigation

    International Mention in the doctoral degree. Craniosynostosis is a congenital defect defined as the premature fusion of one or more cranial sutures. This fusion leads to growth restriction and deformation of the cranium, caused by compensatory expansion parallel to the fused sutures. Surgical correction is the preferred treatment in most cases, excising the fused sutures to normalize cranial shape. Although multiple technological advancements have arisen in the surgical management of craniosynostosis, interventional planning and surgical correction are still highly dependent on the subjective assessment and artistic judgment of craniofacial surgeons. There is therefore high variability in individual surgeon performance and, thus, in surgical outcomes. The main objective of this thesis was to explore different approaches to improve the surgical management of craniosynostosis by reducing subjectivity at all stages of the process, from the preoperative virtual planning phase to the intraoperative performance. First, we developed a novel framework for automatic planning of craniosynostosis surgery that enables: calculating a patient-specific normative reference shape to target, estimating optimal bone fragments for remodeling, and computing the most appropriate configuration of fragments to achieve the desired target cranial shape. Our results showed that automatic plans were accurate and achieved adequate overcorrection with respect to normative morphology. Surgeons' feedback indicated that the integration of this technology could increase the accuracy and reduce the duration of the preoperative planning phase. Second, we validated the use of hand-held 3D photography for intraoperative evaluation of the surgical outcome. The accuracy of this technology for 3D modeling and morphology quantification was evaluated using computed tomography imaging as the gold standard.
Our results demonstrated that 3D photography can be used to perform accurate 3D reconstructions of the anatomy during surgical interventions and to measure morphological metrics that provide feedback to the surgical team. This technology presents a valuable alternative to computed tomography imaging and can be easily integrated into the current surgical workflow to assist during the intervention. Third, we developed an intraoperative navigation system to provide real-time guidance during craniosynostosis surgeries. This system, based on optical tracking, enables recording the positions of remodeled bone fragments and comparing them with the target virtual surgical plan. Our navigation system relies on patient-specific surgical guides, which fit the patient's anatomy, to perform patient-to-image registration. In addition, our workflow does not rely on immobilization of the patient's head or invasive attachment of dynamic reference frames. After testing our system in five craniosynostosis surgeries, our results demonstrated high navigation accuracy and optimal surgical outcomes in all cases. Furthermore, the use of navigation did not substantially increase the operative time. Finally, we investigated the use of augmented reality technology as an alternative to navigation for surgical guidance in craniosynostosis surgery. We developed an augmented reality application to visualize the virtual surgical plan overlaid on the surgical field, indicating the predefined osteotomy locations and target bone fragment positions. Our results demonstrated that augmented reality provides sub-millimetric accuracy when guiding both the osteotomy and remodeling phases during open cranial vault remodeling. Surgeons' feedback indicated that this technology could be integrated into the current surgical workflow for the treatment of craniosynostosis. To conclude, in this thesis we evaluated multiple technological advancements to improve the surgical management of craniosynostosis.
The integration of these developments into the surgical workflow of craniosynostosis will positively impact surgical outcomes, increase the efficiency of surgical interventions, and reduce variability between surgeons and institutions. Doctoral Program in Biomedical Science and Technology, Universidad Carlos III de Madrid. President: Norberto Antonio Malpica González. Secretary: María Arrate Muñoz Barrutia. Committee member: Tamas Ung
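Patient-to-image registration with paired landmarks, of the kind performed via the patient-specific surgical guides described above, is classically solved in closed form with the Kabsch/SVD method. A minimal sketch under that assumption (not the authors' implementation):

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch algorithm): find the
    rotation R and translation t such that R @ source_i + t best
    matches target_i for paired N x 3 point sets.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

With at least three non-collinear landmark pairs this yields the transform mapping tracker coordinates into image coordinates; the residual distances after applying it are the fiducial registration errors.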

    Navigated Ultrasound in Laparoscopic Surgery


    Exploiting Temporal Image Information in Minimally Invasive Surgery

    Minimally invasive procedures rely on medical imaging instead of the surgeon's direct vision. While preoperative images can be used for surgical planning and navigation, once the surgeon arrives at the target site, real-time intraoperative imaging is needed. However, acquiring and interpreting these images can be challenging, and much of the rich temporal information present in these images is not visible. The goal of this thesis is to improve image guidance for minimally invasive surgery in two main areas: first, by showing how high-quality ultrasound video can be obtained by integrating an ultrasound transducer directly into delivery devices for beating-heart valve surgery; and second, by extracting hidden temporal information through video processing methods to help the surgeon localize important anatomical structures. Prototypes of delivery tools, with integrated ultrasound imaging, were developed for both transcatheter aortic valve implantation and mitral valve repair. These tools provided an on-site view that shows the tool-tissue interactions during valve repair. Additionally, augmented reality environments were used to add more anatomical context that aids in navigation and in interpreting the on-site video. Other procedures can be improved by extracting hidden temporal information from the intraoperative video. In ultrasound-guided epidural injections, dural pulsation provides a cue in finding a clear trajectory to the epidural space. By processing the video using extended Kalman filtering, subtle pulsations were automatically detected and visualized in real-time. A statistical framework for analyzing periodicity was developed based on dynamic linear modelling. In addition to detecting dural pulsation in lumbar spine ultrasound, this approach was used to image tissue perfusion in natural video and generate ventilation maps from free-breathing magnetic resonance imaging.
A second statistical method, based on spectral analysis of pixel intensity values, allowed blood flow to be detected directly from high-frequency B-mode ultrasound video. Finally, pulsatile cues in endoscopic video were enhanced through Eulerian video magnification to help localize critical vasculature. This approach shows particular promise in identifying the basilar artery in endoscopic third ventriculostomy and the prostatic artery in nerve-sparing prostatectomy. A real-time implementation was developed that processed full-resolution stereoscopic video on the da Vinci Surgical System.
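The core idea of spectral analysis of pixel intensities can be illustrated by locating the dominant temporal frequency of a single pixel's intensity trace. This is a simplified sketch using a plain FFT; the thesis describes dynamic linear modelling and extended Kalman filtering rather than this exact approach:

```python
import numpy as np

def dominant_frequency(intensity, fps):
    """Dominant temporal frequency (Hz) of a pixel-intensity trace,
    found as the peak of the FFT magnitude spectrum with the DC
    component removed by mean subtraction.
    """
    signal = np.asarray(intensity, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]
```

Applied per pixel, a map of such peaks (or of spectral power inside a cardiac band, roughly 1 to 2 Hz) highlights pulsatile structures such as vessels, which is the cue that Eulerian video magnification amplifies.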