
    Mixed-reality visualization environments to facilitate ultrasound-guided vascular access

    Ultrasound-guided needle insertions at the site of the internal jugular vein (IJV) are routinely performed to access the central venous system. Ultrasound-guided insertions still carry high rates of carotid artery puncture, as clinicians rely on 2D information to perform a 3D procedure. The limitations of 2D ultrasound guidance motivated the research question: “Do 3D ultrasound-based environments improve IJV needle insertion accuracy?”. We addressed this by developing advanced surgical navigation systems based on tracked surgical tools and ultrasound with various visualizations. The point-to-line ultrasound calibration enables the use of tracked ultrasound. We automated the fiducial localization required for this calibration method such that fiducials can be automatically localized within 0.25 mm of the manual equivalent. The point-to-line calibration obtained with both manual and automatic localizations produced average normalized distance errors of less than 1.5 mm from point targets. Another calibration method was developed that registers an optical tracking system and the VIVE Pro head-mounted display (HMD) tracking system with sub-millimetre and sub-degree accuracy compared to ground-truth values. This co-calibration enabled the development of an HMD needle navigation system, in which the calibrated ultrasound image and tracked models of the needle, needle trajectory, and probe were visualized in the HMD. In a phantom experiment, 31 clinicians had a 96% success rate using the HMD system compared to 70% for the ultrasound-only approach (p = 0.018). We developed a machine-learning-based vascular reconstruction pipeline that automatically returns accurate 3D reconstructions of the carotid artery and IJV given sequential tracked ultrasound images.
This reconstruction pipeline was used to develop a surgical navigation system in which tracked models of the needle, needle trajectory, and the 3D z-buffered vasculature from a phantom were visualized in a common coordinate system on a screen. This system improved insertion accuracy, with a 100% success rate compared to 70% under ultrasound guidance (p = 0.041) across 20 clinicians in the phantom experiment. Overall, accurate calibrations and machine learning algorithms enable the development of advanced 3D ultrasound systems for needle navigation, both in an immersive first-person perspective and on a screen, illustrating that 3D ultrasound environments outperform the 2D ultrasound guidance used clinically.
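The co-calibration between the optical tracker and the HMD's tracking system is, at its core, a rigid registration between two coordinate frames. The thesis abstract does not name the algorithm; a standard choice for such problems is paired-point rigid registration via SVD (the Kabsch/Arun method), sketched below under the assumption that corresponding 3D points have been measured in both frames:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ R @ src + t,
    computed from paired 3D points via the SVD-based Kabsch/Arun method."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

With enough well-spread point pairs and low measurement noise, this family of methods reaches the sub-millimetre, sub-degree regime reported above.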

    Technologies for Biomechanically-Informed Image Guidance of Laparoscopic Liver Surgery

    Laparoscopic surgery for liver resection has a number of medical advantages over open surgery, but also comes with inherent technical challenges. The surgeon only has a very limited field of view through the imaging modalities routinely employed intra-operatively, laparoscopic video and ultrasound, and the pneumoperitoneum required to create the operating space and to gain access to the organ can significantly deform and displace the liver from its pre-operative configuration. This can make relating what is visible intra-operatively to the pre-operative plan and inferring the location of sub-surface anatomy a very challenging task. Image-guidance systems can help overcome these challenges by updating the pre-operative plan to the situation in theatre and visualising it in relation to the position of surgical instruments. In this thesis, I present a series of contributions to a biomechanically-informed image-guidance system made during my PhD. The most recent is a pipeline for estimating the post-insufflation configuration of the liver by means of an algorithm that uses a database of segmented training images of patient abdomens in which the post-insufflation configuration of the liver is known. The pipeline comprises an algorithm for inter- and intra-subject registration of liver meshes by means of non-rigid spectral point-correspondence finding. My other contributions are more fundamental and less application-specific, and are all contained and made publicly available in the NiftySim open-source finite element modelling package.
Two of my contributions to NiftySim are of particular interest with regard to image guidance of laparoscopic liver surgery: 1) a novel general-purpose contact-modelling algorithm that can be used to simulate contact interactions between, e.g., the liver and surrounding anatomy; 2) membrane and shell elements that can be used to, e.g., simulate the Glisson capsule, which has been shown to significantly influence the organ’s measured stiffness.
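Spectral point-correspondence methods rest on the eigenvectors of a mesh's graph Laplacian, whose low-frequency modes are comparatively stable across subjects and can therefore be matched between meshes. The correspondence matching itself is beyond a short sketch; the building block below computes the Laplacian spectrum of a plain adjacency graph (not NiftySim's actual mesh types, which are C++):

```python
import numpy as np

def laplacian_modes(n_vertices, edges, k=4):
    """Return the k smallest eigenvalues and eigenvectors of the
    graph Laplacian L = D - A of an undirected mesh-connectivity graph."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0              # unweighted adjacency
    L = np.diag(A.sum(axis=1)) - A           # degree matrix minus adjacency
    vals, vecs = np.linalg.eigh(L)           # eigh returns ascending order
    return vals[:k], vecs[:, :k]
```

The smallest eigenvalue is always zero (constant mode); the next few eigenvectors give the smooth "shape signatures" that spectral registration aligns between subjects.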

    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have resulted in a dramatic improvement in image accuracy, now capable of providing useful information previously not available to clinicians. In the surgical context, intraoperative imaging provides crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. All of these requisites must be satisfied to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems in the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of the existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective usage of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.
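The abstract does not detail its motion-compensation method; a common baseline for following quasi-periodic abdominal motion in ultrasound is estimating the frame-to-frame displacement that maximises normalised cross-correlation between image patches. A minimal 1D sketch of that idea (real systems match 2D patches, often at sub-pixel precision):

```python
import numpy as np

def estimate_shift(ref, cur, max_shift):
    """Estimate the integer displacement of `cur` relative to `ref`
    (cur[i] ≈ ref[i - shift]) by maximising normalised correlation
    over candidate shifts in [-max_shift, max_shift]."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        # overlapping regions of the two signals under candidate shift s
        b = cur[max(0, s): len(cur) + min(0, s)]
        a = ref[max(0, -s): len(ref) + min(0, -s)]
        score = float(np.dot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best, best_score = s, score
    return best
```

The recovered shift can then drive the robotic platform to keep the therapy target centred despite respiratory motion.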

    Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning

    Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or in a semi-automatic way, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. This automation of distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education. In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem’s high computational demands and the results’ accuracy. After finding a baseline method and having enlarged the dataset, we set out to eliminate the most prevalent types of error. To do so, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, separating the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth of additional pixels in check. Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans in an end-to-end-trained manner, quickly enough to be used in interactive software.
Our algorithm has been included in our group's virtual-reality medical image visualisation software SpectoVR, with the plan to be used as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
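The abstract does not spell out the exact BEM merging rule; one plausible reading, treated here purely as an assumption, is that the binary bone/non-bone prediction gates the multi-class labels: voxels the binary network calls non-bone become background, and voxels it calls bone but the multi-class network left as background receive their most probable bone class. A sketch of that merge:

```python
import numpy as np

def bem_merge(class_probs, bone_prob, thresh=0.5):
    """Merge a multi-class prediction (class 0 = background) with a
    binary bone/non-bone probability, BEM-style (assumed merge rule).
    class_probs: (..., n_classes) softmax output; bone_prob: (...)."""
    labels = class_probs.argmax(axis=-1)
    is_bone = bone_prob >= thresh
    labels[~is_bone] = 0                                  # binary net vetoes bone
    best_bone = class_probs[..., 1:].argmax(axis=-1) + 1  # best non-background class
    labels = np.where(is_bone & (labels == 0), best_bone, labels)
    return labels
```

The separation lets each network specialise: the binary task fixes where bone is, the multi-class task only decides which bone it is.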

    Interfaces for Modular Surgical Planning and Assistance Systems

    Modern surgery of the 21st century relies in many aspects on computers or, in a wider sense, digital data processing. Department administration, OR scheduling, billing, and - with increasing pervasion - patient data management are performed with the aid of so-called Surgical Information Systems (SIS) or, more generally, Hospital Information Systems (HIS). Computer Assisted Surgery (CAS) summarizes techniques which assist a surgeon in the preparation and conduction of surgical interventions. Today still predominantly based on radiology images, these techniques include the preoperative determination of an optimal surgical strategy and intraoperative systems which aim at increasing the accuracy of surgical manipulations. CAS is a relatively young field of computer science. One of the unsolved "teething troubles" of CAS is the absence of technical standards for the interconnectivity of CAS systems. Current CAS systems are usually "islands of information" with no connection to other devices within the operating room or hospital-wide information systems. Several workshop reports and individual publications point out that this situation leads to ergonomic, logistic, and economic limitations in hospital work. Perioperative processes are prolonged by the manual installation and configuration of an increasing number of technical devices. Intraoperatively, a large amount of the surgeons' attention is absorbed by the requirement to monitor and operate systems. The need for open infrastructures which enable the integration of CAS devices from different vendors in order to exchange information as well as commands among these devices through a network has been identified by numerous experts with backgrounds in medicine as well as engineering. This thesis contains two approaches to the integration of CAS systems: - For perioperative data exchange, the specification of new data structures as an amendment to the existing DICOM standard for radiology image management is presented.
The extension of DICOM towards surgical application allows for the seamless integration of surgical planning and reporting systems into DICOM-based Picture Archiving and Communication Systems (PACS), as installed in most hospitals for the exchange and long-term archival of patient images and image-related patient data. - For the integration of intraoperatively used CAS devices, such as navigation systems, video image sources, or biosensors, the concept of a surgical middleware is presented. A C++ class library, the TiCoLi, is presented which facilitates the configuration of ad-hoc networks among the modules of a distributed CAS system as well as the exchange of data streams, singular data objects, and commands between these modules. The TiCoLi is the first software library for a surgical field of application to implement all of these services. To demonstrate the suitability of the presented specifications and their implementation, two modular CAS applications are presented which utilize the proposed DICOM extensions for perioperative exchange of surgical planning data as well as the TiCoLi for establishing an intraoperative network of autonomous, yet not independent, CAS modules.
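TiCoLi itself is a C++ library; at the heart of any such middleware's exchange of streams, data objects, and commands lies the framing of typed messages on a byte stream. The Python sketch below shows length-prefixed framing with a message-type field - illustrative only, not TiCoLi's actual wire format:

```python
import struct

HEADER = struct.Struct("!BI")  # 1-byte message type, 4-byte payload length

def frame(msg_type, payload):
    """Serialise one message: header (type, length) followed by payload bytes."""
    return HEADER.pack(msg_type, len(payload)) + payload

def unframe(buffer):
    """Parse as many complete messages as the buffer holds; return
    (messages, leftover_bytes) so a stream reader can resume mid-message."""
    messages, offset = [], 0
    while offset + HEADER.size <= len(buffer):
        msg_type, length = HEADER.unpack_from(buffer, offset)
        if offset + HEADER.size + length > len(buffer):
            break                      # incomplete message; wait for more data
        start = offset + HEADER.size
        messages.append((msg_type, buffer[start:start + length]))
        offset = start + length
    return messages, buffer[offset:]
```

Distinct type codes would distinguish continuous data-stream packets from singular data objects and commands, mirroring the three service classes named above.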

    Improving Daily Clinical Practice with Abdominal Patient-Specific 3D Models

    This thesis proposes methods and procedures to proficiently introduce patient-specific 3D models into daily clinical practice for the diagnosis and treatment of abdominal diseases. The objective of the work is to provide and visualize quantitative geometrical and topological information on the anatomy of interest, and to develop systems that improve radiology and surgery. 3D visualization drastically simplifies the interpretation of medical images and benefits both the diagnostic and the surgical planning phases. Further advantages come from registering virtual pre-operative information (3D models) with real intra-operative information (patient and surgical instruments). The surgeon can use mixed-reality systems that reveal covered structures before they are reached, surgical navigators that show the scene (anatomy and instruments) from different points of view, and smart mechatronic devices which, knowing the anatomy, provide active assistance. All these aspects benefit safety, efficiency, and financial resources for the physician, the patient, and the healthcare system alike. The entire process, from volumetric radiological image acquisition up to the use of 3D anatomical models inside the operating room, has been studied and specific applications have been developed. A segmentation procedure has been designed around acquisition protocols commonly used in radiology departments, and a software tool that produces efficient 3D models has been implemented and tested. The alignment problem has been investigated by examining the various sources of error during image acquisition in the radiology department and during the execution of the intervention. A rigid-body registration procedure compatible with the surgical environment has been defined and implemented.
The procedure has been integrated into a surgical navigation system and serves as an initial registration for more accurate alignment methods based on deformable approaches. Monoscopic and stereoscopic 3D-localization machine vision routines, using laparoscopic and/or generic camera images, have been implemented to obtain intra-operative information that can be used to model abdominal deformations. Using this information for fusion and registration purposes further enhances the potential of computer-assisted surgery. In particular, precise alignment between virtual and real anatomy for mixed-reality purposes, and the development of tracker-free navigation systems, have been achieved by processing video images and analytically adapting the virtual camera to the real camera. Clinical tests demonstrating the usability of the proposed solutions are reported. Test results, and the appreciation expressed by radiologists and surgeons for the proposed prototypes, encourage their integration into daily clinical practice and future developments.
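Adapting the virtual camera to the real camera essentially means rendering the 3D models with the real camera's intrinsic parameters, so that virtual anatomy projects exactly where the laparoscope would see it. A minimal pinhole-projection sketch; the parameter values used in any example would come from camera calibration, not from this thesis:

```python
def project(point, fx, fy, cx, cy):
    """Project a 3D point in camera coordinates (z > 0) to pixel
    coordinates (u, v) with the pinhole camera model:
    u = fx * x / z + cx,  v = fy * y / z + cy."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * x / z + cx, fy * y / z + cy)
```

Rendering the virtual scene with the same fx, fy, cx, cy (and pose) as the real camera is what makes the overlaid anatomy line up with the video image.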

    On-the-fly dense 3D surface reconstruction for geometry-aware augmented reality.

    Augmented Reality (AR) is an emerging technology that makes seamless connections between virtual space and the real world by superimposing computer-generated information onto the real-world environment. AR can provide additional information in a more intuitive and natural way than any other information-delivery method that humans have invented. Camera tracking is the enabling technology for AR and has been well studied over the last few decades. Beyond tracking, sensing and perception of the surrounding environment are also very important and challenging problems. Although existing hardware solutions such as Microsoft Kinect and HoloLens can sense and build the environmental structure, they are either too bulky or too expensive for AR. In this thesis, the challenging real-time dense 3D surface reconstruction technologies are studied and reformulated to move basic position-aware AR towards geometry-aware AR, with an outlook to context-aware AR. We initially propose to reconstruct the dense environmental surface using the sparse points from Simultaneous Localisation and Mapping (SLAM), but this approach is prone to failure in challenging Minimally Invasive Surgery (MIS) scenes featuring deformation and surgical smoke. We subsequently adopt stereo vision with SLAM for more accurate and robust results. Building on the recent success of deep learning, we present learning-based single-image reconstruction and achieve state-of-the-art results. Moreover, we propose context-aware AR, one step further from purely geometry-aware AR, towards high-level conceptual interaction modelling in complex AR environments for an enhanced user experience. Finally, a learning-based smoke-removal method is proposed to ensure accurate and robust reconstruction under extreme conditions such as the presence of surgical smoke.
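Stereo reconstruction recovers depth from the horizontal disparity between a rectified image pair: Z = f·B/d for focal length f, baseline B, and disparity d. A minimal sketch with sum-of-absolute-differences block matching on a single scanline; the scanline values and camera parameters below are toy numbers, not data from the MIS scenes:

```python
def disparity_sad(left, right, x, window, max_disp):
    """For pixel x on a rectified scanline pair, find the disparity d
    (x_left - x_right) minimising the sum of absolute differences
    over a window of half-width `window`."""
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):
        if x - d - window < 0 or x + window >= len(left):
            continue  # window would fall outside either scanline
        cost = sum(abs(left[x + k] - right[x - d + k])
                   for k in range(-window, window + 1))
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth(f, baseline, d):
    """Pinhole stereo depth from disparity: Z = f * B / d."""
    return f * baseline / d
```

Dense reconstruction repeats this for every pixel; SLAM supplies the camera poses that fuse the per-frame depth maps into one surface.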

    Coronary Artery Segmentation and Motion Modelling

    Conventional coronary artery bypass surgery requires an invasive sternotomy and the use of a cardiopulmonary bypass, which leads to a long recovery period and a high risk of infection. Totally endoscopic coronary artery bypass (TECAB) surgery, based on image-guided robotic surgical approaches, has been developed to allow clinicians to conduct the bypass surgery off-pump with only three pinhole incisions in the chest cavity, through which two robotic arms and one stereo endoscopic camera are inserted. However, the restricted field of view of the stereo endoscopic images can lead to vessel misidentification and coronary artery mis-localization, resulting in 20-30% conversion rates from TECAB surgery to the conventional approach. We have constructed patient-specific 3D + time coronary artery and left ventricle motion models from preoperative 4D Computed Tomography Angiography (CTA) scans. By temporally and spatially aligning this model with the intraoperative endoscopic views of the patient's beating heart, this work helps the surgeon identify and locate the correct coronaries during TECAB procedures, and thus has the prospect of reducing the conversion rate from TECAB to conventional coronary bypass procedures. This thesis mainly focuses on designing segmentation and motion-tracking methods for the coronary arteries in order to build pre-operative patient-specific motion models. Various vessel centreline extraction and lumen segmentation algorithms are presented, including intensity-based approaches, a geometric model-matching method, and a morphology-based method. A probabilistic atlas of the coronary arteries is formed from a group of subjects to facilitate the vascular segmentation and registration procedures. Non-rigid registration frameworks based on a free-form deformation model and on multi-level multi-channel large-deformation diffeomorphic metric mapping are proposed to track the coronary motion.
The methods are applied to 4D CTA images acquired from various groups of patients and quantitatively evaluated.
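Among intensity-based centreline extraction approaches, a classic formulation finds the centreline as the minimal-cost path between two endpoints, where the cost is low inside bright vessel lumen (e.g. inverse vesselness). A stdlib-only Dijkstra sketch on a toy 2D cost grid; real pipelines work on 3D vesselness volumes:

```python
import heapq

def minimal_path(cost, start, goal):
    """Dijkstra minimal-cost path on a 2D cost grid (4-connected),
    where cost[r][c] is the price of entering cell (r, c);
    returns the list of (row, col) cells from start to goal."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, heap = {}, [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue                   # stale queue entry
        r, c = node
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

With cost derived from inverted vessel intensity, the minimal path naturally hugs the bright lumen and yields the centreline.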

    Intracardiac Ultrasound Guided Systems for Transcatheter Cardiac Interventions

    Transcatheter cardiac interventions are characterized by their percutaneous nature, increased patient safety, and short hospitalization times. Transcatheter procedures involve two major stages: navigation towards the target site and the positioning of tools to deliver the therapy, during which the interventionalists face the challenge of visualizing the anatomy and the relative position of tools such as a guidewire. Fluoroscopic and transesophageal echocardiography (TEE) imaging are the most used techniques in cardiac procedures; however, they carry the disadvantages of radiation exposure and suboptimal imaging. This work explores the potential of intracardiac echocardiography (ICE) within an image guidance system (IGS) to facilitate the two stages of cardiac interventions. First, a novel 2.5D side-firing, conical Foresight ICE probe (Conavi Medical Inc., Toronto) is characterized, calibrated, and tracked using an electromagnetic sensor. The results indicate an acceptable tracking accuracy within certain limitations. Next, an IGS is developed for navigating the vessels without fluoroscopy. A forward-looking, tracked ICE probe is used to reconstruct the vessel on a phantom which mimics the ultrasound appearance of an animal vena cava. Deep learning methods are employed to segment the complex vessel geometry from ICE imaging for the first time. The ICE-reconstructed vessel showed a clinically acceptable range of accuracy. Finally, a guidance system was developed to facilitate the positioning of tools during a tricuspid valve repair. The designed system potentially facilitates the positioning of the TriClip at the coaptation gap by pre-mapping the corresponding site of regurgitation in 3D tracking space.
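Mapping a pixel segmented in the ICE image into the tracker's 3D space is a chain of homogeneous transforms: pixel scaling to millimetres, the probe calibration (image to sensor), then the tracked sensor pose (sensor to tracker). A sketch of that chain; the spacing and matrices in any example are made-up values, since the real ones come from the calibration and the electromagnetic tracker stream:

```python
import numpy as np

def to_tracker(pixel_uv, spacing, T_image_to_sensor, T_sensor_to_tracker):
    """Map an ultrasound pixel (u, v) to tracker coordinates:
    scale to mm on the image plane, then apply the calibration
    transform followed by the tracked sensor pose (both 4x4 homogeneous)."""
    u, v = pixel_uv
    p = np.array([u * spacing[0], v * spacing[1], 0.0, 1.0])  # homogeneous point
    return (T_sensor_to_tracker @ T_image_to_sensor @ p)[:3]
```

Applying this to every contour point of the deep-learning segmentation, frame by frame, is what lets the tracked sweep be fused into a 3D vessel reconstruction.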