
    Three-dimensional ultrasound image-guided robotic system for accurate microwave coagulation of malignant liver tumours

    Background The further application of conventional ultrasound (US) image-guided microwave (MW) ablation of liver cancer is often limited by two-dimensional (2D) imaging, inaccurate needle placement and the resulting skill requirement. A three-dimensional (3D) image-guided robot-assisted system provides an appealing alternative, enabling the physician to perform consistent, accurate therapy with improved treatment effectiveness. Methods Our robotic system integrates an imaging module, a needle-driving robot, a MW thermal field simulation module, and surgical navigation software in a practical and user-friendly manner. The robot executes precise needle placement based on the 3D model reconstructed from freehand-tracked 2D B-scans. A qualitative slice guidance method for fine registration is introduced to reduce the placement error caused by target motion. By incorporating the 3D MW specific absorption rate (SAR) model into the heat transfer equation, the MW thermal field simulation module determines the MW power level and coagulation time for improved ablation therapy. Two types of wrist were developed for the robot: a ‘remote centre of motion’ (RCM) wrist and a non-RCM wrist, the latter being preferred in real applications. Results The needle placement accuracies were < 3 mm for both wrists in the mechanical phantom experiment. The target accuracy for the robot with the RCM wrist improved to 1.6 ± 1.0 mm when real-time 2D US feedback was used in the artificial-tissue phantom experiment. Using the slice guidance method, the robot with the non-RCM wrist achieved an accuracy of 1.8 ± 0.9 mm in the ex vivo experiment, even when target motion was introduced. In the thermal field experiment, a 5.6% relative mean error was observed between the experimentally coagulated necrosis volume and the simulation result.
Conclusion The proposed robotic system holds promise to enhance the clinical performance of percutaneous MW ablation of malignant liver tumours. Copyright © 2010 John Wiley & Sons, Ltd.
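The thermal field simulation above couples the SAR model to a heat transfer equation. The abstract does not give the exact formulation, but such simulations are commonly based on the Pennes bioheat equation; the generic form below is an assumption, not taken from the paper:

```latex
\rho c \frac{\partial T}{\partial t}
  = \nabla \cdot \left( k \nabla T \right)
  + \rho\,\mathrm{SAR}
  - \rho_b c_b \omega_b \left( T - T_b \right)
  + Q_m
```

Here \(T\) is tissue temperature; \(\rho\), \(c\) and \(k\) are tissue density, specific heat and thermal conductivity; \(\rho\,\mathrm{SAR}\) is the deposited microwave power density; the perfusion term uses blood density \(\rho_b\), specific heat \(c_b\), perfusion rate \(\omega_b\) and arterial temperature \(T_b\); and \(Q_m\) is metabolic heat. The predicted coagulation volume is then typically taken as the region exceeding a chosen thermal-dose or temperature threshold.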

    Development of a Surgical Assistance System for Guiding Transcatheter Aortic Valve Implantation

    The development of image-guided interventional systems has grown rapidly in recent years. These systems have become an essential part of modern minimally invasive surgical procedures, especially in cardiac surgery. Transcatheter aortic valve implantation (TAVI) is a recently developed surgical technique to treat severe aortic valve stenosis in elderly and high-risk patients. The placement of the stented aortic valve prosthesis is crucial and is typically performed under live 2D fluoroscopy guidance. To assist the placement of the prosthesis during the surgical procedure, a new fluoroscopy-based TAVI assistance system has been developed. The assistance system integrates a 3D geometrical aortic mesh model and anatomical valve landmarks with live 2D fluoroscopic images. The 3D aortic mesh model and landmarks are reconstructed from an interventional angiographic and fluoroscopic C-arm CT system, and a target area for valve implantation is automatically estimated from these aortic mesh models. Based on a template-based tracking approach, the overlay of the visualized 3D aortic mesh model, landmarks and target implantation area onto the fluoroscopic images is updated by approximating the aortic root motion from the motion of a pigtail catheter without contrast agent. A rigid intensity-based registration method is also used to continuously track the aortic root motion in the presence of contrast agent. Moreover, the aortic valve prosthesis is tracked in the fluoroscopic images to guide the surgeon in placing the prosthesis into the estimated target area of implantation. An interactive graphical user interface allows the surgeon to initialize the system algorithms, control the visualization of the guidance results, and manually correct overlay errors if needed. Retrospective experiments were carried out on several patient datasets from the clinical routine of TAVI in a hybrid operating room.
The maximum displacement errors were small, both for the dynamic overlay of the aortic mesh models and for the tracking of the prosthesis, and were within clinically accepted ranges. High success rates of the developed assistance system were obtained for all tested patient datasets. The results show that the developed surgical assistance system provides a helpful tool for the surgeon by automatically defining the desired placement position of the prosthesis during the TAVI procedure.
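The template-based tracking of the pigtail catheter described above can be sketched, in highly simplified form, as a normalized cross-correlation search over the frame. The function below is an illustrative brute-force NumPy implementation for single-channel images, not the system's actual algorithm:

```python
import numpy as np

def track_template(frame, template):
    """Return the (row, col) offset where `template` best matches `frame`,
    scored by zero-mean normalized cross-correlation (brute-force search)."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum() * (t * t).sum())
            # Flat patches have zero variance; give them the worst score.
            score = (p * t).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

In practice the search would be restricted to a small window around the previous catheter position and run on every fluoroscopic frame; libraries such as OpenCV provide optimized equivalents.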

    A Survey on 3D Ultrasound Reconstruction Techniques

    This book chapter discusses 3D ultrasound reconstruction and visualization. First, the various types of 3D ultrasound system are reviewed: mechanical, 2D array, position-tracked freehand, and untracked freehand. Second, the 3D ultrasound reconstruction pipeline used by existing systems is discussed, covering data acquisition, data preprocessing, the reconstruction method and 3D visualization, with emphasis on the last two. The reconstruction methods include pixel-based, volume-based, and function-based methods, presented together with their benefits and drawbacks. For 3D visualization, methods such as multiplanar reformatting, volume rendering, and surface rendering are presented. Lastly, applications in the medical field are reviewed as well.
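As an illustration of the pixel-based reconstruction methods surveyed above, the sketch below implements a minimal pixel nearest-neighbour (PNN) bin-filling step. It assumes the tracked 2D pixels have already been transformed into 3D world coordinates; the hole-filling pass that usually follows is omitted:

```python
import numpy as np

def pnn_reconstruct(points, values, grid_shape, voxel_size):
    """Pixel nearest-neighbour bin filling: accumulate each tracked pixel
    (given in world coordinates) into its nearest voxel; average multiple hits."""
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape, dtype=int)
    idx = np.floor(np.asarray(points) / voxel_size).astype(int)
    for (i, j, k), v in zip(idx, values):
        if (0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]
                and 0 <= k < grid_shape[2]):
            acc[i, j, k] += v
            cnt[i, j, k] += 1
    vol = np.zeros(grid_shape)
    filled = cnt > 0
    vol[filled] = acc[filled] / cnt[filled]  # mean of contributing pixels
    return vol
```

Voxels left empty by the sweep would then be interpolated from filled neighbours, which is where the pixel-based methods chiefly differ.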

    Ultrasonic Needle Tracking with Dynamic Electronic Focusing

    Accurate identification of the needle tip is a key challenge in ultrasound-guided percutaneous interventions in regional anaesthesia, foetal surgery and cardiovascular medicine. In this study, we developed an ultrasonic needle tracking system in which the measured needle tip location was used to set the electronic focus of the external ultrasound imaging probe. Needle tip tracking was enabled by a fibre-optic ultrasound sensor integrated into a needle stylet; the A-lines recorded by the sensor were processed to generate tracking images of the needle tip, from which the tip position was estimated. The dependence of the tracking image on the electronic focal depth of the external imaging probe was studied in a water bath and with needle insertions into a clinical training phantom. The variability in the estimated tip position was measured with the tip at fixed depths in the imaging plane across a range of 0.5 to 7.5 cm. With a fixed electronic focus, this variability was found to increase with the distance between the needle tip and the focal depth. With dynamic focusing, the maximum variability of the tracked position was below 0.31 mm, compared with 3.97 mm for a fixed focus.
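In its simplest form, the dynamic-focusing idea above — setting the imaging probe's electronic focus from the tracked tip depth — reduces to snapping the estimated tip depth to the nearest available transmit focal zone. The zone list in the sketch is hypothetical, not taken from the paper:

```python
def select_focus(tip_depth_cm, zones_cm):
    """Dynamic electronic focusing: choose the transmit focal zone
    closest to the tracked needle-tip depth (all depths in cm)."""
    return min(zones_cm, key=lambda z: abs(z - tip_depth_cm))
```

A real scanner would update this selection on every tracking frame, so the focal zone follows the advancing needle tip.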

    Finite-element-method (FEM) model generation of time-resolved 3D echocardiographic geometry data for mitral-valve volumetry

    INTRODUCTION: Mitral valve (MV) 3D structural data can be easily obtained using standard transesophageal echocardiography (TEE) devices, but quantitative pre- and intraoperative volume analysis of the MV is presently not feasible in the cardiac operating room (OR). Finite element method (FEM) modelling is necessary to carry out precise, individual volume analysis, and will in the future form the basis for simulation of cardiac interventions. METHOD: In this retrospective pilot study we describe a method to transfer MV geometric data to the 3D Slicer 2 software, an open-source medical visualization and analysis package. A newly developed program (ROIExtract) allowed selection of a region-of-interest (ROI) from the TEE data and its transformation for use in 3D Slicer, where semi-automatic valve border detection and volume rendering of the clinical 3D echocardiographic data were performed and FEM models for quantitative volumetric studies were generated. A Philips/HP Sonos 5500 ultrasound device stored the volume data as time-resolved 4D data sets; data sets from three subjects were used. Since 3D Slicer does not process time-resolved data sets, we employed a standard movie maker to animate the individual time-based models and visualizations. Calculation time and model size were minimized, and pressure data were readily available. RESULTS: ROI selection permitted the visualization and calculations required to create a sequence of volume-rendered models of the MV, allowing time-based visualization of regional deformation. Quantitation of tissue volume, especially important in myxomatous degeneration, can be carried out. Rendered volumes are shown in 3D as well as in time-resolved 4D animations. CONCLUSION: The visualization of the segmented MV may significantly enhance clinical interpretation, and the method provides an infrastructure for image-guided assessment of clinical findings and surgical planning. For complete pre- and intraoperative 3D MV FEM analysis, three input elements are necessary: 1. time-gated, reality-based structural information; 2. continuous MV pressure; and 3. instantaneous tissue elastance. The present process makes the first of these elements available. Volume defect analysis is essential to fully understand functional and geometrical dysfunction of the valve (though not limited to it). We speculate that calculation of instantaneous elastance may be possible using instantaneous pressure values and tissue deformation data derived from the animated FEM.
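A basic building block of the tissue-volume quantitation described above is computing volume from a binary segmentation (e.g. a label map exported from 3D Slicer): voxel count times voxel volume. A minimal sketch, with the voxel spacing as a hypothetical parameter:

```python
import numpy as np

def segmented_volume_ml(mask, spacing_mm):
    """Volume of a binary 3D segmentation in millilitres.
    `mask` is a 3D array (nonzero = tissue); `spacing_mm` is the
    (dx, dy, dz) voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))     # volume of one voxel
    n_voxels = int(mask.astype(bool).sum())    # number of tissue voxels
    return n_voxels * voxel_mm3 / 1000.0       # 1 ml = 1000 mm^3
```

Applied per frame of the time-resolved data, this yields the volume-over-time curves needed for the regional-deformation analysis.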

    An interactive 3D medical visualization system based on a light field display

    This paper presents a prototype medical data visualization system exploiting a light field display and custom direct volume rendering techniques to enhance understanding of massive volumetric data, such as CT, MRI, and PET scans. The system can be integrated with standard medical image archives and extends the capabilities of current radiology workstations by supporting real-time rendering of volumes of potentially unlimited size on light field displays generating dynamic observer-independent light fields. The system allows multiple untracked naked-eye users in a sufficiently large interaction area to coherently perceive rendered volumes as real objects, with stereo and motion parallax cues. In this way, an effective collaborative analysis of volumetric data can be achieved. Evaluation tests demonstrate the usefulness of the generated depth cues and the improved performance in understanding complex spatial structures with respect to standard techniques.

    Towards Real-time Remote Processing of Laparoscopic Video

    Laparoscopic surgery is a minimally invasive technique in which surgeons insert a small video camera into the patient's body to visualize internal organs and use small tools to perform procedures. However, the benefit of small incisions comes with the disadvantage of limited visualization of subsurface tissues. Image-guided surgery (IGS) uses pre-operative and intra-operative images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic system is the daVinci-si robotic surgical vision system. Its video streams generate approximately 360 megabytes of data per second, demonstrating a trend toward increased data sizes in medicine, driven primarily by higher-resolution video cameras and imaging equipment. Real-time processing of this large stream of data on a bedside PC (single- or dual-node setup) may be challenging, and a high-performance computing (HPC) environment is not typically available at the point of care. To process this data on remote HPC clusters at the typical rate of 30 frames per second (fps), each 11.9 MB (1080p) video frame must be processed by a server and returned within the time the frame is displayed, i.e. 1/30th of a second. The ability to acquire, process, and visualize data in real time is essential for the performance of complex tasks as well as for minimizing risk to the patient. We implemented and compared the performance of compression, segmentation and registration algorithms on Clemson's Palmetto supercomputer, using dual Nvidia graphics processing units (GPUs) per node and the compute unified device architecture (CUDA) programming model. We developed three separate applications that run simultaneously: video acquisition, image processing, and video display. The image processing application allows several algorithms to run simultaneously on different cluster nodes, transferring images through the message passing interface (MPI). Our segmentation and registration algorithms achieved acceleration factors of around 2× and 8×, respectively. To achieve a higher frame rate, we also resized images to reduce the overall processing time. As a result, using a high-speed network to access GPU-equipped computing clusters and running these algorithms in parallel can improve surgical procedures by providing real-time processing of medical images and laparoscopic data.
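The timing constraint above is simple arithmetic: each frame must complete its network round trip and processing within one display interval. A small helper makes the budget explicit (the byte depth per pixel is an assumption; the abstract's 11.9 MB figure for 1080p implies roughly 5.7 bytes per pixel, consistent with a multi-channel stereo stream):

```python
def frame_budget(width, height, bytes_per_pixel, fps):
    """Return (frame size in MB, per-frame deadline in ms)
    for an uncompressed video stream."""
    frame_mb = width * height * bytes_per_pixel / 1e6
    deadline_ms = 1000.0 / fps
    return frame_mb, deadline_ms
```

At 30 fps the deadline is about 33.3 ms per frame regardless of frame size, which is why the authors resort to compression, image resizing and parallel processing across cluster nodes.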

    GPU-based beamformer: Fast realization of plane wave compounding and synthetic aperture imaging

    Although they show potential to improve ultrasound image quality, plane wave (PW) compounding and synthetic aperture (SA) imaging are computationally demanding and are known to be challenging to implement in real-time. In this work, we have developed a novel beamformer architecture with the real-time parallel processing capacity needed to enable fast realization of PW compounding and SA imaging. The beamformer hardware comprises an array of graphics processing units (GPUs) that are hosted within the same computer workstation. Their parallel computational resources are controlled by a pixel-based software processor that includes the operations of analytic signal conversion, delay-and-sum beamforming, and recursive compounding as required to generate images from the channel-domain data samples acquired using PW compounding and SA imaging principles. When using two GTX-480 GPUs for beamforming and one GTX-470 GPU for recursive compounding, the beamformer can compute compounded 512 × 255 pixel PW and SA images at throughputs of over 4700 fps and 3000 fps, respectively, for imaging depths of 5 cm and 15 cm (32 receive channels, 40 MHz sampling rate). Its processing capacity can be further increased if additional GPUs or more advanced models of GPU are used. © 2011 IEEE.
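The pixel-based delay-and-sum operation at the core of the beamformer described above can be illustrated on the CPU. The sketch below computes one image pixel from plane-wave channel data at normal incidence; it is a simplification (nearest-sample lookup, no apodization), whereas the paper's GPU processor also performs analytic-signal conversion and recursive compounding:

```python
import numpy as np

def das_beamform(rf, elem_x, fs, c, px, pz):
    """Delay-and-sum for one image pixel (px, pz), in metres.
    `rf` is a (channels, samples) array of channel-domain data,
    `elem_x` the lateral element positions, `fs` the sampling rate,
    `c` the speed of sound. Plane-wave transmit at normal incidence
    is assumed, so the transmit delay to the pixel is simply pz / c."""
    n_ch, n_s = rf.shape
    out = 0.0
    for ch in range(n_ch):
        rx = np.sqrt((px - elem_x[ch]) ** 2 + pz ** 2) / c  # receive path
        s = int(round((pz / c + rx) * fs))                  # round-trip sample
        if 0 <= s < n_s:
            out += rf[ch, s]  # coherent sum across channels
    return out
```

The GPU implementation evaluates this independently for every pixel and frame, which is what makes the problem embarrassingly parallel and well suited to CUDA.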