
    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning able to deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real-time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can easily be extended to other probe-based imaging modalities.
    Comment: 7 pages, 5 figures, ICRA 202
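
A minimal sketch of the projective trajectory update described above, assuming OpenCV-style feature matching between a reference frame and the current frame (function names, feature choice, and parameters are illustrative, not the authors' implementation):

```python
# Track surface features between a reference frame and the current frame,
# estimate a projective transform (homography), and re-project the manually
# defined scan path onto the current frame.
import cv2
import numpy as np

def update_scan_trajectory(ref_gray, cur_gray, ref_traj_px):
    """Re-project a scan trajectory (Nx2 pixel array) from the reference
    frame onto the current frame using feature-based tracking."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(cur_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # robust to outliers
    traj = ref_traj_px.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(traj, H).reshape(-1, 2)
```

In the full system the re-projected 2D path would additionally be lifted back onto the recovered 3D surface to command the robotic arm; the sketch only shows the image-space update.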

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision and on improving access to minimally invasive surgery. This paper provides a systematic review of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, with pose estimation, and with depth perception in two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. Results on robot end-effector collision accuracy and occlusion reduction remain promising within the scope of our research, supporting the case for the surgical clearance of ever-expanding AR technology in the future
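
As one concrete example of the tool-to-organ collision detection discussed above, proximity can be flagged by querying distances from samples along the instrument shaft to an organ surface point cloud. The sketch below is a generic illustration under that assumption (names and thresholds are hypothetical, not tied to any reviewed system):

```python
# Flag tool-to-organ proximity by querying a KD-tree built over the organ
# surface points with samples taken along the instrument axis.
import numpy as np
from scipy.spatial import cKDTree

def check_tool_organ_clearance(organ_pts, tool_tip, tool_dir, length=0.08,
                               clearance=0.005, n_samples=20):
    """Return True if any sample along the tool axis is closer to the
    organ surface than `clearance` (metres). organ_pts is an Mx3 array;
    tool_dir is a unit vector pointing from the tip toward the base."""
    tree = cKDTree(organ_pts)
    ts = np.linspace(0.0, length, n_samples)      # samples along the shaft
    shaft = tool_tip + ts[:, None] * tool_dir
    dists, _ = tree.query(shaft)
    return bool(np.any(dists < clearance))
```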

    Real-time Prostate Motion Tracking For Robot-assisted Laparoscopic Radical Prostatectomy

    Radical prostatectomy (RP) is the gold standard for treatment of localized prostate cancer (PCa). Recently, the emergence of minimally invasive techniques such as Laparoscopic Radical Prostatectomy (LRP) and Robot-Assisted Laparoscopic Radical Prostatectomy (RARP) has improved prostatectomy outcomes. However, it remains difficult for surgeons to make informed decisions regarding resection margins and nerve sparing, since the location of the tumour within the organ is not usually visible in the laparoscopic view. While MRI enables visualization of the salient structures and cancer foci, its efficacy in LRP is reduced unless it is fused into the stereoscopic view such that homologous structures overlap. Registration of the MRI image and a peri-operative ultrasound image, either via manual visual alignment or via fully automated registration, can be exploited to bring the pre-operative information into alignment with the patient coordinate system at the beginning of the procedure. Prostate motion then needs to be compensated in real-time to keep the stereoscopic view synchronized with the pre-operative MRI during the prostatectomy. In this thesis, two tracking methods are proposed to assess rigid prostate rotation and translation during prostatectomy. The first method presents a 2D-to-3D point-to-line registration algorithm to measure prostate motion with respect to an initial 3D TRUS image. The second method investigates a point-based stereoscopic tracking technique to compensate for rigid prostate motion so that the same motion can be applied to the pre-operative images
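
Point-based tracking of this kind ultimately reduces to estimating the rigid motion between corresponding point sets in successive frames. The textbook least-squares solution (Arun/Kabsch via SVD) is sketched below as a building block; it is not the thesis's exact formulation:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid motion (R, t) mapping Nx3 points P onto Q
    (Arun et al. / Kabsch, via SVD)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

The 2D-to-3D point-to-line variant instead minimizes distances from 3D points to viewing lines and is typically solved iteratively around a closed-form step like this one.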

    Image guided robotic assistance for the diagnosis and treatment of tumor

    The aim of this thesis is to demonstrate the feasibility and potential of introducing robotics and image guidance into the overall oncologic workflow, from diagnosis to treatment. The popularity of robotics in the operating room has grown in recent years. Currently the most popular system is the da Vinci telemanipulator (Intuitive Surgical); it is based on master-slave control for minimally invasive surgery and is used in several surgical fields such as urology, general surgery, gynecology, and cardiothoracic surgery. An accurate study of this system has been conducted from a technological point of view, addressing its drawbacks and advantages. The da Vinci system creates an immersive operating environment for the surgeon by providing both high-quality stereo visualization and a human-machine interface that directly connects the surgeon's hands to the motion of the surgical tool tips inside the patient's body. It has undoubted advantages for the surgeon's work and the patient's health, at least for some interventions, while its very high cost leaves many doubts about its cost-benefit ratio. In the robotic surgery field, many researchers are working on optimizing and miniaturizing robot mechanics, while others are trying to obtain smart functionalities so that robotic systems, "knowing" the patient's anatomy from radiological images, can assist the surgeon in an active way. Regarding the second point, image-guided systems can be useful for planning and controlling the motion of medical robots and for providing the surgeon with pre-operative and intra-operative images with augmented reality visualization, to enhance his/her perceptual capacities and, as a consequence, improve the quality of treatments. To demonstrate this thesis, several prototypes have been designed, implemented, and tested. The development of image-guided medical devices comprising augmented reality, virtual navigation, and robotic surgical features requires addressing several problems. The first is the choice of the robotic platform and of the image source to employ. An industrial anthropomorphic arm has been used as the testing platform. The idea of integrating industrial robot components into the clinical workflow is supported by the da Vinci technical analysis. The algorithms and methods developed, regarding robot calibration in particular, are based on established theory and on easy integration into the clinical scenario, and can be adapted to any anthropomorphic arm. In this way this work can be integrated with lightweight robots, for industrial or clinical use, able to work in close contact with humans, which will become numerous in the near future. Regarding the medical image source, it was decided to work with ultrasound imaging. Two-dimensional ultrasound imaging is widely used in clinical practice because it is not dangerous for the patient, inexpensive, and compact, and it is a highly flexible modality that allows users to study many anatomic structures. It is routinely used for diagnosis and as guidance in percutaneous treatments. However, 2D ultrasound imaging presents some disadvantages that demand great skill from the user: the clinician must mentally integrate many images to reconstruct a complete idea of the anatomy in 3D. Furthermore, freehand control of the probe makes it difficult to identify anatomic positions and orientations and to reposition the probe to reach a particular location.
To overcome these problems, an image-guided system has been developed that fuses real-time 2D US images with 3D CT or MRI images previously acquired from the patient, to enhance clinician orientation and probe guidance. The implemented algorithms for robot calibration and US image guidance have been used to realize two applications responding to specific clinical needs: the first to speed up the execution of routine and very frequent procedures such as percutaneous biopsy or ablation; the second to improve a new, completely non-invasive type of treatment for solid tumors, HIFU (High Intensity Focused Ultrasound). An ultrasound-guided robotic system has been developed to assist the clinician in executing complicated biopsies, or percutaneous ablations, particularly in deep abdominal organs. The integrated system provides the clinician with two types of assistance: a mixed-reality visualization allows accurate and easy planning of the needle trajectory and verification of target reaching, while the robot arm, equipped with a six-degree-of-freedom force sensor, allows precise positioning of the needle holder and lets the clinician adjust the planned trajectory, by means of a cooperative control, to overcome needle deflection and target motion. The second application is an augmented reality navigation system for HIFU treatment. HIFU represents a completely non-invasive method for the treatment of solid tumors, hemostasis, and other vascular applications in human tissues. The technology for HIFU treatments is still evolving, and the systems available on the market have limitations and drawbacks. A disadvantage emerging from our experience with the machine available in our hospital (JC200 therapeutic system, Haifu (HIFU) Tech Co., Ltd., Chongqing), which is similar to other analogous machines, is the long time required to perform the procedure, due to the difficulty of finding the target using the remote motion of an ultrasound probe under the patient. This problem has been addressed by developing an augmented reality navigation system that enhances US guidance during HIFU treatments, allowing easy target localization. The system was implemented using an additional freehand ultrasound probe coupled with a localizer and CT-fused imaging; it offers a simple and economical solution for easy HIFU target localization. This thesis demonstrates the utility and usability of robots for the diagnosis and treatment of tumors; in particular, the combination of automatic positioning and cooperative control allows the surgeon and the robot to work in synergy. Furthermore, the work demonstrates the feasibility and potential of a mixed-reality navigation system to facilitate target localization and consequently reduce session times, increase the number of possible diagnoses/treatments, and decrease the risk of potential errors. The proposed solutions for integrating robotics and image guidance into the overall oncologic workflow take into account currently available technologies, traditional clinical procedures, and cost minimization
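
The cooperative control mentioned above, in which the clinician manually adjusts the planned trajectory while the robot holds the needle guide, is commonly realized as an admittance law mapping the measured wrench to a commanded velocity. A minimal sketch under that assumption (gains, deadband, and interface are illustrative, not the thesis's controller):

```python
import numpy as np

# Illustrative admittance gains: translational (m/s per N) and
# rotational (rad/s per N*m); a deadband rejects force-sensor noise.
K_F, K_T, DEADBAND = 0.002, 0.05, 0.5

def admittance_step(wrench):
    """Map a 6-DOF force/torque reading [Fx,Fy,Fz,Tx,Ty,Tz] to a
    Cartesian twist command [vx,vy,vz,wx,wy,wz]."""
    w = np.asarray(wrench, dtype=float)
    w[np.abs(w) < DEADBAND] = 0.0    # ignore readings below the deadband
    v = K_F * w[:3]                  # translational velocity from force
    omega = K_T * w[3:]              # angular velocity from torque
    return np.concatenate([v, omega])
```

The robot then yields in the direction the clinician pushes, while the unforced degrees of freedom hold the planned pose.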

    Augmented Reality Ultrasound Guidance in Anesthesiology

    Real-time ultrasound has become a mainstay in many image-guided interventions and increasingly popular in several percutaneous procedures in anesthesiology. One of the main constraints of ultrasound-guided needle interventions is identifying and distinguishing the needle tip from the needle shaft in the image. Augmented reality (AR) environments have been employed to address challenges surrounding surgical tool visualization, navigation, and positioning in many image-guided interventions. The motivation behind this work was to explore the feasibility and utility of such visualization techniques in anesthesiology, addressing some of the specific limitations of ultrasound-guided needle interventions. This thesis brings together the goals, guidelines, and best development practices of functional AR ultrasound image guidance (AR-UIG) systems, examines the general structure of such systems suitable for applications in anesthesiology, and provides a series of recommendations for their development. The main components of such systems, including ultrasound calibration and system interface design, as well as applications of AR-UIG systems for quantitative skill assessment, were also examined in this thesis. The effects of ultrasound image reconstruction techniques, as well as phantom material and geometry, on ultrasound calibration were investigated. Ultrasound calibration error was reduced by 10% with synthetic transmit aperture imaging compared with B-mode ultrasound. Phantom properties were shown to have a significant effect on calibration error, and this effect varies with the ultrasound beamforming technique. This finding has the potential to alter how calibration phantoms are designed, taking the ultrasound imaging technique into account. Performance of an AR-UIG guidance system tailored to central line insertions was evaluated in novice and expert user studies. While the system outperformed ultrasound-only guidance with novice users, it did not significantly affect the performance of experienced operators. Although the extensive experience of the users with ultrasound may have affected the results, certain aspects of the AR-UIG system contributed to the lackluster outcomes, which were analyzed via a thorough critique of the design decisions. The application of an AR-UIG system in quantitative skill assessment was investigated, and the first quantitative analysis of needle-tip localization error in ultrasound in a simulated central line procedure, performed by experienced operators, is presented. Most participants did not closely follow the needle tip in ultrasound, resulting in 42% unsuccessful needle placements and a 33% complication rate. Compared to successful trials, unsuccessful procedures featured a significantly greater (p=0.04) needle-tip to image-plane distance. Professional experience with ultrasound does not necessarily lead to expert-level performance. Along with deliberate practice, quantitative skill assessment may reinforce clinical best practices in ultrasound-guided needle insertions. Based on the development guidelines, an AR-UIG system was developed to address the challenges in ultrasound-guided epidural injections. For improved needle positioning, this system integrated the A-mode ultrasound signal obtained from a transducer housed at the tip of the needle. Improved needle navigation was achieved via enhanced visualization of the needle in an AR environment, in which B-mode and A-mode ultrasound data were incorporated.
The technical feasibility of the AR-UIG system was evaluated in a preliminary user study. The results suggested that the AR-UIG system has the potential to outperform ultrasound-only guidance
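
The needle-tip to image-plane distance used above as a skill metric has a simple geometric form once ultrasound calibration provides the image plane's pose in tracker coordinates. A hedged sketch (frame conventions and names are assumptions, not the thesis's code):

```python
import numpy as np

def tip_to_plane_distance(T_tracker_image, tip_tracker):
    """Distance (in tracker units) from a tracked needle tip to the
    ultrasound image plane. T_tracker_image is a 4x4 pose whose local
    x-y plane is assumed to contain the B-mode image."""
    origin = T_tracker_image[:3, 3]      # a point on the image plane
    normal = T_tracker_image[:3, 2]      # plane normal = local z axis
    normal = normal / np.linalg.norm(normal)
    return abs(np.dot(tip_tracker - origin, normal))
```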

    Navigation system based in motion tracking sensor for percutaneous renal access

    Doctoral thesis in Biomedical Engineering. Minimally invasive kidney interventions are performed daily to diagnose and treat several renal diseases. Percutaneous renal access (PRA) is an essential but challenging stage for most of these procedures, since its outcome is directly linked to the physician's ability to precisely visualize and reach the anatomical target. Nowadays, PRA is always guided with medical imaging assistance, most frequently X-ray based imaging (e.g. fluoroscopy). Radiation in the surgical theater thus represents a major risk to the medical team, and its exclusion from PRA would directly diminish the dose exposure of both patients and physicians. To solve these problems, this thesis aims to develop a new hardware/software framework to intuitively and safely guide the surgeon during PRA planning and puncturing. In terms of surgical planning, a set of methodologies was developed to increase the certainty of reaching a specific target inside the kidney. The abdominal structures most relevant to PRA were automatically clustered into different 3D volumes. For that, primitive volumes were merged as a local optimization problem using the minimum description length principle and statistical image properties. A multi-volume ray-casting method was then used to highlight each segmented volume. Results show that it is possible to detect all abdominal structures surrounding the kidney and to correctly estimate a virtual trajectory. Concerning the percutaneous puncturing stage, both electromagnetic and optical solutions were developed and tested in multiple in vitro, in vivo, and ex vivo trials. The optical tracking solution aids in establishing the desired puncture site and choosing the best virtual puncture trajectory. However, this system requires a line of sight to different optical markers placed at the needle base, limiting the accuracy when tracking inside the human body: results show that the needle tip can deflect from its initial straight-line trajectory with an error higher than 3 mm, and a complex registration procedure and initial setup are needed. A real-time electromagnetic tracking solution was therefore developed. Here, a catheter was inserted trans-urethrally towards the renal target. This catheter carries a position and orientation electromagnetic sensor on its tip that functions as a real-time target locator. A needle integrating a similar sensor is then used. From the data provided by both sensors, a virtual puncture trajectory is computed and displayed in 3D visualization software. In vivo tests showed median renal and ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds). These results represent a puncture-time improvement of between 75% and 85% compared with state-of-the-art methods. 3D sound and vibrotactile feedback were also developed to provide additional information about needle orientation. With this kind of feedback, it was verified that the surgeon tends to follow the virtual puncture trajectory with fewer deviations from the ideal path, and is able to anticipate movement even without looking at a monitor. Best results show that 3D sound sources were correctly identified 79.2 ± 8.1% of the time with an average angulation error of 10.4°, and vibration sources were accurately identified 91.1 ± 3.6% of the time with an average angulation error of 8.0°.
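
The virtual puncture trajectory follows directly from the two electromagnetic sensor readings: the line from the needle tip to the catheter-tip target defines the desired path, and the angle between the needle axis and that line is the deviation that the visual, 3D-sound, and vibrotactile feedback can encode. A minimal sketch (names are illustrative, not the thesis's code):

```python
import numpy as np

def puncture_guidance(needle_tip, needle_axis, target):
    """Return (distance to target, angular deviation in degrees) between
    the needle axis and the straight tip-to-target trajectory. Inputs are
    3-vectors from the two electromagnetic sensors."""
    to_target = target - needle_tip
    dist = np.linalg.norm(to_target)
    u = to_target / dist                         # desired direction
    a = needle_axis / np.linalg.norm(needle_axis)
    ang = np.degrees(np.arccos(np.clip(np.dot(a, u), -1.0, 1.0)))
    return dist, ang
```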
In addition to the EMT framework, three circular ultrasound transducers with a needle working channel were built. Different fabrication setups were explored in terms of piezoelectric materials, transducer construction, single- vs. multi-array configurations, and backing and matching material design. The A-scan signals retrieved from each transducer were filtered and processed to automatically detect reflected echoes and to alert the surgeon when undesirable anatomical structures lie along the puncture path. The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area oscillates around the ceramic's radius and that echo signals could be automatically detected in phantoms longer than 80 mm. It is therefore expected that introducing the proposed system into the PRA procedure will guide the surgeon along the optimal path towards the precise kidney target, increasing the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the developed framework has the potential to make PRA free of radiation for both patient and surgeon and to broaden the use of PRA to less specialized surgeons.
The present work was only possible thanks to support by the Portuguese Science and Technology Foundation through PhD grant SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC) and by Fundo Europeu de Desenvolvimento Regional (FEDER), Programa COMPETE - Programa Operacional Factores de Competitividade (POFC) do QREN
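
The A-scan processing described for the needle-channel transducers (filtering, then automatic detection of reflected echoes) maps onto a few standard signal-processing steps. A minimal illustration with SciPy, in which the band edges, threshold, and minimum echo spacing are placeholders rather than the thesis's values:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, find_peaks

def detect_echoes(ascan, fs, band=(2e6, 8e6), rel_thresh=0.3):
    """Band-pass an A-scan (1-D array sampled at fs Hz), take the
    analytic-signal envelope, and return sample indices of echoes
    above a relative amplitude threshold."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)],
                  btype="bandpass")
    filtered = filtfilt(b, a, ascan)           # zero-phase filtering
    env = np.abs(hilbert(filtered))            # envelope detection
    peaks, _ = find_peaks(env, height=rel_thresh * env.max(),
                          distance=max(1, int(1e-6 * fs)))  # >= 1 us apart
    return peaks
```

A detected echo at an unexpected depth along the planned path is what would trigger the alert to the surgeon.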

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have increasingly become among the most significant challenges facing humanity. This work sits at the interdisciplinary frontier of information science, computer technology, electronic engineering, and biomedical engineering, studying modern engineering methods for the early diagnosis, treatment, and rehabilitation of diseases such as cancer. It reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology, and clinical applications: starting from the concept of minimally invasive surgical navigation, it introduces pre-operative and intra-operative multimodal medical imaging for medical big data; describes the core pipeline of advanced minimally invasive surgical navigation, including computational anatomical models, intra-operative real-time navigation schemes, 3D visualization methods, and interactive software techniques; and summarizes the clinical applications of the various minimally invasive surgical approaches. It also discusses the strengths and weaknesses of surgical navigation techniques in clinical use worldwide and analyzes the latest methods in the field, identifying a trend in minimally invasive surgery towards digitalization, personalization, precision, integrated diagnosis and treatment, robotization, and high intelligence.
Abstract: Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on the numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.
X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc
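
Common to most navigation systems surveyed here is a chain of rigid transforms that expresses a tracked endoscope or tool in the coordinate frame of the pre-operative image. A hedged sketch of that composition (frame names are illustrative):

```python
import numpy as np

def tool_in_ct(T_ct_ref, T_tracker_ref, T_tracker_tool):
    """Express a tracked tool pose in CT coordinates.
    T_ct_ref:       patient reference body registered to the CT volume
    T_tracker_ref:  reference body pose reported by the tracker
    T_tracker_tool: tool pose reported by the tracker
    All arguments are 4x4 homogeneous transforms."""
    return T_ct_ref @ np.linalg.inv(T_tracker_ref) @ T_tracker_tool
```

Attaching the reference body to the patient makes the chain insensitive to patient or tracker motion, which is why most commercial systems use one.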

    Research on real-time physics-based deformation for haptic-enabled medical simulation

    This study developed a versatile and effective visuo-haptic surgical engine to handle a variety of surgical manipulations in real time. Soft-tissue models are based on biomechanical experiments and continuum mechanics for greater accuracy. Such models will increase the realism of future training systems and of VR/AR/MR implementations for the operating room
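
Real-time deformable models of this kind are often built on explicit integration of a mass-spring (or lumped continuum) mesh, fast enough for a haptic loop running near 1 kHz. A minimal sketch of one damped, semi-implicit Euler step (parameters are illustrative, not the engine described above):

```python
import numpy as np

def mass_spring_step(x, v, springs, rest_len, k, c, m, dt, ext_f):
    """One semi-implicit Euler step of a damped mass-spring mesh.
    x, v: Nx3 positions/velocities; springs: Ex2 vertex index pairs;
    rest_len: E rest lengths; k, c, m: stiffness, damping, nodal mass;
    ext_f: Nx3 external forces (e.g. from the haptic device)."""
    f = ext_f - c * v                            # viscous damping
    d = x[springs[:, 1]] - x[springs[:, 0]]      # spring vectors
    L = np.linalg.norm(d, axis=1, keepdims=True)
    fs = k * (L - rest_len[:, None]) * d / np.maximum(L, 1e-9)
    np.add.at(f, springs[:, 0], fs)              # accumulate on endpoints
    np.add.at(f, springs[:, 1], -fs)
    v = v + dt * f / m                           # update velocity first,
    return x + dt * v, v                         # then position (symplectic)
```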