    Crepuscular Rays for Tumor Accessibility Planning


    Medical image computing and computer-aided medical interventions applied to soft tissues. Work in progress in urology

    Until recently, Computer-Aided Medical Interventions (CAMI) and medical robotics have focused on rigid, non-deformable anatomical structures. Nowadays, special attention is paid to soft tissues, which raise complex issues due to their mobility and deformation. Minimally invasive digestive surgery was probably one of the first fields where soft tissues were handled, through the development of simulators, tracking of anatomical structures, and specific assistance robots. However, other clinical domains, for instance urology, are also concerned. Indeed, laparoscopic surgery, new tumour-destruction techniques (e.g. HIFU, radiofrequency, or cryoablation), increasingly early detection of cancer, and the use of interventional and diagnostic imaging modalities have recently opened new challenges for urologists and for the scientists involved in CAMI. Over the last five years, this has resulted in a very significant increase in research on, and development of, computer-aided urology systems. In this paper, we describe the main problems related to computer-aided diagnosis and therapy of soft tissues and survey the different types of assistance offered to the urologist: robotization, image fusion, and surgical navigation. Both research projects and operational industrial systems are discussed.

    Constrained Motion Planning System for MRI-Guided, Needle-Based, Robotic Interventions

    In needle-based surgical interventions, accurate alignment and insertion of the tool are paramount for providing proper treatment at a target site while minimizing damage to healthy tissue. While manually aligned interventions are well established, robotic platforms promise to reduce procedure time, increase precision, and improve patient comfort and survival rates. Conducting interventions in an MRI scanner can provide real-time, closed-loop feedback for a robotic platform, improving its accuracy, yet the tight environment potentially impairs motion, and perceiving this limitation when planning a procedure can be challenging. This project developed a surgical workflow and software system for evaluating the workspace and planning the motions of a robotic platform within the confines of an MRI scanner. 3D Slicer, a medical imaging visualization and processing platform, provided a familiar and intuitive interface for operators to quickly plan procedures with the robotic platform over OpenIGTLink. Robotics tools such as ROS and MoveIt! were used to analyze the workspace of the robot within the patient and to formulate the motion-planning solution for positioning the robot during surgical procedures. For this study, a 7-DOF robot arm designed for ultrasonic ablation of brain tumors was the target platform. The realized system successfully yielded prototype capabilities on the neurobot for workspace analysis and motion planning, integrated systems using OpenIGTLink, provided an opportunity to evaluate current software packages, and informed future work towards production-grade medical software for MRI-guided, needle-based robotic interventions.
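The workspace analysis mentioned above can be illustrated with a deliberately simplified sketch: a planar two-link arm (standing in for the 7-DOF neurobot) whose joint space is sampled by brute force against a cylindrical bore constraint. The link lengths, bore radius, and sampling resolution are all invented for illustration and are not taken from the paper.

```python
import math

# Hypothetical simplification: a planar 2-link arm standing in for the
# 7-DOF neurobot, checked against a cylindrical MRI-bore constraint.
L1, L2 = 0.30, 0.25          # link lengths (m), illustrative values
BORE_RADIUS = 0.35           # usable radius inside the scanner bore (m)

def forward_kinematics(q1, q2):
    """Tool-tip position for joint angles q1, q2 (radians)."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def inside_bore(x, y):
    """True if the point stays within the cylindrical bore cross-section."""
    return math.hypot(x, y) <= BORE_RADIUS

def sample_workspace(steps=60):
    """Brute-force sample joint space; keep poses that respect the bore."""
    reachable = []
    for i in range(steps):
        for j in range(steps):
            q1 = -math.pi + 2 * math.pi * i / steps
            q2 = -math.pi + 2 * math.pi * j / steps
            x, y = forward_kinematics(q1, q2)
            if inside_bore(x, y):
                reachable.append((x, y))
    return reachable

points = sample_workspace()
print(f"{len(points)} of 3600 sampled poses stay inside the bore")
```

A real system would run the equivalent check over the full 7-DOF kinematic chain (e.g. via MoveIt!'s planning scene) rather than this 2D toy.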

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Prevalence of haptic feedback in robot-mediated surgery : a systematic review of literature

    © 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4. With the successful uptake and inclusion of robotic systems in minimally invasive surgery, and with the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the implementation and integration of haptic feedback technology into RS, which would permit the operating surgeon at the console to receive haptic information on the type of tissue being operated on. The main advantage is that it allows the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive force or tension applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improvement in the application of haptic feedback technology for the operating surgeon at the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, there are no solutions or products available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery, with the aim of further improving patient care and robotic surgical technology. Peer reviewed.

    Navigation system based in motion tracking sensor for percutaneous renal access

    Doctoral thesis in Biomedical Engineering. Minimally invasive kidney interventions are performed daily to diagnose and treat several renal diseases. Percutaneous renal access (PRA) is an essential but challenging stage in most of these procedures, since its outcome is directly linked to the physician's ability to precisely visualize and reach the anatomical target. Nowadays, PRA is always guided by medical imaging, most frequently X-ray based imaging (e.g. fluoroscopy). Radiation in the operating theater thus represents a major risk to the medical team, and excluding it from PRA would directly reduce the dose exposure of both patients and physicians. To address these problems, this thesis develops a new hardware/software framework to intuitively and safely guide the surgeon during PRA planning and puncturing. For surgical planning, a set of methodologies was developed to increase the certainty of reaching a specific target inside the kidney. The abdominal structures most relevant to PRA were automatically clustered into different 3D volumes. To that end, primitive volumes were merged as a local optimization problem using the minimum description length principle and image statistical properties. A multi-volume ray-casting method was then used to highlight each segmented volume. Results show that it is possible to detect all abdominal structures surrounding the kidney and to correctly estimate a virtual trajectory. For the percutaneous puncturing stage, both electromagnetic and optical tracking solutions were developed and tested in multiple in vitro, in vivo and ex vivo trials. The optical tracking solution aids in establishing the desired puncture site and choosing the best virtual puncture trajectory. However, this system requires a line of sight to optical markers placed at the needle base, limiting its accuracy when tracking inside the human body.
    Results show that the needle tip can deflect from its initial straight-line trajectory with an error higher than 3 mm, and a complex registration procedure and initial setup are needed. A real-time electromagnetic tracking solution was therefore developed. A catheter was inserted trans-urethrally towards the renal target; this catheter has a position and orientation electromagnetic sensor at its tip that functions as a real-time target locator. A needle integrating a similar sensor is then used. From the data provided by both sensors, a virtual puncture trajectory is computed and displayed in 3D visualization software. In vivo tests showed median renal and ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds). These results represent a puncture-time improvement of between 75% and 85% compared with state-of-the-art methods. 3D sound and vibrotactile feedback were also developed to provide additional information about the needle orientation. With this kind of feedback, the surgeon tends to follow the virtual puncture trajectory with fewer deviations from the ideal path, and is able to anticipate movements even without looking at a monitor. In the best results, 3D sound sources were correctly identified 79.2 ± 8.1% of the time with an average angulation error of 10.4°, and vibration sources were correctly identified 91.1 ± 3.6% of the time with an average angulation error of 8.0°. In addition to the electromagnetic tracking framework, three circular ultrasound transducers were built with a needle working channel, exploring different fabrication setups in terms of piezoelectric materials, transducer construction, single- vs. multi-array configurations, and backing and matching material design.
    The A-scan signals retrieved from each transducer were filtered and processed to automatically detect reflected echoes and to alert the surgeon when undesirable anatomical structures lie along the puncture path. The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area oscillates around the ceramic radius, and echo signals could be automatically detected in phantoms longer than 80 mm. It is therefore expected that introducing the proposed system into the PRA procedure will guide the surgeon along the optimal path towards the precise kidney target, increasing the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the developed framework has the potential to make PRA free of radiation for both patient and surgeon and to broaden the use of PRA to less specialized surgeons. This work was supported by the Portuguese Science and Technology Foundation through PhD grant SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC), and by the European Regional Development Fund (FEDER) through Programa COMPETE - Programa Operacional Factores de Competitividade (POFC) of QREN
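The core of the electromagnetic guidance idea above (a target sensor in the catheter tip and a second sensor in the needle) reduces to simple vector geometry: the virtual puncture trajectory is the line from needle tip to target, and the angulation error is the angle between that line and the needle axis. A minimal sketch with made-up sensor readings (all coordinates invented; millimetres assumed):

```python
import math

# Minimal sketch of the virtual-trajectory computation; sensor readings
# below are invented for illustration, positions in millimetres.
target_pos = (10.0, 42.0, 95.0)   # catheter-tip EM sensor at the renal target
needle_tip = (12.0, -8.0, 30.0)   # needle-tip EM sensor
needle_dir = (0.0, 0.6, 0.8)      # unit vector along the needle shaft

def subtract(a, b):
    return tuple(ai - bi for ai, bi in zip(a, b))

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def angle_deg(u, v):
    """Angle between two vectors, in degrees."""
    dot = sum(ui * vi for ui, vi in zip(u, v))
    return math.degrees(math.acos(dot / (norm(u) * norm(v))))

# Virtual trajectory: straight line from the needle tip to the target sensor.
trajectory = subtract(target_pos, needle_tip)
distance_mm = norm(trajectory)
# Angulation error: how far the needle axis deviates from that line.
error_deg = angle_deg(needle_dir, trajectory)

print(f"distance to target: {distance_mm:.1f} mm, "
      f"angulation error: {error_deg:.1f} deg")
```

In the thesis the same quantities would drive the 3D display and the sound/vibration feedback; here they are only printed.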

    Robotically Steered Needles: A Survey of Neurosurgical Applications and Technical Innovations

    This paper surveys both the clinical applications and the main technical innovations related to steered needles, with an emphasis on neurosurgery. Technical innovations generally center on curvilinear robots that can adopt a complex path circumventing critical structures and eloquent brain tissue. These advances include several needle-steering approaches, consisting of tip-based, lengthwise, base-motion-driven, and tissue-centered steering strategies. This paper also describes foundational mathematical models for steering, citing potential fields, nonholonomic bicycle-like models, spring models, and stochastic approaches. In addition, practical path-planning systems are addressed, including uncertainty modeling in path planning, intraoperative soft-tissue shift estimation from imaging scans acquired during the procedure, and simulation-based prediction. Neurosurgical scenarios have so far tended to emphasize straight needles, spanning deep-brain stimulation (DBS), stereoelectroencephalography (SEEG), intracerebral drug delivery (IDD), stereotactic brain biopsy (SBB), stereotactic needle aspiration for hematomas, cysts and abscesses, and brachytherapy, as well as thermal ablation of brain tumors and seizure-generating regions. We emphasize therapeutic considerations and complications documented in conjunction with these applications.
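The nonholonomic, bicycle-like models cited above capture the idea that a bevel-tip needle inserted without spinning follows an arc of roughly constant curvature. A hedged planar sketch, with an invented curvature value and simple Euler integration (real models are 3D and calibrated to tissue):

```python
import math

# Planar sketch of bevel-tip needle steering: the tissue deflects the tip
# at a roughly constant curvature KAPPA. The value below is illustrative.
KAPPA = 0.02   # curvature in 1/mm (radius of curvature = 50 mm)

def integrate_path(insertion_mm, step_mm=0.5):
    """Euler-integrate the tip pose (x, y, heading) during insertion."""
    x, y, theta = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(int(insertion_mm / step_mm)):
        x += step_mm * math.cos(theta)
        y += step_mm * math.sin(theta)
        theta += step_mm * KAPPA   # bevel deflects the heading at rate KAPPA
        path.append((x, y))
    return path

path = integrate_path(50.0)
tip_x, tip_y = path[-1]
print(f"tip after 50 mm insertion: ({tip_x:.1f}, {tip_y:.1f}) mm")
```

Spinning the needle 180° about its axis flips the sign of the curvature, which is the basic control input these steering strategies exploit.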

    Inviwo -- A Visualization System with Usage Abstraction Levels

    The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems employ levels of abstraction to improve the application development process, for instance by providing a data-flow network editor. Unfortunately, these abstractions introduce several issues that need to be circumvented through an abstraction-centered system design. A high level of abstraction often hides low-level details, making it difficult to directly access the underlying computing platform, which is important for achieving optimal performance. We therefore propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, by way of example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise when realizing such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, and layer-independent development through cross-layer documentation and debugging capabilities.
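The data-flow network abstraction mentioned above can be sketched in a few lines: processors pull results from their upstream inputs and are evaluated in dependency order. The processor names and operations below are invented toys, not Inviwo's actual API.

```python
# Toy sketch of a data-flow network of the kind such editors expose:
# processors pull from upstream inputs, evaluated in dependency order.
class Processor:
    def __init__(self, name, func, inputs=()):
        self.name = name
        self.func = func            # computes the output from input values
        self.inputs = list(inputs)  # upstream processors
        self.output = None

    def evaluate(self):
        # Evaluate upstream processors first (simple recursive pull),
        # reusing cached outputs where available.
        args = [p.evaluate() if p.output is None else p.output
                for p in self.inputs]
        self.output = self.func(*args)
        return self.output

# A three-stage chain: data source -> filter -> reduction stand-in.
source = Processor("VolumeSource", lambda: [1, 4, 2, 8])
clamp = Processor("Clamp", lambda data: [min(v, 5) for v in data], [source])
stats = Processor("Stats", lambda data: max(data), [clamp])
print(stats.evaluate())  # maximum value after clamping
```

A real system layers a GUI editor, property system, and GPU execution on top of this core idea; the paper's point is letting developers drop down through those layers when needed.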

    Retrospective evaluation and SEEG trajectory analysis for interactive multi-trajectory planner assistant

    Purpose: Focal epilepsy is a neurological disease that can be treated surgically by removing the area of the brain that generates the seizures. The stereoelectroencephalography (SEEG) procedure records patient brain activity through intracranial electrodes in order to localize the onset of seizures. The planning phase can be cumbersome and very time consuming, and no quantitative information is provided to neurosurgeons regarding the safety and efficacy of their trajectories. In this work, we present a novel architecture specifically designed to ease SEEG trajectory planning, built on the 3D Slicer platform. Methods: Trajectories are automatically optimized following criteria such as vessel distance and insertion angle. Multi-trajectory optimization and conflict resolution are handled through a selective brute-force approach based on the construction of a conflict graph. Additionally, electrode-specific optimization constraints can be defined, and an advanced verification module allows neurosurgeons to evaluate the feasibility of a trajectory. Results: A retrospective evaluation was performed using manually planned trajectories on 20 patients: the planning algorithm optimized and improved trajectories in 98% of cases. We were able to resolve and optimize the remaining 2% by applying electrode-specific constraints based on manual planning values. In addition, we found that the global parameters used discard 68% of the manually planned trajectories, even when they represent a safe clinical choice. Conclusions: Our approach improved manually planned trajectories in 98% of cases in terms of quantitative indexes, even when applying more conservative criteria than actual clinical practice. The improved multi-trajectory strategy overcomes the limitations of previous work and allows electrode optimization within a tolerable time span.
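The vessel-distance and insertion-angle criteria above amount to hard constraints plus a ranking step. A minimal, hedged sketch (candidate names, thresholds, and measurements are all invented; a real planner evaluates these metrics per voxel along each trajectory):

```python
# Sketch of trajectory scoring: keep candidates that respect a
# vessel-distance margin and an insertion-angle limit, then rank by
# safety margin. All numbers below are invented for illustration.
MIN_VESSEL_DIST_MM = 3.0
MAX_INSERTION_ANGLE_DEG = 30.0   # angle from the entry-surface normal

candidates = [
    # (name, nearest-vessel distance in mm, insertion angle in degrees)
    ("traj_a", 4.2, 18.0),
    ("traj_b", 1.9, 12.0),   # rejected: too close to a vessel
    ("traj_c", 5.6, 41.0),   # rejected: too oblique an entry
    ("traj_d", 3.4, 25.0),
]

def feasible(dist_mm, angle_deg):
    """Hard constraints: vessel clearance and entry-angle limit."""
    return (dist_mm >= MIN_VESSEL_DIST_MM
            and angle_deg <= MAX_INSERTION_ANGLE_DEG)

# Brute force: filter by the hard constraints, rank by vessel clearance.
ranked = sorted(
    (c for c in candidates if feasible(c[1], c[2])),
    key=lambda c: -c[1],
)
best = ranked[0]
print(f"best trajectory: {best[0]} (vessel distance {best[1]} mm)")
```

The paper's conflict-graph step would then check ranked trajectories pairwise for electrode collisions, replacing conflicting ones with the next-best feasible candidate.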

    Vision-based trajectory planning for robotic-assisted fetal surgery treating TTTS

    Medical robotics is the field focused on improving, and making easier, the work of medical personnel in certain interventions with the help of robotic systems. Although some such systems are already in use, there is a large number of R&D projects, especially for soft tissues. This project deals with the improvement of fetal surgery, more specifically the treatment of Twin-to-Twin Transfusion Syndrome (TTTS). TTTS is a syndrome that affects twin pregnancies: during the development of the fetuses, the blood vessels of both become interconnected at points called anastomoses. These connections cause an exchange of blood flow between the fetuses and, if not treated by fetal surgery, result in the death of both twins. A teleoperated robotic system is being developed in the ESAII laboratory to provide help and assistance to the surgeon during these surgeries. In this project, an automation of the robotic system is implemented, using information collected from the work environment together with computer vision tools. The objective is to create an automatic movement of the robot along the fastest and safest path from one point to another over the placenta's surface. This report details the development of the project and also describes its main related topics: the overall robotic system, designed and built in the ESAII laboratory of the UPC; fetal surgery; and the current state of the art of this type of medical robot.
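The "fastest and safest path" objective above is commonly posed as a shortest-path problem over a cost map, where vision assigns high costs to regions to avoid (e.g. near vessels). A hedged sketch using Dijkstra's algorithm on a tiny invented grid, not the project's actual planner:

```python
import heapq

# Sketch of the "fastest and safest path" idea: Dijkstra over a small
# cost grid where high values mark zones to avoid (e.g. near vessels).
# The grid values are invented for illustration.
GRID = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
    [9, 9, 1, 1],
]

def safest_path_cost(start, goal):
    """Minimum accumulated cell cost from start to goal (4-connected)."""
    rows, cols = len(GRID), len(GRID[0])
    dist = {start: GRID[start[0]][start[1]]}
    heap = [(dist[start], start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + GRID[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

print(safest_path_cost((0, 0), (0, 3)))  # detours around the high-cost cells
```

On a real placenta image the grid would come from the camera feed, with costs derived from detected vessel proximity rather than hand-written values.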