An Experimental and Numerical Study on Tactile Neuroimaging: A Novel Minimally Invasive Technique for Intraoperative Brain Imaging
This is the peer reviewed version of the following article:
Moslem Sadeghi-Goughari, Yanjun Qian, Soo Jeon, Sohrab Sadeghi and Hyock-Ju Kwon, “An Experimental and Numerical Study on Tactile Neuroimaging: A Novel Minimally Invasive Technique for Intraoperative Brain Imaging,” The International Journal of Medical Robotics and Computer Assisted Surgery, published in final form at https://doi.org/10.1002/rcs.1893. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving.
Background
The success of tumor neurosurgery is highly dependent on the ability to accurately localize the operative target, which may shift during the operation. Intraoperative brain imaging is crucial in minimally invasive neurosurgery to detect the effect of brain shift on the tumor's location and to maximize the efficiency of tumor resection.
Method
The major objective of this research is to introduce tactile neuroimaging as a novel minimally invasive technique for intraoperative brain imaging. To investigate the feasibility of the proposed method, an experimental and numerical study was first performed on silicone phantoms mimicking brain tissue containing a tumor. The study was then extended to a clinical model with a meningioma tumor.
Results
The stress distribution on the brain surface shows high potential for intraoperative localization of the tumor.
Conclusion
Results suggest that tactile neuroimaging can be used to provide non-invasive, real-time intraoperative data on the tumor's features.
Natural Sciences and Engineering Research Council || RGPIN/2015-05273, RGPIN/2015-04118, RGPAS/354703-201
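As a toy illustration of the idea (not the paper's finite-element model), the lateral position of a stiff inclusion can be read off as the peak of a simulated surface stress profile. The profile shape, the bump width and amplitude, and all numbers below are hypothetical assumptions for the sketch:

```python
import numpy as np

def simulate_stress_profile(x, tumour_x, tumour_depth, baseline=1.0, contrast=0.5):
    """Synthetic surface stress: a baseline plus a bump centred over the tumour.

    Deeper tumours give a broader, weaker bump; the widths and amplitudes
    here are illustrative assumptions, not calibrated tissue values.
    """
    width = 2.0 * tumour_depth
    return baseline + (contrast / tumour_depth) * np.exp(-((x - tumour_x) / width) ** 2)

def localize_tumour(x, stress):
    """Estimate the tumour's lateral position as the stress-profile maximum."""
    return x[np.argmax(stress)]

x = np.linspace(-50.0, 50.0, 1001)          # mm along the brain surface
stress = simulate_stress_profile(x, tumour_x=12.0, tumour_depth=8.0)
print(localize_tumour(x, stress))            # ≈ 12 mm
```

In the paper's setting, the measured (rather than simulated) stress map would play the role of `stress` here.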
Real-time hybrid cutting with dynamic fluid visualization for virtual surgery
It is widely accepted that a reform in medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in a general surgery
Microscope Embedded Neurosurgical Training and Intraoperative System
In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques aimed at minimally invasive and minimally traumatic treatments are required, since intra-operative false movements can be devastating, resulting in patient deaths. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results.
In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitation of these approaches is that live tissue has different properties from dead tissue and that animal anatomy is significantly different from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon, and it is a vital skill.
The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of the spatula palpation of the LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery.
This architecture can also be adapted for intra-operative purposes. In this instance, the surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's ability for better intra-operative orientation by providing a three-dimensional view and other information necessary for safe navigation inside the patient.
These considerations served as motivation for the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed at our institute in a previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability.
Since both AR and VR share the same platform, the system can be referred to as Mixed Reality System for neurosurgery.
All the components are open source or at least based on a GPL license.
InterNAV3D: A Navigation Tool for Robot-Assisted Needle-Based Intervention for the Lung
Lung cancer is one of the leading causes of cancer deaths in North America. Recent advances in cancer treatment techniques can treat cancerous tumors but require a real-time imaging modality to provide intraoperative assistive feedback. Ultrasound (US) imaging is one such modality. Its application to the lungs has been limited because of the deterioration of US image quality due to the presence of air in the lungs; however, recent work has shown that appropriate lung deflation can improve the quality sufficiently to enable intraoperative, US-guided robotics-assisted techniques. The work described in this thesis focuses on this approach.
The thesis describes a project undertaken at Canadian Surgical Technologies and Advanced Robotics (CSTAR) that utilizes image processing techniques to further enhance US images and implements an advanced 3D virtual visualization software approach. The application considered is minimally invasive lung cancer treatment using procedures such as brachytherapy and microwave ablation, taking advantage of the accuracy and teleoperation capabilities of surgical robots to gain higher dexterity and more precise control over the therapy tools (needles and probes). A number of modules and widgets are developed and explained which improve the visibility of the physical features of interest in the treatment and help the clinician to have more reliable and accurate control of the treatment. Finally, the developed tools are validated with extensive experimental evaluations, and future developments are suggested to enhance the scope of the applications.
Algoritmos generales para simuladores de cirugía laparoscópica
Recent advances in fields such as modeling of deformable objects, haptic technologies, immersive technologies,
computation capacity and virtual environments have created the conditions to offer novel and suitable training tools and learning methods
in the medical area. One of these training tools is the virtual surgical simulator, which has no limitations of time or risk, unlike conventional
methods of training. Moreover, these simulators allow for the quantitative evaluation of the surgeon performance, giving the possibility to
create performance standards in order to define if the surgeon is well prepared to execute a determined surgical procedure on a real patient.
This paper describes the development of a virtual simulator for laparoscopic surgery. The simulator allows the multimodal
interaction between the surgeon and the surgical virtual environment using visual and haptic feedback devices. To make the
experience of the surgeon closer to the real surgical environment a specific user interface was developed. Additionally in this paper
we describe some implementations carried out to face typical challenges presented in surgical simulators related to the tradeoff
between real-time performance and high realism; for instance, the deformation of soft tissues is simulated using a GPU
(Graphics Processor Unit)-based implementation of the mass-spring model. In this case, we explain the algorithms developed
taking into account the particular case of a cholecystectomy procedure in laparoscopic surgery.
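The mass-spring soft-tissue model described above can be sketched on the CPU as follows; NumPy vectorisation stands in for the per-node GPU parallelism of the actual simulator, and the node count, stiffness, damping and time step are illustrative assumptions, not the simulator's parameters:

```python
import numpy as np

n = 20                                   # nodes in a 1-D strip of tissue
rest = 1.0                               # spring rest length
k, mass, damping, dt = 50.0, 0.1, 0.5, 0.01   # illustrative parameters
gravity = np.array([0.0, -9.81])

pos = np.stack([np.arange(n) * rest, np.zeros(n)], axis=1)
vel = np.zeros_like(pos)

def step(pos, vel):
    """One semi-implicit Euler step: spring forces + gravity + damping."""
    d = pos[1:] - pos[:-1]                       # vector along each spring
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest) * d / length         # Hooke's law per spring
    force = np.zeros_like(pos)
    force[:-1] += f                              # each spring pulls both endpoints
    force[1:] -= f
    force += mass * gravity - damping * vel
    vel = vel + dt * force / mass                # update velocities first
    pos = pos + dt * vel                         # then positions (stable scheme)
    pos[0], vel[0] = (0.0, 0.0), 0.0             # pin the two boundary nodes
    pos[-1], vel[-1] = ((n - 1) * rest, 0.0), 0.0
    return pos, vel

for _ in range(2000):
    pos, vel = step(pos, vel)
print(pos[n // 2, 1])                    # the midpoint sags under gravity
```

On the GPU, each node's force accumulation and integration runs as one thread; the per-node arithmetic is identical to the vectorised expressions above.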
Image-based tip force estimation on steerable intracardiac catheters using learning-based methods
Minimally invasive surgery has become the most commonly used approach to treating cardiovascular diseases; during such procedures, it is hypothesized that the absence of haptic (tactile) feedback and force information presented to surgeons is a restricting factor. The use of ablation catheters with an integrated sensor at the tip results in high cost and noise complications. In this thesis, two sensor-less methods are proposed to estimate the force at the intracardiac catheter's tip. Force estimation at the catheter tip is of great importance because insufficient force in ablation treatment may result in incomplete treatment, while excessive force leads to damage to the heart chamber. Besides, adding a sensor to intracardiac catheters adds complexity to their structure. This thesis covers two sensor-less approaches: (1) learning-based force estimation for intracardiac ablation catheters, and (2) a deep-learning force estimator system for intracardiac catheters. The first proposed method estimates catheter-tissue contact force by learning the deflected shape of the catheter tip section from its image. A regression model is developed based on predictor variables of tip curvature coefficients and knob actuation. The learning-based approach achieved force predictions in close agreement with experimental contact force measurements. The second approach proposes a deep learning method to estimate the contact forces directly from images of the catheter's tip. A convolutional neural network extracts the catheter's deflection from input images and translates it into the corresponding forces. A ResNet graph was implemented as the architecture of the proposed model to perform regression. The model can estimate catheter-tissue contact force from input images without any feature extraction or pre-processing; thus, it can estimate the force value regardless of tip displacement and deflection shape.
The evaluation results show that the proposed method can elicit a robust model from the specified data set and approximate the force with appropriate accuracy.
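The first, feature-based approach can be sketched as an ordinary least-squares fit from tip-curvature coefficients and knob actuation to contact force. The data below are synthetic stand-ins (the thesis fits measured catheter images and force-sensor readings), and the weights in `true_w` are invented purely to generate the toy data set:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200
curv = rng.uniform(0.0, 1.0, size=(n_samples, 2))     # tip curvature coefficients
knob = rng.uniform(0.0, 1.0, size=(n_samples, 1))     # knob actuation
X = np.hstack([curv, knob, np.ones((n_samples, 1))])  # predictors + intercept

true_w = np.array([0.8, 0.3, 0.5, 0.1])               # invented ground truth
y = X @ true_w + rng.normal(0.0, 0.01, n_samples)     # synthetic forces (N)

w, *_ = np.linalg.lstsq(X, y, rcond=None)             # least-squares regression fit
rmse = float(np.sqrt(np.mean((X @ w - y) ** 2)))
print(w.round(2), rmse)                               # recovered weights, fit error
```

With features that carry the deflection information, the recovered weights closely match the generating ones; the thesis's second, deep-learning approach removes the hand-crafted feature step entirely.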
Polyvinylidene fluoride - based MEMS tactile sensor for minimally invasive surgery
Minimally invasive surgery (MIS) procedures have grown rapidly over the past couple of decades. In MIS operations, endoscopic tools are inserted through a small incision in the human body. Although these procedures have many advantages, such as fast recovery time, minimal damage to the body and reduced post-operative complications, they do not provide any tactile feedback to the surgeon. This thesis reports on the design, finite element analysis, fabrication and testing of a micromachined piezoelectric endoscopic tactile sensor. Similar to commercial endoscopic graspers, the sensor is tooth-like in order to grasp slippery tissues. It consists of three layers: the first layer is a silicon layer with tooth shapes on top and two supports at the bottom, forming a thin plate and a U-channel; the second layer is a patterned Polyvinylidene Fluoride (PVDF) film; and the third layer is a supporting Plexiglas. The patterned PVDF film is placed in the middle, between the other two layers. When a concentric load is applied to the sensor, the magnitude and position of the applied load are obtained from the outputs of the sensing elements sandwiched between the silicon supports and the Plexiglas. In addition, when a soft object/tissue is placed on the sensor and a load is applied, the degree of softness/compliance of the object is obtained from the outputs of the middle PVDF sensing elements, which are glued to the back of the thin silicon plate. These outputs are related to the deformation of the silicon plate, which in turn is related to the softness of the contacting object. The sensor has high sensitivity and high dynamic range; as a result, it can potentially detect a small dynamic load, such as a pulse, as well as a high load, such as the firm grasping of a tissue by an endoscopic grasper. The entire surface of the tactile sensor is also active, which is an advantage in detecting the precise position of an applied point load on the grasper.
The finite element analysis and experimental results are in close agreement with each other. The sensor can potentially be integrated with a commercially available endoscopic grasper.
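A simplified version of the load reconstruction can be sketched with a simply-supported-beam model, an assumed stand-in for the thesis's finite-element analysis: a point load splits between the two support sensors in proportion to its distance from each end, so the two outputs can be inverted for both magnitude and position:

```python
def invert_readings(r_left, r_right, length):
    """Recover (magnitude, position) of a point load from two support reactions."""
    magnitude = r_left + r_right             # total load is shared by the supports
    position = length * r_right / magnitude  # lever rule locates the load
    return magnitude, position

L = 10.0                                     # plate span between supports, mm
F, x = 2.0, 3.0                              # applied load (N) and position (mm)
r_left = F * (L - x) / L                     # ideal support-sensor outputs
r_right = F * x / L
print(invert_readings(r_left, r_right, L))   # ≈ (2.0, 3.0)
```

The real sensor additionally uses the middle PVDF elements, whose plate-deflection signal this beam model does not capture.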
Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery
Minimally invasive surgery is playing an increasingly important role for patient
care. Whilst its direct patient benefit in terms of reduced trauma,
improved recovery and shortened hospitalisation has been well established,
there is a sustained need for improved training of the existing procedures
and the development of new smart instruments to tackle the issues of visualisation,
ergonomic control, and haptic and tactile feedback. For endoscopic
intervention, the small field of view in the presence of a complex anatomy
can easily introduce disorientation to the operator as the tortuous access
pathway is not always easy to predict and control with standard endoscopes.
Effective training through simulation devices, based on either virtual reality
or mixed-reality simulators, can help to improve the spatial awareness,
consistency and safety of these procedures.
This thesis examines the use of endoscopic videos for both simulation
and navigation purposes. More specifically, it addresses the challenging
problem of how to build high-fidelity subject-specific simulation environments
for improved training and skills assessment. Issues related to mesh
parameterisation and texture blending are investigated. With the maturity
of computer vision in terms of both 3D shape reconstruction and localisation
and mapping, vision-based techniques have enjoyed significant interest
in recent years for surgical navigation. The thesis also tackles the problem
of how to use vision-based techniques for providing a detailed 3D map and
dynamically expanded field of view to improve spatial awareness and avoid
operator disorientation. The key advantage of this approach is that it does
not require additional hardware, and thus introduces minimal interference
to the existing surgical workflow. The derived 3D map can be effectively
integrated with pre-operative data, allowing both global and local 3D navigation
by taking into account tissue structural and appearance changes.
Both simulation and laboratory-based experiments are conducted throughout
this research to assess the practical value of the proposed method.
Simulation Guidée par l’Image pour la Réalité Augmentée durant la Chirurgie Hépatique
The main objective of this thesis is to provide surgeons with tools for pre- and intra-operative decision support during minimally invasive hepatic surgery. These interventions are usually based on laparoscopic techniques or, more recently, flexible endoscopy. During such operations, the surgeon tries to remove a significant number of liver tumors while preserving the functional role of the liver. This involves defining an optimal hepatectomy, i.e. one ensuring that the post-operative liver volume is at least 55% of the original liver and that the hepatic vasculature is preserved as well as possible. Although planning of the intervention can now be envisaged on the basis of patient-specific preoperative data, the significant movements and deformations of the liver during surgery make this planning very difficult to exploit in practice. The work proposed in this thesis aims to provide augmented reality tools usable in intra-operative conditions to visualize, at any time, the position of the tumors and of the hepatic vascular networks.
Navigation system based in motion tracking sensor for percutaneous renal access
PhD thesis in Biomedical Engineering.
Minimally invasive kidney interventions are performed daily to diagnose and treat several renal
diseases. Percutaneous renal access (PRA) is an essential but challenging stage for most of these
procedures, since its outcome is directly linked to the physician’s ability to precisely visualize and
reach the anatomical target.
Nowadays, PRA is always guided by medical imaging, most frequently X-ray
based imaging (e.g. fluoroscopy). Radiation in the surgical theater thus represents a major risk to
the medical team, and its exclusion from PRA would directly diminish the dose exposure
of both patients and physicians.
To solve these problems, this thesis aims to develop a new hardware/software framework
to intuitively and safely guide the surgeon during PRA planning and puncturing.
In terms of surgical planning, a set of methodologies was developed to increase the certainty of
reaching a specific target inside the kidney. The most relevant abdominal structures for PRA were
automatically clustered into different 3D volumes. For that, primitive volumes were merged as a local
optimization problem using the minimum description length principle and image statistical
properties. A multi-volume Ray Cast method was then used to highlight each segmented volume.
Results show that it is possible to detect all abdominal structures surrounding the kidney, with the
ability to correctly estimate a virtual trajectory.
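The minimum-description-length merging step can be illustrated with a toy two-part code (the thesis's actual criterion and image statistics differ): two primitive volumes are merged when a single Gaussian model describes their pooled intensity samples more cheaply than two separate models. `MODEL_COST` and the sample distributions below are invented for the example:

```python
import numpy as np

MODEL_COST = 32.0  # assumed bits to encode one model's parameters

def code_length(samples):
    """Two-part Gaussian code length (bits) for one volume's intensity samples."""
    var = max(float(np.var(samples)), 1e-9)
    return 0.5 * len(samples) * np.log2(2 * np.pi * np.e * var) + MODEL_COST

def should_merge(a, b):
    """Merge iff one shared model describes the pooled samples more cheaply."""
    return code_length(np.concatenate([a, b])) < code_length(a) + code_length(b)

rng = np.random.default_rng(1)
kidney_a = rng.normal(100.0, 5.0, 300)   # two primitives of the same tissue
kidney_b = rng.normal(100.0, 5.0, 300)
fat = rng.normal(160.0, 5.0, 300)        # a clearly different structure
print(should_merge(kidney_a, kidney_b))  # same tissue: merging is cheaper
print(should_merge(kidney_a, fat))       # different tissue: keep separate
```

Merging similar regions saves one model's worth of parameter bits, while pooling dissimilar regions inflates the variance term; this trade-off is what drives the clustering.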
Concerning the percutaneous puncturing stage, both electromagnetic and optical solutions
were developed and tested in multiple in vitro, in vivo and ex vivo trials. The optical tracking solution
aids in establishing the desired puncture site and choosing the best virtual puncture trajectory.
However, this system required a line of sight to different optical markers placed at the needle base,
limiting the accuracy when tracking inside the human body. Results show that the needle tip can
deflect from its initial straight line trajectory with an error higher than 3 mm. Moreover, a complex
registration procedure and initial setup are needed.
On the other hand, a real-time electromagnetic tracking was developed. Hereto, a catheter
was inserted trans-urethrally towards the renal target. This catheter has a position and orientation
electromagnetic sensor on its tip that functions as a real-time target locator. Then, a needle integrating a similar sensor is used. From the data provided by both sensors, one computes a virtual puncture
trajectory, which is displayed in 3D visualization software. In vivo tests showed median renal and
ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds).
Such results represent a puncture time improvement of between 75% and 85% compared with
state-of-the-art methods.
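The trajectory computation from the two sensors can be sketched as follows; the coordinates are hypothetical, and the real system works with full position-and-orientation sensor poses rather than these bare vectors:

```python
import numpy as np

def puncture_guidance(needle_tip, needle_axis, target):
    """Return (distance to target, off-axis angle in degrees) for the needle."""
    to_target = target - needle_tip
    distance = float(np.linalg.norm(to_target))
    axis = needle_axis / np.linalg.norm(needle_axis)
    cos_a = np.clip(np.dot(to_target / distance, axis), -1.0, 1.0)
    return distance, float(np.degrees(np.arccos(cos_a)))

needle_tip = np.array([0.0, 0.0, 0.0])   # needle sensor position, mm (made up)
needle_axis = np.array([0.0, 0.0, 1.0])  # direction the needle currently points
target = np.array([10.0, 0.0, 40.0])     # catheter-tip sensor at the renal target
dist, angle = puncture_guidance(needle_tip, needle_axis, target)
print(round(dist, 1), round(angle, 1))   # → 41.2 14.0
```

Both quantities update in real time as the sensors move, which is what the 3D display (and the sound/vibration cues described below) convey to the surgeon.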
3D sound and vibrotactile feedback were also developed to provide additional information about
the needle orientation. Using these kinds of feedback, it was verified that the surgeon tends to
follow the virtual puncture trajectory with fewer deviations from the ideal trajectory,
being able to anticipate any movement even without looking at a monitor. Best results show that 3D
sound sources were correctly identified 79.2 ± 8.1% of the time, with an average angulation error of
10.4°. Vibration sources were accurately identified 91.1 ± 3.6% of the time, with an average
angulation error of 8.0°.
In addition to the electromagnetic tracking (EMT) framework, three circular ultrasound transducers
were built with a needle working channel. Different fabrication setups were explored in terms of
piezoelectric materials, transducer construction, single vs. multi-array configurations, and backing
and matching material design. The A-scan signals retrieved from each transducer were filtered and
processed to automatically detect reflected echoes and to alert the surgeon when undesirable
anatomical structures lie within the puncture path. The transducers were mapped in a water tank and
tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area
oscillates around the ceramic's radius, and that echo signals could be automatically detected in
phantoms longer than 80 mm.
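The echo-detection step can be sketched on a synthetic A-scan (the sampling rate, burst shapes and threshold below are invented, not the transducers' actual characteristics): rectify the RF trace, smooth it into an envelope, and report threshold crossings as echo onsets:

```python
import numpy as np

fs = 10e6                                  # sampling rate, Hz (assumed)
t = np.arange(0, 200e-6, 1 / fs)           # 200 µs A-scan trace

def burst(t, t0, f0=1e6, width=5e-6):
    """Synthetic reflected echo: Gaussian-windowed tone burst centred at t0."""
    return np.exp(-((t - t0) / width) ** 2) * np.sin(2 * np.pi * f0 * (t - t0))

# RF trace: weak electronic noise plus two reflectors at 60 µs and 140 µs.
rf = 0.01 * np.random.default_rng(2).standard_normal(t.size)
rf += burst(t, 60e-6) + 0.5 * burst(t, 140e-6)

def detect_echoes(rf, window=10, threshold=0.15):
    """Rectify, smooth over one carrier period, return echo-onset sample indices."""
    env = np.convolve(np.abs(rf), np.ones(window) / window, mode="same")
    above = env > threshold
    rising = above & ~np.roll(above, 1)    # rising edges of the detection mask
    return np.flatnonzero(rising)

onsets_us = detect_echoes(rf) / fs * 1e6
print(onsets_us)                           # onset times (µs) of the two echoes
```

In the guidance system, an onset appearing at an unexpected depth would trigger the alert that a structure lies in the puncture path.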
Hereupon, it is expected that the introduction of the proposed system into the PRA procedure
will allow the surgeon to be guided along the optimal path towards the precise kidney target,
increasing the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA.
Moreover, the developed framework has the potential to make PRA free of radiation for both patient
and surgeon and to broaden the use of PRA to less specialized surgeons.
The present work was only possible thanks to the support of the Portuguese Science and
Technology Foundation through PhD grant SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC) and by
Fundo Europeu de Desenvolvimento Regional (FEDER), Programa COMPETE - Programa Operacional
Factores de Competitividade (POFC) do QREN
- …