Sonification as a Reliable Alternative to Conventional Visual Surgical Navigation
Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and their integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method comprises a novel sonification solution for alignment tasks in four degrees of freedom based on frequency modulation (FM) synthesis. We compared the resulting accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while reducing the surgeon's need to shift attention to visual navigation displays and away from the surgical tools and targeted anatomy during task execution.
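The FM-synthesis idea above lends itself to a compact sketch. The following is a minimal illustration, assuming a hypothetical mapping in which the alignment error of one degree of freedom drives the modulation index, so that perfect alignment yields a pure carrier tone and misalignment adds sidebands; the carrier, modulator, and mapping parameters are illustrative, not the paper's actual values.

```python
import numpy as np

def fm_alignment_tone(error, duration=0.2, sample_rate=44100,
                      carrier_hz=440.0, mod_hz=110.0, max_index=8.0):
    """Generate one FM-synthesis audio cue for a single alignment axis.

    `error` is a normalized alignment error in [0, 1]; larger errors
    produce a rougher timbre by raising the modulation index.
    (Hypothetical mapping -- the paper's exact parameters are not given.)
    """
    t = np.arange(int(duration * sample_rate)) / sample_rate
    index = max_index * error  # error -> modulation depth
    # Classic FM: the carrier phase is modulated by a sine at mod_hz.
    signal = np.sin(2 * np.pi * carrier_hz * t
                    + index * np.sin(2 * np.pi * mod_hz * t))
    return signal.astype(np.float32)

# Perfect alignment yields a pure carrier tone; a large error adds sidebands.
aligned = fm_alignment_tone(0.0)
misaligned = fm_alignment_tone(1.0)
```

With four such mappings (one per degree of freedom), the surgeon could in principle resolve each axis by timbre alone, which is the kind of multisensory relief the abstract describes.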
Image-guided Breast Biopsy of MRI-visible Lesions with a Hand-mounted Motorised Needle Steering Tool
A biopsy is the only diagnostic procedure for accurate histological confirmation of breast cancer. When sonographic placement is not feasible, a Magnetic Resonance Imaging (MRI)-guided biopsy is often preferred. The lack of real-time imaging information and the deformations of the breast make it challenging to bring the needle precisely towards the tumour detected in pre-interventional Magnetic Resonance (MR) images. The current manual MRI-guided biopsy workflow is inaccurate and would benefit from a technique that allows real-time tracking and localisation of the tumour lesion during needle insertion. This paper proposes a robotic setup and software architecture to assist the radiologist in targeting MR-detected suspicious tumours. The approach benefits from image fusion of preoperative images with intraoperative optical tracking of markers attached to the patient's skin. A hand-mounted biopsy device has been constructed with an actuated needle base that drives the tip in the desired direction. The steering commands may be provided both by user input and by computer guidance. The workflow is validated through phantom experiments. On average, the suspicious breast lesion is targeted within a radius of 2.3 mm. The results suggest that robotic systems that take breast deformations into account have the potential to tackle this clinical challenge.
Comment: Submitted to the 2021 International Symposium on Medical Robotics (ISMR)
Three-Dimensional Sonification as a Surgical Guidance Tool
Interactive sonification is a well-known guidance method in navigation tasks. Researchers have repeatedly suggested the use of interactive sonification in neuronavigation and image-guided surgery. The hope is to reduce clinicians' cognitive load by relieving the visual channel while preserving the precision provided by image guidance. In this paper, we present a surgical use case, simulating a craniotomy preparation with a skull phantom. Through auditory, visual, and audiovisual guidance, non-clinicians successfully find targets on a skull that provides hardly any visual or haptic landmarks. The results show that interactive sonification enables novice users to navigate through three-dimensional space with high precision. The precision along the depth axis is highest in the audiovisual guidance mode, but adding audio leads to longer durations and longer motion trajectories.
Advances in real-time thoracic guidance systems
Substantial tissue motion (>1 cm) arises in the thoracic/abdominal cavity due to respiration. There are many clinical applications in which localizing tissue with high accuracy (<1 mm) is important. Potential applications include radiation therapy, radio frequency ablation, lung/liver biopsies, and brachytherapy seed placement. Recent efforts have made highly accurate sub-mm 3D localization of discrete points available via electromagnetic (EM) position monitoring. Technology from Calypso Medical allows for simultaneous tracking of up to three implanted wireless transponders. Additionally, Medtronic Navigation uses wired electromagnetic tracking to guide surgical tools for image-guided surgery (IGS). Utilizing real-time EM position monitoring, a prototype system was developed to guide a therapeutic linear accelerator to follow a moving target (tumor) within the lung/abdomen. In a clinical setting, electromagnetic transponders would be bronchoscopically implanted into the lung of the patient in or near the tumor. These transponders would affix to the lung tissue in a stable manner and allow real-time position knowledge throughout a course of radiation therapy. During each dose of radiation, the beam is either halted when the target is outside of a given threshold or, in a later study, follows the target in real time based on the EM position monitoring. We present quantitative analysis of the accuracy and efficiency of the radiation therapy tumor tracking system. EM tracking shows promise for IGS applications. Tracking the position of the instrument tip allows for minimally invasive intervention and alleviates the trauma associated with conventional surgery. Current clinical IGS implementations are limited to static targets (e.g., craniospinal, neurological, and orthopedic intervention). We present work on the development of a respiratory correlated image guided surgery (RCIGS) system.
In the RCIGS system, target positions are modeled via respiratory correlated imaging (4DCT) coupled with a breathing surrogate representative of the patient's respiratory phase/amplitude. Once the target position is known with respect to the surrogate, intervention can be performed when the target is in the correct location. The RCIGS system consists of imaging techniques and custom-developed software that gives visual and auditory feedback to the surgeon, indicating both the proper location and time for intervention. Presented here are the details of the IGS lung system along with quantitative results of the system accuracy in motion phantom, ex-vivo porcine lung, and human cadaver environments.
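The beam-gating rule described above — hold the beam only while the EM-tracked target is within a threshold of its planned position — can be sketched as follows. The 2 mm threshold, the positions, and the sinusoidal breathing trace are illustrative assumptions, not the study's values.

```python
import math

def beam_enabled(target_pos, planned_pos, threshold_mm=2.0):
    """Gate the treatment beam: enabled only while the EM-tracked target
    stays within `threshold_mm` of its planned position.
    The 2 mm default is an illustrative value, not the study's setting."""
    dist = math.dist(target_pos, planned_pos)  # Euclidean distance in mm
    return dist <= threshold_mm

# Simulated respiratory trace: the target oscillates 10 mm along one axis
# over a 4 s breathing cycle, sampled at 20 Hz for one cycle.
planned = (0.0, 0.0, 0.0)
samples = [i * 0.05 for i in range(80)]
duty = sum(
    beam_enabled((10.0 * math.sin(2 * math.pi * t / 4.0), 0.0, 0.0), planned)
    for t in samples
) / len(samples)
```

The resulting duty cycle (fraction of the cycle with the beam on) illustrates the trade-off the abstract mentions: tight gating thresholds improve targeting accuracy but lengthen treatment time, which motivates the later real-time tracking mode.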
Development and Validation of a Novel Skills Training Model for PCNL, an ESUT project
Background and aim: The aim of this study is to validate a totally non-biologic training model that combines the use of ultrasound and X-ray to train urologists and residents in urology in PerCutaneous NephroLithotripsy (PCNL). Methods: The training pathway was divided into three modules: Module 1, related to the acquisition of basic UltraSound (US) skills on the kidney; Module 2, consisting of correct nephrostomy placement; and Module 3, in which a complete PCNL was performed on the model. Trainees practiced on the model first on Module 1, then on Module 2, and finally on Module 3. The pathway was repeated at least three times. Afterward, they rated the performance of the model and the improvement gained using a global rating score questionnaire. Results: A total of 150 urologists took part in this study. Questionnaire outcomes on this training model showed a mean overall rating of 4.21 (range 1-5). Construct validity showed statistical significance between the first and the last time that trainees practiced on the PCNL model across the three modules. Statistical significance was also found among the scores of residents, fellows, and experts. Trainees increased their skills during the training modules. Conclusion: This PCNL training model allows for the acquisition of technical knowledge and skills such as basic US skills, nephrostomy placement, and the entire PCNL procedure. Its structured use could allow a better and safer training pathway to increase skill in performing PCNL.
Navigation system based in motion tracking sensor for percutaneous renal access
Doctoral Thesis in Biomedical Engineering
Minimally invasive kidney interventions are performed daily to diagnose and treat several renal diseases. Percutaneous renal access (PRA) is an essential but challenging stage of most of these procedures, since its outcome is directly linked to the physician's ability to precisely visualize and reach the anatomical target.
Nowadays, PRA is always guided with medical imaging assistance, most frequently using X-ray based imaging (e.g. fluoroscopy). Radiation in the surgical theater therefore represents a major risk to the medical team, and its exclusion from PRA would directly reduce the dose exposure of both patients and physicians.
To solve these problems, this thesis aims to develop a new hardware/software framework to intuitively and safely guide the surgeon during PRA planning and puncturing.
In terms of surgical planning, a set of methodologies was developed to increase the certainty of reaching a specific target inside the kidney. The abdominal structures most relevant to PRA were automatically clustered into different 3D volumes. To that end, primitive volumes were merged as a local optimization problem using the minimum description length principle and image statistical properties. A multi-volume ray casting method was then used to highlight each segmented volume. Results show that it is possible to detect all abdominal structures surrounding the kidney and to correctly estimate a virtual trajectory.
Concerning the percutaneous puncturing stage, both electromagnetic and optical tracking solutions were developed and tested in multiple in vitro, in vivo, and ex vivo trials. The optical tracking solution aids in establishing the desired puncture site and choosing the best virtual puncture trajectory. However, this system requires a line of sight to different optical markers placed at the needle base, limiting the accuracy when tracking inside the human body. Results show that the needle tip can deflect from its initial straight-line trajectory with an error higher than 3 mm. Moreover, a complex registration procedure and initial setup are needed.
On the other hand, a real-time electromagnetic tracking solution was developed. To this end, a catheter was inserted trans-urethrally towards the renal target. This catheter has a position and orientation electromagnetic sensor at its tip that functions as a real-time target locator. A needle integrating a similar sensor is then used. From the data provided by both sensors, a virtual puncture trajectory is computed and displayed in 3D visualization software. In vivo tests showed median renal and ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds). These results represent a puncture time improvement of between 75% and 85% compared to state-of-the-art methods.
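The two-sensor setup above can be sketched compactly: the catheter-tip sensor provides the target position, the needle sensor provides the needle tip position and axis, and guidance cues follow from comparing the needle axis with the straight line to the target. This is an illustrative reconstruction; the thesis' exact computation may differ.

```python
import numpy as np

def puncture_guidance(needle_tip, needle_dir, target):
    """Compute guidance cues from two electromagnetic sensor readings.

    `needle_tip` and `target` are 3D positions (mm) from the needle and
    catheter-tip sensors; `needle_dir` is the needle's axis direction.
    Returns the remaining distance (mm) and the angle (degrees) between
    the needle axis and the straight-line trajectory to the target.
    (Illustrative reconstruction -- the thesis may compute cues differently.)
    """
    to_target = np.asarray(target, float) - np.asarray(needle_tip, float)
    distance = np.linalg.norm(to_target)
    d = np.asarray(needle_dir, float)
    d = d / np.linalg.norm(d)  # normalize the needle axis
    cos_a = np.clip(np.dot(d, to_target / distance), -1.0, 1.0)
    return distance, np.degrees(np.arccos(cos_a))

# Needle tip 50 mm from the target, with the axis tilted off the ideal line.
dist, angle = puncture_guidance((0, 0, 0), (0, 0, 1), (0, 30, 40))
```

The distance and angulation pair is exactly the kind of scalar cue that the visual, auditory, and vibrotactile feedback channels described below would render for the surgeon.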
3D sound and vibrotactile feedback were also developed to provide additional information about the needle orientation. With this kind of feedback, it was verified that the surgeon tends to follow a virtual puncture trajectory with fewer deviations from the ideal trajectory, being able to anticipate any movement even without looking at a monitor. Best results show that 3D sound sources were correctly identified 79.2 ± 8.1% of the time, with an average angulation error of 10.4°. Vibration sources were accurately identified 91.1 ± 3.6% of the time, with an average angulation error of 8.0°.
In addition to the electromagnetic tracking framework, three circular ultrasound transducers were built with a needle working channel. Different fabrication setups were explored in terms of piezoelectric materials, transducer construction, single- versus multi-array configurations, and backing and matching material design. The A-scan signals retrieved from each transducer were filtered and processed to automatically detect reflected echoes and to alert the surgeon when undesirable anatomical structures lie along the puncture path. The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Results showed that the beam's cross-sectional area oscillates around the ceramic's radius, and it was possible to automatically detect echo signals in phantoms longer than 80 mm.
Hereupon, it is expected that introducing the proposed system into the PRA procedure will guide the surgeon along the optimal path towards the precise kidney target, increasing the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the developed framework has the potential to make PRA free of radiation for both patient and surgeon and to broaden the use of PRA to less specialized surgeons.
The present work was only possible thanks to the support of the Portuguese Science and Technology Foundation through the PhD grant with reference SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC) and by Fundo Europeu de Desenvolvimento Regional (FEDER), Programa COMPETE - Programa Operacional Factores de Competitividade (POFC) do QREN.
Virtual Reality Simulator for Training in Myringotomy with Tube Placement
Myringotomy refers to a surgical incision in the eardrum, and it is often followed by ventilation tube placement to treat middle-ear infections. The procedure is difficult to learn; hence, the objectives of this work were to develop a virtual-reality training simulator, assess its face and content validity, and implement quantitative performance metrics and assess construct validity.
A commercial digital gaming engine (Unity3D) was used to implement the simulator with support for 3D visualization of digital ear models and support for major surgical tasks. A haptic arm co-located with the stereo scene was used to manipulate virtual surgical tools and to provide force feedback.
A questionnaire was developed with 14 face validity questions focusing on realism and 6 content validity questions focusing on training potential. Twelve participants from the Department of Otolaryngology were recruited for the study. Responses to 12 of the 14 face validity questions were positive. One concern was with contact modeling related to tube insertion into the eardrum, and the second was with movement of the blade and forceps. The former could be resolved by using a higher resolution digital model for the eardrum to improve contact localization. The latter could be resolved by using a higher fidelity haptic device. With regard to content validity, 64% of the responses were positive, 21% were neutral, and 15% were negative.
In the final phase of this work, automated performance metrics were programmed and a construct validity study was conducted with 11 participants: 4 senior Otolaryngology consultants and 7 junior Otolaryngology residents. Each participant performed 10 procedures on the simulator, and metrics were automatically collected. Senior Otolaryngologists took significantly less time to completion compared to junior residents. Junior residents made 2.8 times as many errors as the experienced surgeons. The senior surgeons also had significantly longer incision lengths, more accurate incision angles, and lower magnification while keeping both the umbo and annulus in view. All metrics were able to discriminate senior Otolaryngologists from junior residents with a significance of p < 0.002.
The simulator has sufficient realism, training potential, and performance discrimination ability to warrant a more resource-intensive skills transference study.
Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Objective Structured Clinical Evaluation for Supraclavicular and Femoral Nerve Blocks Utilizing Ultrasound Guidance for Student Nurse Anesthetists
The purpose of this DNP project was to create a tool to teach and evaluate student registered nurse anesthetists (SRNAs) at The University of Southern Mississippi (USM) on supraclavicular and femoral nerve blocks to promote safety in the clinical setting. An objective structured clinical evaluation (OSCE) for ultrasound-guided supraclavicular and femoral nerve blocks was created as the basis for this project. Best-practice techniques, as identified by the AANA's 14th standard of nurse anesthesia practice, were researched and used in the development of the OSCE to promote a culture of safety for SRNAs and patients (AANA, 2019).
A culture of safety was promoted by creating an OSCE that can be used to test and evaluate SRNAs before they enter into a clinical setting. This will allow SRNAs to become more comfortable with performing these nerve blocks by practicing the process of the blocks as well as the techniques used to perform the blocks. Becoming more comfortable with these types of blocks will translate into increased safety for patients receiving these blocks from future USM SRNAs.
Step-by-step guides for the nerve blocks were developed. A survey was sent out to current second-year SRNAs and nurse anesthesia faculty at USM. The OSCE was edited based on their feedback after they participated in the OSCE and completed the survey. Feedback from the survey was positive, and minimal changes were made to the OSCE. All participants agreed that the OSCE was beneficial in presenting clear and exact instructions on the execution of supraclavicular and femoral nerve blocks and that it included all of the necessary information to help SRNAs be successful in the clinical arena.