
    How to Build a Patient-Specific Hybrid Simulator for Orthopaedic Open Surgery: Benefits and Limits of Mixed-Reality Using the Microsoft HoloLens

    Orthopaedic simulators are popular in innovative surgical training programs, where trainees gain procedural experience in a safe and controlled environment. Recent studies suggest that an ideal simulator should combine haptic, visual, and audio technology to create an immersive training environment. This article explores the potential of mixed reality using the HoloLens to develop a hybrid training system for orthopaedic open surgery. Hip arthroplasty, one of the most common orthopaedic procedures, was chosen as a benchmark to evaluate the proposed system. Patient-specific anatomical 3D models were extracted from a patient's computed tomography scan to implement the virtual content and to fabricate the physical components of the simulator. Rapid prototyping was used to create synthetic bones. The Vuforia SDK was used to register the virtual and physical content. The Unity3D game engine was employed to develop the software, which allows interaction with the virtual content through head movements, gestures, and voice commands. Quantitative tests estimated the accuracy of the system by evaluating the perceived position of augmented-reality targets; mean and maximum errors matched the requirements of the target application. Qualitative tests evaluated the workload and usability of the HoloLens for our orthopaedic simulator, considering visual and audio perception as well as interaction and ergonomics issues. The perceived overall workload was low, and self-assessed performance was considered satisfactory. Visual and audio perception and gesture and voice interaction received positive feedback. Postural discomfort and visual fatigue received a non-negative evaluation for a 40-minute simulation session. These results encourage the use of mixed reality to implement a hybrid simulator for orthopaedic open surgery. An optimal design of the simulation tasks and equipment setup is required to minimize user discomfort. Future work will include face validity, content validity, and construct validity studies to complete the assessment of the hip arthroplasty simulator.

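The quantitative accuracy test described in the abstract above (comparing perceived positions of augmented-reality targets against ground truth, then checking mean and maximum errors) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the coordinates are invented for the example.

```python
import numpy as np

def registration_errors(perceived, ground_truth):
    """Per-target Euclidean errors (mm) between perceived and true positions."""
    perceived = np.asarray(perceived, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(perceived - ground_truth, axis=1)

# Example: three AR targets, perceived vs. true positions in mm (made up).
perceived = [[10.2, 5.1, 0.3], [20.0, 4.8, 1.1], [30.4, 5.0, 0.2]]
truth     = [[10.0, 5.0, 0.0], [20.5, 5.0, 1.0], [30.0, 5.0, 0.0]]

errors = registration_errors(perceived, truth)
print(f"mean error: {errors.mean():.2f} mm, max error: {errors.max():.2f} mm")
```

Both statistics matter here: the mean captures typical overlay accuracy, while the maximum guards against outlier targets that would be unacceptable in a surgical simulator.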

    Navigation system based in motion tracking sensor for percutaneous renal access

    Doctoral thesis in Biomedical Engineering. Minimally invasive kidney interventions are performed daily to diagnose and treat several renal diseases. Percutaneous renal access (PRA) is an essential but challenging stage of most of these procedures, since its outcome is directly linked to the physician's ability to precisely visualize and reach the anatomical target. Nowadays, PRA is always guided with medical imaging assistance, most frequently X-ray-based imaging (e.g. fluoroscopy). Radiation in the surgical theater thus represents a major risk to the medical team, and excluding it from PRA would directly reduce the dose exposure of both patients and physicians. To address these problems, this thesis aims to develop a new hardware/software framework to intuitively and safely guide the surgeon during PRA planning and puncturing. In terms of surgical planning, a set of methodologies was developed to increase the certainty of reaching a specific target inside the kidney. The abdominal structures most relevant to PRA were automatically clustered into different 3D volumes. To this end, primitive volumes were merged as a local optimization problem using the minimum description length principle and statistical properties of the image. A multi-volume ray-casting method was then used to highlight each segmented volume. Results show that it is possible to detect all abdominal structures surrounding the kidney and to correctly estimate a virtual trajectory. Concerning the percutaneous puncturing stage, both electromagnetic and optical solutions were developed and tested in multiple in vitro, in vivo, and ex vivo trials. The optical tracking solution aids in establishing the desired puncture site and choosing the best virtual puncture trajectory. However, this system requires a line of sight to optical markers placed at the needle base, limiting its accuracy when tracking inside the human body.
Results show that the needle tip can deflect from its initial straight-line trajectory with an error greater than 3 mm, and a complex registration procedure and initial setup are needed. A real-time electromagnetic tracking solution was therefore developed. Here, a catheter was inserted trans-urethrally towards the renal target. This catheter has a position-and-orientation electromagnetic sensor at its tip that functions as a real-time target locator; a needle integrating a similar sensor is then used. From the data provided by both sensors, a virtual puncture trajectory is computed and displayed in 3D visualization software. In vivo tests showed median renal and ureteral puncture times of 19 and 51 seconds, respectively (ranges 14 to 45 and 45 to 67 seconds). These results represent a puncture-time improvement of between 75% and 85% compared to state-of-the-art methods. 3D sound and vibrotactile feedback were also developed to provide additional information about needle orientation. With this kind of feedback, the surgeon tends to follow the virtual puncture trajectory with fewer deviations from the ideal path and can anticipate movements even without looking at a monitor. In the best results, 3D sound sources were correctly identified 79.2 ± 8.1% of the time with an average angulation error of 10.4°, and vibration sources were correctly identified 91.1 ± 3.6% of the time with an average angulation error of 8.0°. In addition to the electromagnetic tracking framework, three circular ultrasound transducers with a needle working channel were built, exploring different fabrication setups in terms of piezoelectric materials, transducer construction, single- vs. multi-array configurations, and backing and matching material design.
The A-scan signals retrieved from each transducer were filtered and processed to automatically detect reflected echoes and to alert the surgeon when undesirable anatomical structures lie along the puncture path. The transducers were mapped in a water tank and tested in a study involving 45 phantoms. Results showed that the beam cross-sectional area oscillates around the ceramic radius and that echo signals could be automatically detected in phantoms longer than 80 mm. It is therefore expected that introducing the proposed system into the PRA procedure will guide the surgeon along the optimal path towards the precise kidney target, increasing the surgeon's confidence and reducing complications (e.g. organ perforation) during PRA. Moreover, the developed framework has the potential to make PRA free of radiation for both patient and surgeon and to broaden its use to less specialized surgeons. This work was supported by the Portuguese Science and Technology Foundation through PhD grant SFRH/BD/74276/2010, funded by FCT/MEC (PIDDAC) and by Fundo Europeu de Desenvolvimento Regional (FEDER), Programa COMPETE - Programa Operacional Factores de Competitividade (POFC) do QREN.
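The abstract above states that a virtual puncture trajectory is computed from the two electromagnetic sensors (one on the trans-urethral catheter tip acting as the target, one on the needle). A minimal sketch of that geometric step, assuming each sensor reports a 3D position in a common tracker frame (function names and coordinates are illustrative, not from the thesis):

```python
import numpy as np

def puncture_trajectory(needle_tip, target):
    """Unit direction vector from needle tip to target, plus distance (mm)."""
    needle_tip, target = np.asarray(needle_tip, float), np.asarray(target, float)
    delta = target - needle_tip
    distance = np.linalg.norm(delta)
    return delta / distance, distance

def angular_deviation(needle_axis, trajectory):
    """Angle (degrees) between the needle's current axis and the planned path."""
    a = np.asarray(needle_axis, float)
    a /= np.linalg.norm(a)
    cos_theta = np.clip(np.dot(a, trajectory), -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

# Example poses (mm): needle at the skin, catheter sensor inside the kidney.
direction, dist = puncture_trajectory([0.0, 0.0, 0.0], [30.0, 40.0, 0.0])
print(f"distance to target: {dist:.1f} mm")  # 50.0 mm
print(f"deviation: {angular_deviation([1, 0, 0], direction):.1f} deg")
```

Because the catheter sensor moves with the kidney, recomputing this trajectory each frame keeps the displayed path valid under respiratory motion; the deviation angle is the kind of scalar that the 3D sound and vibrotactile feedback described above could encode.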

    A mixed reality framework for surgical navigation: approach and preliminary results

    The overarching purpose of this research is to understand whether mixed reality can enhance a surgeon's manipulation skills during minimally invasive procedures. Minimally invasive surgery (MIS) utilizes small cuts in the skin - or sometimes natural orifices - to deploy instruments inside a patient's body, while a live video feed of the surgical site is provided by an endoscopic camera and displayed on a screen. MIS is associated with many benefits: small scars, less pain, and shorter hospitalization time compared to traditional open surgery. However, these benefits come at a cost: because surgeons have to work by looking at a monitor, and not down at their own hands, MIS disrupts their eye-hand coordination and makes even simple surgical maneuvers challenging to perform. In this study, we use mixed-reality technology to superimpose anatomical models over the surgical site and explore whether this can mitigate the problem.
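The core step behind superimposing an anatomical model on a camera view, as described above, is projecting the model's 3D points into the image with a camera model. A hedged sketch using a standard pinhole projection; the intrinsic matrix and pose below are assumed example values, not parameters from the study.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points to pixel coordinates via a pinhole camera."""
    X = np.asarray(points_3d, float).T       # shape 3xN
    cam = R @ X + t.reshape(3, 1)            # world frame -> camera frame
    uv = K @ cam                             # apply camera intrinsics
    return (uv[:2] / uv[2]).T                # perspective divide -> pixels

# Assumed intrinsics: 800 px focal length, principal point at (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)                # identity pose for simplicity

pixels = project_points([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]], K, R, t)
print(pixels)  # the point on the optical axis lands at the principal point
```

In a real system, R and t would come from tracking the endoscope or headset pose each frame, so the overlay stays registered as the camera moves.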

    InterNAV3D: A Navigation Tool for Robot-Assisted Needle-Based Intervention for the Lung

    Lung cancer is one of the leading causes of cancer deaths in North America. Recent advances in cancer treatment can treat cancerous tumors but require a real-time imaging modality to provide intraoperative assistive feedback. Ultrasound (US) imaging is one such modality. Its application to the lungs has been limited because the presence of air deteriorates US image quality; however, recent work has shown that appropriate lung deflation can improve the quality sufficiently to enable intraoperative, US-guided, robotics-assisted techniques. The work described in this thesis focuses on this approach. The thesis describes a project undertaken at Canadian Surgical Technologies and Advanced Robotics (CSTAR) that utilizes image processing techniques to further enhance US images and implements an advanced 3D virtual visualization software approach. The application considered is minimally invasive lung cancer treatment using procedures such as brachytherapy and microwave ablation, taking advantage of the accuracy and teleoperation capabilities of surgical robots to gain higher dexterity and precise control over the therapy tools (needles and probes). A number of modules and widgets are developed and explained which improve the visibility of the physical features of interest in the treatment and help the clinician have more reliable and accurate control of the treatment. Finally, the developed tools are validated with extensive experimental evaluations, and future developments are suggested to broaden the scope of the applications.
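The abstract above mentions image processing to enhance US images without specifying the methods. One common enhancement step for ultrasound is median filtering to suppress speckle noise; the pure-NumPy sketch below is an assumed illustration of that general technique, not the thesis pipeline (real systems would use optimized filters).

```python
import numpy as np

def median_despeckle(img, k=3):
    """Apply a k x k median filter (odd k) with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    # Collect every shifted k x k view, then take the per-pixel median.
    windows = [padded[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(k) for j in range(k)]
    return np.median(np.stack(windows), axis=0)

noisy = np.full((5, 5), 100.0)
noisy[2, 2] = 255.0                  # isolated speckle-like spike
clean = median_despeckle(noisy)
print(clean[2, 2])                   # spike replaced by the local median
```

Median filtering is preferred over simple averaging here because it removes isolated bright speckle while preserving tissue-boundary edges.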

    'Let it Grow'- Immersive installation in relation to culture expression and audiences' perceptual experience

    Immersive art installations have become one of the most rapidly growing segments of the immersive design industry. As hybrids of art and technology that collectively disrupt the zone of single-material expression, full-body sensory immersion installations have emerged and given people more opportunities to experience different realities. As various immersive exhibitions emerged in 2019, it became evident that more pop-up exhibitions were being generated by instant interaction and astonishing digital illusions. The lucrative market and audiences' pursuit of novelty underlying the industry's overall development provoked the critical question this thesis considers: 'What is the intrinsic value behind immersive art?' To enhance the cultural perception of this project, it is essential to understand the relationship between audiences' cultural experiences and a range of design methods. Based on audiences' linear experience of this project, this study divides the audience experience into three stages: 'before exploring', 'exploring', and 'after exploring'. In the first stage, this study draws on psychology to understand how the inherent value of artworks promotes people's intrinsic motivation for spontaneous immersion. In the second stage, this study conducts two representative case studies, adopting several design factors, to understand how the aesthetic distance between an artwork and audiences' knowledge affects their perception of an unknown culture; the goal is to find the optimal aesthetic balance and to further develop reflective design approaches. In the third stage, this study carries out a practical design using these approaches to better understand participants' perceptual experience.
To investigate how the perceptual process evolves, a physically immersive installation was created based on the Finnish myth of 'Revontulet'. In addition, a questionnaire was designed to gauge audiences' interest and willingness to participate in the installation and to gather their impressions of the various design factors, in order to evaluate the design approaches and the design work.

    Microstructural characterization of 3D printed cementitious materials

    Three-dimensional concrete printing (3DCP) has progressed rapidly in recent years. With the aim of realizing both buildings and civil works without any molding, the need has grown not only for reliable mechanical properties of printed concrete but also for more durable and environmentally friendly materials. As a consequence of superpositioning cementitious layers, voids are created that can negatively affect durability. This paper presents the results of an experimental study on the relationship between 3DCP process parameters and the resulting microstructure. The effect of two process parameters (printing speed and inter-layer time) on the microstructure was established for the fresh and hardened states, and the results were correlated with mechanical performance. At a higher printing speed, a lower surface roughness was created due to the higher kinetic energy of the sand particles and the higher force applied. Microstructural investigations revealed that the amount of unhydrated cement particles was higher for the shorter inter-layer interval (i.e., 10 min). This phenomenon could be related to the higher water demand of the printed layer to rebuild the early calcium-silicate-hydrate (C-S-H) bridges, leaving less water available for further hydration. The number of pores and the pore distribution were also more pronounced for shorter time intervals. Increasing either the inter-layer time interval or the printing speed lowered the mechanical performance of the printed specimens. This study emphasizes that individual process parameters affect not only the structural behavior of the material but also its durability and, consequently, its resistance to aggressive chemical substances.
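Microstructural studies like the one above typically quantify pores from segmented micrographs. As an assumed illustration (not the paper's method), porosity can be estimated as the area fraction of pore pixels in a thresholded grayscale image; the threshold and data below are invented for the example.

```python
import numpy as np

def porosity_fraction(gray, pore_threshold=50):
    """Fraction of pixels darker than the threshold (treated as pores)."""
    gray = np.asarray(gray)
    return float((gray < pore_threshold).mean())

# Toy 3x3 "micrograph": dark pixels (30, 40) represent pores.
micrograph = np.array([[200, 200, 30],
                       [200, 40, 200],
                       [200, 200, 200]])
print(f"porosity: {porosity_fraction(micrograph):.3f}")  # 2 pore pixels of 9
```

Comparing such area fractions across specimens printed with different inter-layer times or speeds is one simple way to make the qualitative "more pronounced pores" observation measurable.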

    Mixed Reality’s Ability To Craft And Establish An Experience Of Space

    Mixed reality, when integrated into architecture, will enable open spaces and the perception of the built environment to change rapidly with little physical fabrication. As architects, we design with a desired experience of space in mind and don't typically design for a rapidly changing built environment that meets a fluctuating programmatic demand. The theater program, however, often requires such rapid changes to the perceived environment - that is, the stage - and is an activator of social interaction based on a shared experience of performances. What would be the architectural implications of integrating mixed reality as a factor of the built environment? Can mixed-reality technology even create an altered experience of space? To help answer these questions, this research conducted a thorough investigation of phenomenological relations, and studies and tests using the Microsoft HoloLens were carried out to simulate or verify those relations. As a final output, a theater with mixed reality integrated into the design process as a key design factor is the main programmatic research output of this project, positing both a built environment and a flexible-use space as possible means to redefine what we currently know as a theater.