
    Hybrid Simulation and Planning Platform for Cryosurgery with Microsoft HoloLens

    Cryosurgery is a technique of growing popularity involving tissue ablation under controlled freezing. Technological advances in devices, along with improvements in surgical technique, have turned cryosurgery from an experimental procedure into an established option for treating several diseases. However, cryosurgery is still limited by inaccurate planning based primarily on 2D visualization of the patient's preoperative images. Several works have aimed at modelling cryoablation through heat transfer simulations; however, most software applications do not meet some key requirements for clinical routine use, such as high computational speed and user-friendliness. This work aims to develop an intuitive platform for anatomical understanding and pre-operative planning by integrating the information content of radiological images and cryoprobe specifications either in a 3D virtual environment (desktop application) or in a hybrid simulator, which exploits 3D printing and the augmented reality functionalities of Microsoft HoloLens. The proposed platform was preliminarily validated through the retrospective planning/simulation of two surgical cases. Results suggest that the platform is easy and quick to learn and could be used in clinical practice to improve anatomical understanding, to make surgical planning easier than the traditional method, and to strengthen the memorization of surgical planning.
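    The heat-transfer simulations mentioned above are typically built on the Pennes bioheat equation. The following is a minimal illustrative sketch, not the platform's actual solver: a 1D explicit finite-difference discretization with a fixed cryoprobe boundary temperature. All tissue parameters are assumed values for illustration, and the sketch omits the latent heat of freezing that full cryoablation models include.

```python
import numpy as np

# 1D explicit finite-difference sketch of the Pennes bioheat equation:
#   rho*c * dT/dt = k * d2T/dx2 + w_b*c_b * (T_a - T)
# All tissue parameters below are illustrative assumptions, not clinical values.
k = 0.5         # thermal conductivity [W/(m*K)]
rho_c = 3.6e6   # volumetric heat capacity [J/(m^3*K)]
w_b_cb = 4.0e4  # blood perfusion heat-exchange term [W/(m^3*K)]
T_a = 37.0      # arterial blood temperature [degrees C]

n, dx, dt = 101, 1e-3, 0.05      # grid points, spacing [m], time step [s]
T = np.full(n, 37.0)             # tissue initially at body temperature
T[0] = -40.0                     # cryoprobe tip held at -40 C (boundary)

for _ in range(2000):            # simulate ~100 s of freezing
    lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2          # second derivative
    T[1:-1] += dt / rho_c * (k * lap + w_b_cb * (T_a - T[1:-1]))
    T[0], T[-1] = -40.0, 37.0    # re-impose fixed boundary conditions

# Distance from the probe at which tissue first rises above a lethal
# isotherm (here -20 C, a commonly cited ablation threshold)
front = np.argmax(T > -20.0) * dx
print(f"-20 C isotherm reaches ~{front * 1000:.1f} mm from the probe")
```

    The explicit scheme is stable here because dt is well below the diffusion limit dx**2 * rho_c / (2 * k); a production planner would use an implicit solver and include phase change to model the ice ball directly.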

    Development and Validation of a Hybrid Virtual/Physical Nuss Procedure Surgical Trainer

    With continuous advancements and adoption of minimally invasive surgery, proficiency with the nontrivial surgical skills involved is becoming a greater concern. Consequently, the use of surgical simulation has been increasingly embraced for training and skill-transfer purposes. Some systems utilize haptic feedback within a high-fidelity, anatomically correct virtual environment, whereas others use manikins, synthetic components, or box trainers to mimic primary components of a corresponding procedure. Surgical simulation development for some minimally invasive procedures is still, however, suboptimal or otherwise embryonic. This is true for the Nuss procedure, a minimally invasive surgery for correcting pectus excavatum (PE), a congenital chest wall deformity. This work aims to address this gap by exploring the challenges of developing both a purely virtual and a purely physical simulation platform of the Nuss procedure and their implications in a training context. This work then describes the development of a hybrid mixed-reality system that integrates virtual and physical constituents, as well as an augmentation of the haptic interface, to reproduce the primary steps of the Nuss procedure and satisfy clinically relevant prerequisites for its training platform. Furthermore, this work carries out a user study to investigate the system's face, content, and construct validity to establish its faithfulness as a training platform.

    Assessment of a novel patient-specific 3D printed multi-material simulator for endoscopic sinus surgery

    Background: Three-dimensional (3D) printing is an emerging tool in the creation of anatomical models for surgical training. Its use in endoscopic sinus surgery (ESS) has been limited because of the difficulty of replicating the anatomical details. Aim: To describe the development of a patient-specific 3D printed multi-material simulator for use in ESS, and to validate it as a training tool among a group of residents and experts in ear-nose-throat (ENT) surgery. Methods: Advanced material jetting 3D printing technology was used to produce both the soft tissues and the bony structures of the simulator, to increase the anatomical realism and tactile feedback of the model. A total of 3 ENT residents and 9 ENT specialists were recruited to perform both non-destructive tasks and ESS steps on the model. The anatomical fidelity and the usefulness of the simulator in ESS training were evaluated through specific questionnaires. Results: The tasks were accomplished by 100% of participants, and the survey showed overall high scores both for anatomical fidelity and for usefulness in training. Dacryocystorhinostomy, medial antrostomy, and turbinectomy were rated as accurately replicable on the simulator by 75% of participants. Positive scores were also obtained for ethmoidectomy and DRAF procedures, while the replication of sphenoidotomy received neutral ratings from half of the participants. Conclusion: This study demonstrates that a 3D printed multi-material model of the sino-nasal anatomy can be generated with a high level of anatomical accuracy and haptic response. This technology has the potential to be useful in surgical training as an alternative or complementary tool to cadaveric dissection.

    Radiological Society of North America (RSNA) 3D printing Special Interest Group (SIG): Guidelines for medical 3D printing and appropriateness for clinical scenarios

    This issue of the journal Cadernos de Estudos Sociais was being organized when we were struck by the death of the sociologist Ernesto Laclau. His passing on 13 April 2014 surprised everyone, and particularly the editor Joanildo Burity, who was his doctoral advisee at the University of Essex, England, and who had recently brought him to the Fundação Joaquim Nabuco for a lecture, allowing many to engage in dialogue with one of the great contemporary Latin American intellectuals. We therefore pay tribute to the Argentine sociologist by publishing a previously unpublished interview granted during his visit to Recife in 2013, closing this issue with a special section on his career.

    Evaluating Human Performance for Image-Guided Surgical Tasks

    The following work focuses on the objective evaluation of human performance for two different interventional tasks: targeted prostate biopsy using a tracked biopsy device, and external ventricular drain placement using a mobile-based augmented reality device for visualization and guidance. In both tasks, a human performance methodology was applied that respects the trade-off between speed and accuracy for users conducting a series of targeting tasks with each device. This work outlines the development and application of performance evaluation methods using these devices, as well as details of the implementation of the mobile AR application. It was determined that the Fitts' Law methodology can be applied to the evaluation of tasks performed in each surgical scenario, and it was sensitive enough to differentiate performance across a range spanning experienced and novice users. This methodology is valuable for the future development of training modules for these and other medical devices, and can provide details about the underlying characteristics of the devices and how they can be optimized with respect to human performance.
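    The Fitts' Law methodology referenced above quantifies the speed-accuracy trade-off by assigning each targeting trial an index of difficulty and deriving a throughput in bits per second. A minimal sketch using the standard Shannon formulation (the trial values are hypothetical, not data from the study):

```python
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    # Shannon formulation of Fitts' law: ID = log2(D/W + 1), in bits.
    # Harder trials (farther or smaller targets) have a higher ID.
    return math.log2(distance_mm / width_mm + 1)

def throughput(distance_mm: float, width_mm: float, movement_time_s: float) -> float:
    # Throughput (bits/s) folds speed and accuracy into one comparable score,
    # which is what lets the methodology rank novice vs. experienced users.
    return index_of_difficulty(distance_mm, width_mm) / movement_time_s

# Hypothetical targeting trial: reach a 4 mm target 60 mm away in 1.2 s
tid = index_of_difficulty(60, 4)              # log2(16) = 4.00 bits
tp = throughput(60, 4, 1.2)
print(f"ID = {tid:.2f} bits, throughput = {tp:.2f} bits/s")
```

    In practice, studies of this kind average throughput over many trials per condition, often using the "effective" target width computed from the observed endpoint scatter rather than the nominal width.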

    A deep learning framework for real-time 3D model registration in robot-assisted laparoscopic surgery

    Introduction The current study presents a deep learning framework to determine, in real time, the position and rotation of a target organ from an endoscopic video. These inferred data are used to overlay the 3D model of the patient's organ on its real counterpart. The resulting augmented video flow is streamed back to the surgeon as support during laparoscopic robot-assisted procedures. Methods This framework exploits semantic segmentation; thereafter, two techniques, based on convolutional neural networks and motion analysis, infer the rotation. Results The segmentation shows optimal accuracy, with a mean IoU score greater than 80% in all tests. Different performance levels are obtained for rotation, depending on the surgical procedure. Discussion Although the precision of the presented methodology varies with the testing scenario, this work is a first step toward the adoption of deep learning and augmented reality to generalise the automatic registration process.
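    The IoU (Intersection over Union) score used to evaluate the segmentation above is a standard overlap metric for binary masks. A minimal sketch of how it is computed (the toy masks are illustrative, not data from the study):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    # Intersection over Union for binary segmentation masks:
    # |pred AND target| / |pred OR target|, in [0, 1].
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(inter / union) if union else 1.0  # both empty: perfect match

# Toy 4x4 masks: predicted organ region vs. ground truth
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(f"IoU = {iou(pred, gt):.2f}")  # 3 shared pixels / 4 in the union = 0.75
```

    A mean IoU above 80%, as reported, means the predicted organ outline and the annotated outline overlap in at least four of every five pixels of their union, averaged over the test frames.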

    Artificial Intelligence and Machine Learning in Prostate Cancer Patient Management-Current Trends and Future Perspectives

    Artificial intelligence (AI) is the field of computer science that aims to build smart devices performing tasks that currently require human intelligence. Through machine learning (ML) and deep learning (DL), computers are taught to learn by example, much as human beings do naturally. AI is revolutionizing healthcare. Digital pathology is becoming highly assisted by AI, helping researchers analyze larger data sets and provide faster and more accurate diagnoses of prostate cancer lesions. When applied to diagnostic imaging, AI has shown excellent accuracy in the detection of prostate lesions, as well as in the prediction of patient outcomes in terms of survival and treatment response. The enormous quantity of data coming from the prostate tumor genome requires the fast, reliable, and accurate computing power provided by machine learning algorithms. Radiotherapy is an essential part of the treatment of prostate cancer, and it is often difficult to predict its toxicity for patients. Artificial intelligence could play a future role in predicting how a patient will react to therapy side effects. These technologies could provide doctors with better insights into how to plan radiotherapy treatment. The extension of surgical robots' capabilities toward more autonomous tasks will allow them to use information from the surgical field, recognize issues, and implement the proper actions without the need for human intervention.