
    Computer assisted surgical anatomy mapping: applications in surgical anatomy research, tailor-made surgery and personalized teaching

    This thesis presents a novel anatomy mapping tool named Computer Assisted Surgical Anatomy Mapping (CASAM). It allows researchers to map the complex anatomy of multiple specimens and compare their location and course. Renditions such as safe zones or danger zones can be visualized, summarizing complex anatomy into comprehensive images and making anatomy research more accessible. The web-based version of CASAM is used for personalized teaching, giving individual feedback on incision lines drawn during surgical courses. It also provides a platform for future international collaboration between anatomical wet labs, standardizing anatomy research. The current algorithms used in CASAM are verified and ready to map complex 3D anatomy. Future iterations of CASAM will extend its use to everyday surgical practice and tailor-made surgery.

    Design considerations for delivering e-learning to surgical trainees

    Copyright © 2011, IGI Global. Distributed with permission. Challenges remain in leveraging e-health technologies for continuous medical education/professional development. This study examines the interface design and learning process features related to the use of multimedia in providing effective support for the knowledge and practice of surgical skills. Twenty-one surgical trainees evaluated surgical content in a CD-ROM format based on 14 interface design and 11 learning process features, using a questionnaire adapted from an established tool created to assess educational multimedia. Significant Spearman's correlations were found for seven of the 14 interface design features – 'Navigation', 'Learning demands', 'Videos', 'Media integration', 'Level of material', 'Information presentation' and 'Overall functionality' – explaining ratings of the learning process. The interplay of interface design and learning process features of educational multimedia highlights key design considerations in e-learning. An understanding of these features is relevant to the delivery of surgical training, reflecting the current state of the art in transferring static CD-ROM content to the dynamic web or creating CD/web hybrid models of education.
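    The Spearman rank correlations reported above can be illustrated with a short, self-contained sketch. The ratings below are hypothetical stand-ins (the study's raw questionnaire data is not given in the abstract); the function computes the standard Spearman coefficient as a Pearson correlation on tie-averaged ranks.

    ```python
    def rank(xs):
        """Assign ranks to xs, averaging ranks over tied values."""
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        ranks = [0.0] * len(xs)
        i = 0
        while i < len(xs):
            j = i
            while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
                j += 1  # extend the run of tied values
            avg = (i + j) / 2 + 1  # average rank of the tied run (1-based)
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        return ranks

    def spearman(x, y):
        """Spearman's rho: Pearson correlation of the rank vectors."""
        rx, ry = rank(x), rank(y)
        n = len(x)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Hypothetical Likert ratings (1-5) for one design feature vs. the
    # learning-process rating, from a handful of trainees.
    navigation = [4, 5, 3, 4, 5, 4, 3]
    learning   = [4, 5, 3, 4, 4, 4, 3]
    print(round(spearman(navigation, learning), 2))
    ```

    In practice `scipy.stats.spearmanr` would also return a p-value; the pure-Python version above only shows how the coefficient itself is formed.
    
    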

    Marker-free surgical navigation of rod bending using a stereo neural network and augmented reality in spinal fusion

    The instrumentation of spinal fusion surgeries includes pedicle screw placement and rod implantation. While several surgical navigation approaches have been proposed for pedicle screw placement, less attention has been devoted to the guidance of patient-specific adaptation of the rod implant. We propose a marker-free and intuitive Augmented Reality (AR) approach to navigate the bending process required for rod implantation. A stereo neural network is trained from the stereo video streams of the Microsoft HoloLens in an end-to-end fashion to determine the location of corresponding pedicle screw heads. From the digitized screw head positions, the optimal rod shape is calculated, translated into a set of bending parameters, and used for guiding the surgeon with a novel navigation approach. In the AR-based navigation, the surgeon is guided step-by-step in the use of the surgical tools to achieve an optimal result. We have evaluated the performance of our method on human cadavers against two benchmark methods, namely conventional freehand bending and marker-based bending navigation, in terms of bending time and rebending maneuvers. We achieved an average bending time of 231 s with 0.6 rebending maneuvers per rod, compared to 476 s (3.5 rebendings) and 348 s (1.1 rebendings) obtained by our freehand and marker-based benchmarks, respectively.
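    The abstract does not specify the bending parameter set, but one plausible ingredient is the bend angle at each joint of the polyline through the digitized screw-head positions. The sketch below (a hedged illustration, not the paper's method) computes the angle between consecutive rod segments.

    ```python
    import math

    def bend_angles(points):
        """Angle in degrees between consecutive segments of a 3D polyline.

        points: list of (x, y, z) screw-head positions along the rod.
        Returns one angle per interior point; 0.0 means no bend needed there.
        """
        def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
        def dot(a, b):  return sum(x * y for x, y in zip(a, b))
        def norm(v):    return math.sqrt(dot(v, v))

        angles = []
        for p0, p1, p2 in zip(points, points[1:], points[2:]):
            u, v = sub(p1, p0), sub(p2, p1)
            # Clamp to guard against floating-point drift outside [-1, 1].
            cos_a = max(-1.0, min(1.0, dot(u, v) / (norm(u) * norm(v))))
            angles.append(math.degrees(math.acos(cos_a)))
        return angles

    # Three collinear screw heads need no bend; a right-angle kink needs ~90°.
    print(bend_angles([(0, 0, 0), (1, 0, 0), (2, 0, 0)]))  # [0.0]
    print(bend_angles([(0, 0, 0), (1, 0, 0), (1, 1, 0)]))  # [~90.0]
    ```

    A real bender would additionally need the rotation of each bend plane about the rod axis and the arc length between bends; those are omitted here.
    
    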

    Recent Developments and Future Challenges in Medical Mixed Reality

    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also in assisting diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, from the unbiased clustering results of the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during the two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus will go. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to aid researchers in focusing on the application areas in medical AR that are most needed, as well as providing medical practitioners with the latest technology advancements.
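    The per-year trend step described above can be sketched independently of the LDA model itself: once each paper has a dominant topic assignment, the trend of a topic is its share of that year's papers. The topic labels and paper list below are hypothetical; only the aggregation step is shown.

    ```python
    from collections import Counter, defaultdict

    def topic_trends(papers):
        """papers: iterable of (year, topic_label) assignments.

        Returns {topic: {year: share}}, where share is the fraction of that
        year's papers assigned to the topic.
        """
        per_year = defaultdict(Counter)
        for year, topic in papers:
            per_year[year][topic] += 1
        trends = defaultdict(dict)
        for year, counts in per_year.items():
            total = sum(counts.values())
            for topic, n in counts.items():
                trends[topic][year] = n / total
        return dict(trends)

    # Hypothetical dominant-topic assignments for four papers.
    papers = [(1995, "AR display"), (1995, "navigation"),
              (2015, "navigation"), (2015, "navigation")]
    print(topic_trends(papers)["navigation"])
    ```

    Plotting each topic's yearly share over 1995-2015 gives exactly the kind of trend curves the survey describes.
    
    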

    Mobile Wound Assessment and 3D Modeling from a Single Image

    The prevalence of camera-enabled mobile phones has made mobile wound assessment a viable treatment option for millions of previously difficult-to-reach patients. We have designed a complete mobile wound assessment platform to ameliorate the many challenges related to chronic wound care. Chronic wounds and infections are the most severe, costly and fatal types of wounds, placing them at the center of mobile wound assessment. Wound physicians assess thousands of single-view wound images from all over the world, and it may be difficult to determine the location of the wound on the body, for example, if the image is taken at close range. In our solution, end-users capture an image of the wound by taking a picture with their mobile camera. The wound image is segmented and classified using modern convolutional neural networks, and is stored securely in the cloud for remote tracking. We use an interactive semi-automated approach to allow users to specify the location of the wound on the body. To accomplish this we have created, to the best of our knowledge, the first 3D human surface anatomy labeling system, based on the current NYU and Anatomy Mapper labeling systems. To interactively view wounds in 3D, we present an efficient projective texture mapping algorithm for texturing wounds onto a 3D human anatomy model. In doing so, we demonstrate an approach to 3D wound reconstruction that works even from a single wound image.
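    The core of projective texture mapping is assigning each mesh vertex a UV coordinate by projecting it through the camera that captured the photo. The minimal pinhole-camera sketch below uses hypothetical intrinsics (focal lengths `fx`, `fy`, principal point `cx`, `cy`); the paper's actual pipeline is not detailed in the abstract.

    ```python
    def project_to_uv(point, fx, fy, cx, cy, width, height):
        """Project a camera-space 3D vertex to normalized texture coordinates.

        point: (x, y, z) in the camera frame, z > 0 in front of the lens.
        fx, fy, cx, cy: pinhole intrinsics in pixels.
        width, height: image size in pixels, used to normalize to [0, 1].
        """
        x, y, z = point
        u = fx * x / z + cx  # pixel column
        v = fy * y / z + cy  # pixel row
        return (u / width, v / height)

    # A vertex 1 m in front of a 640x480 camera with 500 px focal length
    # lies on the optical axis, so it maps to the image centre.
    print(project_to_uv((0.0, 0.0, 1.0), 500, 500, 320, 240, 640, 480))
    # -> (0.5, 0.5)
    ```

    In a full implementation, vertices outside [0, 1] or occluded from the camera would be skipped, so only the visible wound region receives the photo texture.
    
    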

    Surgical GPS Proof of Concept for Scoliosis Surgery

    Scoliotic deformities may be addressed with either anterior or posterior approaches for scoliosis correction procedures. While typically quite invasive, the impact of these operations may be reduced through the use of computer-assisted surgery. A combination of physician-designated anatomical landmarks and surgical ontologies allows for real-time intraoperative guidance during computer-assisted surgical interventions. Predetermined landmarks are labeled on an identical patient model, which seeks to encompass vertebrae, intervertebral disks, ligaments, and other soft tissues. The inclusion of this anatomy permits the consideration of hypothetical forces that were previously not well characterized in a patient-specific manner. Updated ontologies then suggest procedural directions throughout the surgical corridor, observing the positioning of both the physician and the anatomical landmarks of interest at the present moment. Merging patient-specific models, physician-designated landmarks, and ontologies to produce real-time recommendations improves the outcome of scoliosis correction through enhanced pre-surgical planning, reduced invasiveness, and shortened recovery time.

    Performance of image guided navigation in laparoscopic liver surgery – A systematic review

    Background: Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. Methods: Medline, Embase and Cochrane databases were searched using free text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis; therefore, results are presented in tabulated and narrative format. Results: Out of 2015 articles, 17 pre-clinical and 33 clinical papers met inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8–15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate. They are, however, not able to reliably localise all relevant anatomical structures. Only one article investigated IGS impact on clinical outcomes. Conclusions: Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS, it is crucial to find a consensus on the assessment of navigation accuracy as a minimum reporting standard.

    Image-Fusion for Biopsy, Intervention, and Surgical Navigation in Urology


    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remain promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    SmartSIM - a virtual reality simulator for laparoscopy training using a generic physics engine

    Virtual reality (VR) training simulators have started playing a vital role in enhancing surgical skills, such as hand-eye coordination in laparoscopy, and in practicing surgical scenarios that cannot be easily created using physical models. We describe a new VR simulator for basic training in laparoscopy, SmartSIM, which has been developed using a generic open-source physics engine called the Simulation Open Framework Architecture (SOFA). This paper describes the systems perspective of SmartSIM, including design details of both hardware and software components, while highlighting the critical design decisions. Some of the distinguishing features of SmartSIM include: (i) an easy-to-fabricate custom-built hardware interface; (ii) use of a generic physics engine to facilitate wider accessibility of our work and flexibility in terms of using various graphical modelling algorithms and their implementations; and (iii) an intelligent and smart evaluation mechanism that facilitates unsupervised and independent learning.