
    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, finding improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, shaped by emerging display technologies including augmented reality (AR), a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. Work described in this thesis sought to explore the benefits of AR in CAOS through: an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on both existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the (2D) user interface of an existing CAOS system onto a virtual AR screen and investigating any resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets with optical trackers and calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty. The reported workflow provided 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR within CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations, and future research opportunities.
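    As a concrete illustration of the headset-to-tracker calibration step mentioned above, the sketch below computes a rigid transformation between the surgical (optical tracker) and holographic coordinate frames from corresponding 3D points, using the standard SVD-based Kabsch/Horn solution. It is a minimal sketch, not the thesis' actual protocol; the point sets, function names, and the assumption of a shared calibration object visible to both systems are illustrative.

```python
# Minimal sketch: paired-point rigid registration between the optical tracker
# frame and the HMD's holographic frame (Kabsch/Horn, SVD-based).
import numpy as np

def rigid_transform(tracker_pts: np.ndarray, holo_pts: np.ndarray):
    """Return R (3x3) and t (3,) mapping tracker coordinates to holographic
    coordinates, given N >= 3 corresponding 3D points (shape (N, 3) each)."""
    c_t = tracker_pts.mean(axis=0)                 # centroid in tracker frame
    c_h = holo_pts.mean(axis=0)                    # centroid in holographic frame
    H = (tracker_pts - c_t).T @ (holo_pts - c_h)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = c_h - R @ c_t
    return R, t

# Usage: map a tracked tool tip into the hologram's frame.
# R, t = rigid_transform(tracker_pts, holo_pts)
# holo_tip = R @ tracker_tip + t
```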

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised during the last two decades. According to the tracked pose of the target bone, the anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance to reach the goal with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and manually calibrated to it, which introduces the risk of additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic optical see-through (OST) head-mounted display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented by a synthetic dataset. Our segmentation network outperforms the state-of-the-art regarding occlusion robustness, device-agnostic behaviour, and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The improved markerless tracking achieves a clinically acceptable error of 0.95 deg and 2.17 mm in a phantom test. OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial to ensure location-coherent surgical guidance. Current calibration methods are either prone to human error or hardly applicable to commercial devices. To this end, we propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods prove superior to comparable state-of-the-art (SOTA) approaches regarding calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verify the usability of the designed OST system in a cadaver study with an experienced orthopaedic surgeon. Our test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
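    To make the pose-registration stage concrete, the following is a hedged sketch of a point-to-point ICP loop with a bounded correspondence distance, standing in for the proposed Bounded ICP (BICP), whose exact constraints are not detailed in the abstract. The distance threshold, function names and inputs (a pre-operative bone model and a segmented depth point cloud, both as Nx3 arrays in millimetres) are assumptions.

```python
# Hedged sketch of the registration stage only: plain point-to-point ICP with a
# correspondence distance bound, as a stand-in for the thesis' BICP workflow.
import numpy as np
from scipy.spatial import cKDTree

def best_fit(src, dst):
    """Least-squares rigid fit mapping src onto dst (both (N, 3))."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, cd - R @ cs

def bounded_icp(model, scene, iters=30, max_corr_dist=5.0):
    """Align the bone model to the segmented depth points, ignoring
    correspondences farther than max_corr_dist (mm) -- the 'bound'."""
    tree = cKDTree(scene)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = model @ R.T + t              # apply current estimate
        d, idx = tree.query(moved)           # nearest scene point per model point
        keep = d < max_corr_dist             # reject outlier correspondences
        if keep.sum() < 3:
            break
        dR, dt = best_fit(moved[keep], scene[idx[keep]])
        R, t = dR @ R, dR @ t + dt           # compose incremental update
    return R, t
```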

    Review on Augmented Reality in Oral and Cranio-Maxillofacial Surgery: Toward 'Surgery-Specific' Head-Up Displays

    In recent years, there has been increasing interest in augmented reality (AR) as applied to the surgical field. We conducted a systematic review of the literature classifying augmented reality applications in oral and cranio-maxillofacial surgery (OCMS) in order to pave the way for future solutions that may ease the adoption of AR guidance in surgical practice. Publications containing the terms 'augmented reality' AND 'maxillofacial surgery', and the terms 'augmented reality' AND 'oral surgery', were searched in the PubMed database. Through the selected studies, we performed a preliminary breakdown according to general aspects, such as surgical subspecialty, year of publication and country of research; then, a more specific breakdown was provided according to technical features of AR-based devices, such as virtual data source, visualization processing mode, tracking mode, registration technique and AR display type. The systematic search identified 30 eligible publications. Most studies (14) were in orthognathic surgery and the minority (2) concerned traumatology, while 6 studies were in oncology and 8 in general OCMS. In 8 of the 30 studies, the AR systems were based on a head-mounted approach using smart glasses or headsets. In most of these cases (7), a video see-through mode was implemented, while only 1 study described an optical see-through mode. In the remaining 22 studies, the AR content was displayed on 2D displays (10), full-parallax 3D displays (6) and projectors (5); in 1 case the AR display type was not specified. AR applications are of increasing interest and adoption in oral and cranio-maxillofacial surgery; however, the quality of the AR experience is the key requisite for a successful result. Widespread use of AR systems in the operating room may be encouraged by the availability of 'surgery-specific' head-mounted devices that should guarantee the accuracy required for surgical tasks and optimal ergonomics.

    Optical See-Through Head-Mounted Displays With Short Focal Distance: Conditions for Mitigating Parallax-Related Registration Error

    Optical see-through (OST) augmented reality head-mounted displays are quickly emerging as a key asset in several application fields, but their ability to profitably assist high-precision activities in the peripersonal space is still sub-optimal due to the calibration procedure required to properly model the user's viewpoint through the see-through display. In this work, we demonstrate the beneficial impact on parallax-related AR misregistration of optical see-through displays whose optical engines collimate the computer-generated image at a depth close to the user's fixation point in the peripersonal space. To estimate the projection parameters of the OST display for a generic viewpoint position, our strategy relies on a dedicated parameterization of the virtual rendering camera based on a calibration routine that exploits photogrammetry techniques. We model the registration error due to the viewpoint shift and validate it on an OST display with a short focal distance. The results of the tests demonstrate that with our strategy the parallax-related registration error is submillimetric, provided that the scene under observation stays within a suitable view volume that falls in a ±10 cm depth range around the focal plane of the display. This finding paves the way for the development of new multi-focal models of OST HMDs specifically conceived to aid high-precision manual tasks in the peripersonal space.
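    The following is a first-order geometric sketch of the parallax effect discussed above, not the paper's full photogrammetric model: virtual content collimated at focal-plane depth z_f, viewed from a viewpoint shifted laterally by e, is misregistered at the object by roughly e * |z_p - z_f| / z_f for a point at true depth z_p. The function name and the example numbers are illustrative.

```python
# First-order approximation of parallax-related lateral misregistration on an
# OST display whose virtual image is collimated at depth z_f.
def parallax_error_mm(eye_shift_mm: float, point_depth_mm: float,
                      focal_depth_mm: float) -> float:
    """Lateral misregistration at the object, in millimetres."""
    return abs(eye_shift_mm) * abs(point_depth_mm - focal_depth_mm) / focal_depth_mm

# Example: 3 mm residual viewpoint error, display collimated at 400 mm, target
# 100 mm beyond the focal plane -> 0.75 mm misregistration, consistent with the
# sub-millimetre claim inside a +/- 10 cm depth band around the focal plane.
print(parallax_error_mm(3.0, 500.0, 400.0))
```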

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions: to design the WEKIT capturing platform and the associated experience capturing API, we use a system engineering methodology that is relevant for different domains, such as aviation, space and medicine, and different professions, such as technicians, astronauts and medical staff. Furthermore, within the methodology, we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow the current one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations pertaining to the sensors that are most relevant for the WEKIT project, taking into consideration the environmental, technical and human factors described in other deliverables. We recommend the Microsoft HoloLens (for augmented reality glasses), the MyndBand and NeuroSky chipset (for EEG), the Microsoft Kinect and Lumo Lift (for body posture tracking), and the Leap Motion, Intel RealSense and Myo armband (for hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the augmented reality glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system should have sufficient storage or transmission capabilities. Finally, we highlight common issues associated with the use of different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, reducing the time required to select the combination of sensors that will be used in the first prototype.
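    As one possible reading of the proposed modular capture design, the sketch below shows a small sensor abstraction in which each modality exposes the same start/stop/read interface and a recorder multiplexes timestamped samples, so new sensors can be added without changing the capture loop. The Sensor, Sample and Recorder types are hypothetical and not part of the WEKIT experience capturing API.

```python
# Illustrative sketch of a modular multi-sensor capture loop (hypothetical types).
from dataclasses import dataclass
from typing import Any, List, Protocol
import time

@dataclass
class Sample:
    sensor: str       # e.g. "eeg", "gaze", "hand_pose"
    timestamp: float  # shared clock so streams can be aligned later
    payload: Any      # raw reading from the sensor

class Sensor(Protocol):
    name: str
    def start(self) -> None: ...
    def stop(self) -> None: ...
    def read(self) -> Any: ...   # return one raw sample

class Recorder:
    """Polls every registered sensor and stores timestamped samples."""
    def __init__(self, sensors: List[Sensor]):
        self.sensors = sensors
        self.log: List[Sample] = []

    def capture_once(self) -> None:
        now = time.time()
        for s in self.sensors:
            self.log.append(Sample(s.name, now, s.read()))
```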

    Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI

    Background: Augmented Reality (AR) represents an innovative technology to improve data visualization and strengthen human perception. Among Human–Machine Interaction (HMI) fields, medicine can benefit most from the adoption of these digital technologies. From this perspective, the literature on AR-based orthopedic surgery techniques was evaluated, focusing on identifying the limitations and challenges of AR-based healthcare applications, to support research and the development of further studies. Methods: Studies published from January 2018 to December 2021 were analyzed after a comprehensive search of the PubMed, Google Scholar, Scopus, IEEE Xplore, Science Direct, and Wiley Online Library databases. To improve the review reporting, the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were used. Results: The authors selected sixty-two articles meeting the inclusion criteria, which were categorized according to the purpose of the study (intraoperative, training, rehabilitation) and to the surgical procedure used. Conclusions: AR has the potential to improve orthopedic training and practice by providing an increasingly human-centered clinical approach. This review can direct further research toward problems related to hardware limitations, the lack of accurate registration and tracking systems, and the absence of security protocols.

    Augmented Reality in Ventriculostomy

    Freehand ventriculostomy is one of the most common neurological procedures, performed when cerebrospinal fluid increases in the ventricular system. This procedure is most often performed in the emergency room or intensive care unit, and thus without a navigation system to help surgeons locate the ventricles. Surgeons instead use anatomical landmarks on the face and skull to determine the best location of the burr hole and the trajectory for moving the catheter through the brain to the ventricles, to drain excess cerebrospinal fluid (CSF) and decrease intracranial pressure (ICP). Freehand ventriculostomy has an associated catheter misplacement rate of over 30%, which can lead to a number of complications including mortality and morbidity. In this dissertation, we propose an augmented-reality pipeline for ventriculostomy using an optical see-through head-mounted device, the Microsoft HoloLens. Our system projects a constructed 3D model of the patient's skull and ventricles directly onto the patient's head to guide the surgeon in locating a target on the ventricle. As part of this pipeline, we implemented an API to send real-time tracking information from the optical tracker to the HoloLens, provided a manual gesture-based registration method, as well as a color-based depth visualization to help users understand the spatial relationship between the patient's ventricular anatomy and the surgical tool. In a study with 15 subjects, we found that the proposed gesture-based registration has an accuracy of 10.75 millimeters and a target-hitting accuracy of 12.28 millimeters. In terms of usability, our developed system received a score of 74.5 on the System Usability Scale (SUS), indicating that the system is easily usable. Our preliminary results suggest that augmented-reality systems can be helpful for neuronavigation procedures that require target localization.
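    To illustrate the kind of tracker-to-headset streaming the pipeline describes, here is a minimal sketch that sends a tool pose from the optical tracker to the HoloLens as a UDP datagram. The message layout, address, port and function names are assumptions, not the dissertation's actual API.

```python
# Hypothetical sketch: stream one timestamped tool pose from the optical
# tracker to the headset over UDP as a small JSON message.
import json
import socket

HOLOLENS_ADDR = ("192.168.1.50", 9000)   # hypothetical headset IP and port

def send_pose(sock: socket.socket, tool_id: str,
              position_mm, quaternion_xyzw) -> None:
    """Send one tool pose (tracker frame) as a UDP datagram."""
    msg = {
        "tool": tool_id,
        "position_mm": list(position_mm),        # x, y, z in the tracker frame
        "rotation_xyzw": list(quaternion_xyzw),  # unit quaternion
    }
    sock.sendto(json.dumps(msg).encode("utf-8"), HOLOLENS_ADDR)

# Usage: forward the tracked tool's pose each time the tracker updates.
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_pose(sock, "catheter_guide", (12.3, -40.1, 250.0), (0.0, 0.0, 0.0, 1.0))
```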