
    Towards System Agnostic Calibration of Optical See-Through Head-Mounted Displays for Augmented Reality

    This dissertation examines the development and progress of spatial calibration procedures for Optical See-Through (OST) Head-Mounted Display (HMD) devices in visual Augmented Reality (AR) applications. Rapid developments in commercial AR systems have created an explosion of OST device options, not only for research and industrial purposes but also for the consumer market. This expansion in hardware availability is matched by a need for intuitive, standardized calibration procedures that are easily completed by novice users and readily applicable across the widest range of hardware options. This demand for robust, uniform calibration schemes is the driving motive behind the original contributions offered in this work. A review of prior surveys and a canonical description of AR and OST display developments are provided before narrowing the contextual scope to the research questions evolving within the calibration domain. Both established and state-of-the-art calibration techniques and their general implementations are explored, along with prior user study assessments and the prevailing evaluation metrics and practices employed therein. The original contributions begin with a user study comparing and contrasting the accuracy and precision of an established manual calibration method against a state-of-the-art semi-automatic technique. This is the first formal evaluation of any non-manual approach, and it provides insight into the usability limitations of present techniques and the complexities of next-generation methods yet to be solved. The second study investigates the viability of a user-centric approach to OST HMD calibration through a novel adaptation of manual calibration to consumer-level hardware. Additional contributions describe the development of a complete demonstration application incorporating user-centric methods, a novel strategy for visualizing both calibration results and registration error from the user's perspective, and a robust, intuitive presentation style for binocular manual calibration. The final study further investigates the accuracy differences observed between user-centric and environment-centric methodologies. The dissertation concludes with a summary of the contribution outcomes and their impact on existing AR systems and research endeavors, as well as a short look ahead at future extensions and paths that continued calibration research should explore.
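    The abstract does not name the manual method evaluated, but classic manual OST calibration (e.g., SPAAM) has the user repeatedly align an on-screen reticle with a tracked world point and then solves a 3x4 projection matrix from the collected 2D-3D pairs by direct linear transformation. A minimal, hypothetical sketch of that solve (function and variable names are illustrative, not from the dissertation):

    ```python
    import numpy as np

    def dlt_projection(world_pts, screen_pts):
        """Solve a 3x4 projection P (up to scale) with screen ~ P @ [world, 1],
        from >= 6 SPAAM-style alignment pairs, via homogeneous DLT."""
        A = []
        for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        # The smallest right singular vector of A minimizes |A p| subject to |p| = 1.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 4)
    ```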

    Towards markerless orthopaedic navigation with intuitive Optical See-through Head-mounted displays

    The potential of image-guided orthopaedic navigation to improve surgical outcomes has been well recognised over the last two decades. Based on the tracked pose of the target bone, anatomical information and preoperative plans are updated and displayed to surgeons, so that they can follow the guidance with higher accuracy, efficiency and reproducibility. Despite their success, current orthopaedic navigation systems have two main limitations: for target tracking, artificial markers have to be drilled into the bone and manually calibrated to it, which risks additional harm to patients and increases operating complexity; for guidance visualisation, surgeons have to shift their attention from the patient to an external 2D monitor, which is disruptive and can be mentally stressful. Motivated by these limitations, this thesis explores the development of an intuitive, compact and reliable navigation system for orthopaedic surgery. To this end, conventional marker-based tracking is replaced by a novel markerless tracking algorithm, and the 2D display is replaced by a 3D holographic Optical See-Through (OST) Head-Mounted Display (HMD) precisely calibrated to the user's perspective. Our markerless tracking, facilitated by a commercial RGBD camera, is achieved through deep learning-based bone segmentation followed by real-time pose registration. For robust segmentation, a new network is designed and efficiently augmented with a synthetic dataset. Our segmentation network outperforms the state of the art in occlusion robustness, device-agnostic behaviour, and target generalisability. For reliable pose registration, a novel Bounded Iterative Closest Point (BICP) workflow is proposed. The improved markerless tracking achieves a clinically acceptable error of 0.95 deg and 2.17 mm in a phantom test. OST displays allow ubiquitous enrichment of the perceived real world with contextually blended virtual aids through semi-transparent glasses. They have been recognised as a suitable visual tool for surgical assistance, since they do not hinder the surgeon's natural eyesight and require no attention shift or perspective conversion. OST calibration is crucial to ensure locationally coherent surgical guidance. Current calibration methods are either prone to human error or hardly applicable to commercial devices. To this end, we propose an offline camera-based calibration method that is highly accurate yet easy to implement in commercial products, and an online alignment-based refinement that is user-centric and robust against user error. The proposed methods are shown to be superior to similar state-of-the-art (SOTA) methods in calibration convenience and display accuracy. Motivated by the ambition to develop the world's first markerless OST navigation system, we integrated the developed markerless tracking and calibration scheme into a complete navigation workflow designed for femur drilling tasks during knee replacement surgery. We verified the usability of the designed OST system with an experienced orthopaedic surgeon in a cadaver study. Our test validates the potential of the proposed markerless navigation system for surgical assistance, although further improvement is required for clinical acceptance.
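    The BICP workflow itself is described only at a high level here; as a rough illustration of the general idea it builds on (point-to-point ICP with a hard bound on correspondence distance), the following numpy/scipy sketch may help. It is an assumption-laden stand-in, not the thesis's algorithm:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.spatial.transform import Rotation

    def icp_step(src, dst, tree, max_dist):
        """One point-to-point ICP iteration; matches beyond max_dist are rejected."""
        d, idx = tree.query(src, distance_upper_bound=max_dist)
        keep = np.isfinite(d)                      # unmatched points get d = inf
        p, q = src[keep], dst[idx[keep]]
        mu_p, mu_q = p.mean(0), q.mean(0)
        # Closed-form rigid fit (Kabsch) on the surviving correspondences.
        R, _ = Rotation.align_vectors(q - mu_q, p - mu_p)
        return R, mu_q - R.apply(mu_p)

    def icp(src, dst, iters=30, max_dist=0.02):
        """Assumes a rough initial alignment so enough matches survive the bound."""
        tree, cur = cKDTree(dst), src.copy()
        for _ in range(iters):
            R, t = icp_step(cur, dst, tree, max_dist)
            cur = R.apply(cur) + t
        return cur
    ```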

    Augmented Reality Assistance for Surgical Interventions using Optical See-Through Head-Mounted Displays

    Augmented Reality (AR) offers an interactive user experience by enhancing the real-world environment with computer-generated visual cues and other perceptual information. It has been applied in many domains, e.g. manufacturing, entertainment and healthcare, through different AR media. An Optical See-Through Head-Mounted Display (OST-HMD) is specialized AR hardware in which computer-generated graphics are overlaid directly onto the user's normal vision via optical combiners. Using an OST-HMD for surgical intervention has many potential perceptual advantages, but as a novel concept, many technical and clinical challenges must be solved for OST-HMD-based AR to be clinically useful, which motivates the work presented in this thesis. On the technical side, we first investigate the display calibration of OST-HMDs, an indispensable procedure for creating an accurate AR overlay. We propose various methods to reduce user-related error, improve the robustness of the calibration, and remodel the calibration as a 3D-3D registration problem. Secondly, we devise methods and develop a hardware prototype to increase the user's visual acuity for both real and virtual content through the OST-HMD, to aid tasks that require high visual acuity, e.g. dental procedures. Thirdly, we investigate the occlusion caused by the OST-HMD hardware, which limits the user's peripheral vision, and propose alternative indicators to alert the user to unattended motion in the environment. From the clinical perspective, we identified many clinical use cases where OST-HMD-based AR is potentially helpful, developed applications integrated with current clinical systems, and conducted proof-of-concept evaluations. We first present a "virtual monitor" for image-guided surgery, which can replace real radiology monitors in the operating room with easier user control and more flexible positioning; we evaluated it in simulated percutaneous spine procedures. Secondly, we developed ARssist, an application for the bedside assistant in robotic surgery; with ARssist, the assistant can see the robotic instruments and endoscope inside the patient's body. We evaluated the assistant's efficiency, safety and ergonomics during two typical tasks, instrument insertion and manipulation: performance for inexperienced users improved significantly with ARssist, and for experienced users the system significantly enhanced their confidence. Lastly, we developed ARAMIS, which uses real-time 3D reconstruction and visualization to aid the laparoscopic surgeon, demonstrating the concept of "X-ray see-through" surgery. Our preliminary evaluation validated the application in a peg transfer task and showed significant improvement in hand-eye coordination. Overall, we have demonstrated that OST-HMD-based AR applications provide ergonomic improvements, e.g. in hand-eye coordination. In challenging situations or for novice users, these ergonomic improvements lead to improved task performance. With continuous effort as a community, optical see-through augmented reality technology will become a useful interventional aid in the near future.
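    One technical thread above remodels OST-HMD calibration as a 3D-3D registration problem, which in its simplest form is the absolute-orientation problem: find the rigid transform that best maps one 3D point set onto another. A minimal closed-form sketch (the standard Kabsch/Umeyama solution, shown for illustration rather than as the thesis's implementation):

    ```python
    import numpy as np

    def rigid_3d_3d(src, dst):
        """Least-squares rigid transform (R, t) such that dst ~ R @ src + t."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)           # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                          # D guards against reflections
        return R, mu_d - R @ mu_s
    ```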

    Calibration Methods for Head-Tracked 3D Displays

    Head-tracked 3D displays can provide a compelling 3D effect, but even small inaccuracies in calibrating the participant's viewpoint to the display can disrupt the 3D illusion. We propose a novel interactive procedure in which a participant easily and accurately calibrates a head-tracked display by visually aligning patterns across a multi-screen display; head-tracker measurements are then calibrated to these known viewpoints. We conducted a user study to evaluate the effectiveness of different visual patterns and display shapes, finding that the spherical display was the easiest shape to align and that the best calibration pattern combined circles and lines. We also performed a quantitative camera-based calibration of a cubic display and found that visual calibration outperformed manual tuning, generating viewpoint calibrations accurate to within a degree. Our work removes the usual, burdensome step of manual calibration when using head-tracked displays and paves the way for wider adoption of this inexpensive and effective 3D display technology.
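    The final step described above, calibrating head-tracker measurements to the visually established viewpoints, can be framed as fitting a small correction map between the tracker's reported positions and the known calibrated positions. A hedged sketch assuming a simple 3D affine model (the abstract does not specify the model; all names are illustrative):

    ```python
    import numpy as np

    def fit_affine(tracked, known):
        """Least-squares 3D affine map: known ~ A @ tracked + b,
        where tracked and known are (N, 3) arrays of corresponding viewpoints."""
        X = np.c_[tracked, np.ones(len(tracked))]      # (N, 4) homogeneous
        M, *_ = np.linalg.lstsq(X, known, rcond=None)  # solves X @ M = known
        return M[:3].T, M[3]                           # A (3x3), b (3,)

    # At runtime a raw tracker reading is corrected as: viewpoint = A @ raw + b
    ```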

    Advanced Calibration of Automotive Augmented Reality Head-Up Displays

    This work presents advanced calibration methods for augmented reality head-up displays (AR-HUDs) in motor vehicles, based on parametric perspective projections and non-parametric distortion models. AR-HUD calibration is important for placing virtual objects correctly in relevant applications such as navigation systems or parking manoeuvres. Although the state of the art offers some useful approaches to this problem, this dissertation aims to develop approaches that are more advanced yet less complicated. As a prerequisite for the calibration, we define several relevant coordinate systems, including the three-dimensional (3D) world, the viewpoint space, the HUD field-of-view (HUD-FOV) space, and the two-dimensional (2D) virtual image space. We describe the projection of images from an AR-HUD projector towards the driver's eyes as a view-dependent pinhole camera model consisting of intrinsic and extrinsic matrices. Under this assumption, we first estimate the intrinsic matrix using the boundaries of the HUD's visible area. Next, we calibrate the extrinsic matrices at various viewpoints within a selected "eyebox", accounting for the driver's changing eye positions. The 3D positions of these viewpoints are tracked by a driver-facing camera. For each individual viewpoint, we obtain a set of 2D-3D correspondences between points in the virtual image space and their matching control points in front of the windscreen. Once these correspondences are available, we compute the extrinsic matrix at the corresponding viewpoint. By comparing the re-projected and real pixel positions of these virtual points, we obtain a 2D distribution of bias vectors, from which we reconstruct warping maps that encode the image distortion. For completeness, we repeat the above extrinsic calibration procedure at all selected viewpoints. With the calibrated extrinsic parameters, we recover the viewpoints in the world coordinate system. Since we simultaneously track these points in the driver-camera space, we further calibrate the transformation from the driver camera to world space using these 3D-3D correspondences. To handle non-participating viewpoints within the eyebox, we obtain their extrinsic parameters and warping maps through non-parametric interpolation. Our combination of parametric and non-parametric models outperforms the state of the art in target complexity and time efficiency while maintaining comparable calibration accuracy. Across all our calibration schemes, the projection errors in the evaluation phase at a distance of 7.5 metres lie within a few millimetres, corresponding to an angular accuracy of roughly 2 arcminutes, which is close to the resolving power of the eye.
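    As a rough illustration of the two parametric steps above (intrinsics from the HUD field-of-view bounds, per-viewpoint extrinsics from 2D-3D correspondences), consider the following Python/OpenCV sketch; the use of cv2.solvePnP and all names are assumptions for illustration, not the dissertation's implementation:

    ```python
    import numpy as np
    import cv2

    def hud_intrinsics(width_px, height_px, fov_x_rad, fov_y_rad):
        """Pinhole intrinsic matrix from the HUD's angular field of view."""
        fx = width_px / (2.0 * np.tan(fov_x_rad / 2.0))
        fy = height_px / (2.0 * np.tan(fov_y_rad / 2.0))
        return np.array([[fx, 0.0, width_px / 2.0],
                         [0.0, fy, height_px / 2.0],
                         [0.0, 0.0, 1.0]])

    def extrinsics_at_viewpoint(K, control_pts_3d, virtual_pts_2d):
        """Per-viewpoint extrinsics [R|t] from correspondences between
        virtual-image pixels and control points ahead of the windscreen."""
        ok, rvec, tvec = cv2.solvePnP(control_pts_3d.astype(np.float32),
                                      virtual_pts_2d.astype(np.float32),
                                      K, distCoeffs=None)
        R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> matrix
        return R, tvec
    ```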

    Augmented reality for computer assisted orthopaedic surgery

    In recent years, computer assistance and robotics have established their presence in operating theatres and found success in orthopaedic procedures. The benefits of computer-assisted orthopaedic surgery (CAOS) have been thoroughly explored in research, which has found improvements in clinical outcomes through increased control and precision over surgical actions. However, human-computer interaction in CAOS remains an evolving field, shaped by emerging display technologies including augmented reality (AR): a fused view of the real environment with virtual, computer-generated holograms. Interactions between clinicians and the patient-specific data generated during CAOS are limited to basic 2D interactions on touchscreen monitors, potentially creating clutter and cognitive challenges in surgery. The work described in this thesis explores the benefits of AR in CAOS through an integration between commercially available AR and CAOS systems; a novel AR-centric surgical workflow supporting various tasks of computer-assisted knee arthroplasty; and three pre-clinical studies exploring the impact of the new AR workflow on existing and newly proposed quantitative and qualitative performance metrics. Early research focused on cloning the 2D user interface of an existing CAOS system onto a virtual AR screen and investigating the resulting impacts on usability and performance. An infrared-based registration system is also presented, describing a protocol for calibrating commercial AR headsets to optical trackers by calculating a spatial transformation between the surgical and holographic coordinate frames. The main contribution of this thesis is a novel AR workflow designed to support computer-assisted patellofemoral arthroplasty, providing 3D in-situ holographic guidance for CAOS tasks including patient registration, pre-operative planning, and assisted cutting. Pre-clinical experimental validation of these contributions on a commercial system (NAVIO®, Smith & Nephew) demonstrates encouraging early-stage results, showing successful deployment of AR in CAOS systems and promising indications that AR can enhance the clinician's interactions in the future. The thesis concludes with a summary of achievements, corresponding limitations and future research opportunities.
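    The infrared registration protocol summarized above ultimately yields one spatial transformation between the surgical (optical-tracker) and holographic (headset) coordinate frames. In homogeneous-matrix terms the chaining is straightforward; a minimal sketch with hypothetical frame names (the actual protocol is the thesis's own):

    ```python
    import numpy as np

    def headset_from_tracker(T_headset_from_marker, T_tracker_from_marker):
        """Chain 4x4 homogeneous poses of a marker that both systems observe."""
        return T_headset_from_marker @ np.linalg.inv(T_tracker_from_marker)

    # A tracked tool can then be rendered in the headset's frame:
    # T_headset_from_tool = headset_from_tracker(...) @ T_tracker_from_tool
    ```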

    Inertial Navigation and Mapping for Autonomous Vehicles


    Intraoperative Planning and Execution of Arbitrary Orthopedic Interventions Using Handheld Robotics and Augmented Reality

    The focus of this work is a generic, intraoperative, image-free planning and execution application for arbitrary orthopedic interventions using a novel handheld robotic device and optical see-through augmented reality (AR) glasses. This medical CAD application enables the surgeon to plan the intervention intraoperatively, directly on the patient's bone. The glasses and all other instruments are accurately calibrated using new techniques. Several example interventions demonstrate the effectiveness of this approach.