4 research outputs found

    Evaluation of the Oculus Rift S tracking system in room scale virtual reality

    In virtual reality applications that require high accuracy, it may be advisable to replace the built-in tracking system of the HMD with a third-party solution. The purpose of this work is to evaluate the accuracy of the built-in tracking system of the Oculus Rift S head-mounted display (HMD) in room-scale environments against a motion capture system. In particular, an experimental evaluation of the Oculus Rift S inside-out tracking technology was carried out and compared against an outside-in tracking method based on the OptiTrack motion capture system. To track the pose of the HMD with the motion capture system, the Oculus Rift S was instrumented with passive retro-reflective markers and calibrated. Experiments were performed on a dataset of multiple paths, ranging from simple motions to more complex trajectories, with each recorded path containing simultaneous changes in both position and orientation of the HMD. Our results indicate that in room-scale environments the average translation error of the Oculus Rift S tracking system is about 1.83 cm and the average rotation error is about 0.77°, roughly two orders of magnitude larger than the errors achievable with a motion capture system.
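    The evaluation above compares HMD poses against a motion-capture reference. As a minimal sketch of how such per-frame errors can be computed, the snippet below assumes the two trajectories are time-synchronized and uses a rigid Kabsch alignment to bring them into a common frame; the function names and the alignment choice are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical sketch: per-frame pose error between an HMD trajectory and a
# motion-capture reference, after a rigid (Kabsch) alignment of the two frames.
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) that maps points P onto points Q (both Nx3)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def pose_errors(hmd_pos, hmd_rot, ref_pos, ref_rot):
    """Translation error (same units as input) and rotation error (deg) per frame."""
    R, t = kabsch(hmd_pos, ref_pos)            # align the HMD frame to the mocap frame
    aligned = hmd_pos @ R.T + t
    trans_err = np.linalg.norm(aligned - ref_pos, axis=1)
    rot_err = []
    for Rh, Rr in zip(hmd_rot, ref_rot):       # 3x3 rotation matrices per frame
        dR = Rr @ (R @ Rh).T                   # residual rotation between the two systems
        angle = np.degrees(np.arccos(np.clip((np.trace(dR) - 1) / 2, -1.0, 1.0)))
        rot_err.append(angle)
    return trans_err, np.array(rot_err)
```

    Averaging these per-frame values over all recorded paths yields summary figures comparable to the translation and rotation errors reported in the abstract.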

    Accuracy assessment for the co-registration between optical and VIVE head-mounted display tracking

    © 2019, CARS. Purpose: We report on the development and accuracy assessment of a hybrid tracking system that integrates optical spatial tracking into a video pass-through head-mounted display. Methods: The hybrid system uses a dual-tracked co-calibration apparatus to provide a co-registration between the origins of an optical dynamic reference frame and the VIVE Pro controller through a point-based registration. This registration provides the location of optically tracked tools with respect to the VIVE controller’s origin and thus the VIVE’s tracking system. Results: The positional accuracy was assessed using a CNC machine to collect a grid of points with 25 samples per location. The positional trueness and precision of the hybrid tracking system were 0.48 mm and 0.23 mm, respectively. The rotational accuracy was assessed by inserting a stylus tracked by all three systems into a hemispherical phantom with cylindrical openings at known angles and collecting 25 samples per cylinder for each system. The rotational trueness and precision of the hybrid tracking system were 0.64° and 0.05°, respectively. The differences in positional and rotational trueness between the optical tracking system (OTS) and the hybrid tracking system were 0.27 mm and 0.04°, respectively. Conclusions: We developed a hybrid tracking system that allows the pose of optically tracked surgical instruments to be known within a first-person HMD visualization system, achieving submillimeter accuracy. This research validated the positional and rotational accuracy of the hybrid tracking system and, subsequently, of the optical tracking and VIVE tracking systems. This work provides a method to determine the position of an optically tracked surgical tool with surgically acceptable accuracy within a low-cost, commercial-grade video pass-through HMD. The hybrid tracking system provides the foundation for the continued development of virtual reality or augmented virtuality surgical navigation systems for training or practicing surgical techniques.
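    As an illustration of the positional assessment described above (25 repeated samples at each known CNC grid location), the following sketch computes trueness as the bias of the per-location sample mean and precision as the RMS scatter about that mean; these ISO 5725-style definitions and the data layout are assumptions made for the example, not necessarily the exact statistics used in the paper.

```python
# Hypothetical sketch: positional trueness and precision from repeated samples
# collected at known grid locations (e.g., 25 samples per CNC position).
import numpy as np

def trueness_precision(samples, ground_truth):
    """samples: {location_id: (N, 3) measured points}; ground_truth: {location_id: (3,) point}."""
    bias, spread = [], []
    for loc, pts in samples.items():
        mean_pt = pts.mean(axis=0)
        # trueness: distance of the mean measurement from the known reference point
        bias.append(np.linalg.norm(mean_pt - ground_truth[loc]))
        # precision: RMS scatter of the repeated samples about their own mean
        spread.append(np.sqrt(np.mean(np.sum((pts - mean_pt) ** 2, axis=1))))
    return np.mean(bias), np.mean(spread)
```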

    Towards Patient Specific Mitral Valve Modelling via Dynamic 3D Transesophageal Echocardiography

    Mitral valve disease is a common pathologic problem occurring increasingly in an aging population, and many patients suffering from mitral valve disease require surgical intervention. Planning an interventional approach from diagnostic imaging alone remains a significant clinical challenge. Although transesophageal echocardiography (TEE) is the primary diagnostic imaging modality, it has limitations in image quality and field of view. Recently, developments have been made towards modelling patient-specific deformable mitral valves from TEE imaging; however, a major barrier to producing accurate valve models is the need to derive the leaflet geometry through segmentation of diagnostic TEE imaging. This work explores the development of volume compounding and automated image analysis to more accurately and quickly capture the relevant valve geometry needed to produce patient-specific mitral valve models. Volume compounding enables multiple ultrasound acquisitions from different orientations and locations to be aligned and blended to form a single volume with improved resolution and field of view. A series of overlapping transgastric views is acquired, registered to the standard en-face image, and combined using a blending function. The resulting compounded ultrasound volumes allow the visualization of a wider range of anatomical features within the left heart, enhancing the capabilities of a standard TEE probe. In this thesis, I first describe a semi-automatic segmentation algorithm based on active contours, designed to produce end-diastolic segmentations suitable for deriving 3D-printable molds. Subsequently, I describe the development of DeepMitral, a fully automatic segmentation pipeline that leverages deep learning to produce highly accurate segmentations with a runtime of less than ten seconds. DeepMitral is the first reported method using convolutional neural networks (CNNs) on 3D TEE for mitral valve segmentation. The results demonstrate highly accurate leaflet segmentations and a reduction in the time and complexity required to produce a patient-specific mitral valve replica. Finally, a real-time annulus tracking system using CNNs to predict the annulus coordinates in the spatial frequency domain was developed. This method facilitates the use of mitral annulus tracking in real-time guidance systems and further simplifies mitral valve modelling through automatic detection of the annulus, a key structure for valve quantification and for reproducing accurate leaflet dynamics.
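    The volume-compounding step described above registers overlapping TEE acquisitions and blends them into a single volume. The sketch below shows one simple blending scheme, assuming the volumes have already been registered and resampled onto a shared grid (with zeros outside each field of view); the uniform field-of-view weighting is an illustrative assumption, as the thesis' actual blending function may differ.

```python
# Hypothetical sketch: blending several co-registered ultrasound volumes into a
# single compounded volume on a shared voxel grid.
import numpy as np

def compound(volumes, weights=None):
    """volumes: list of (Z, Y, X) arrays on a shared grid; weights: optional matching list."""
    volumes = [np.asarray(v, dtype=np.float32) for v in volumes]
    if weights is None:
        # default weighting: 1 inside each volume's field of view, 0 outside
        weights = [(v > 0).astype(np.float32) for v in volumes]
    num = np.zeros_like(volumes[0])
    den = np.zeros_like(volumes[0])
    for v, w in zip(volumes, weights):
        num += w * v
        den += w
    # weighted average where any volume contributes; zero elsewhere
    return np.divide(num, den, out=np.zeros_like(num), where=den > 0)
```

    A distance- or confidence-weighted scheme can be substituted by passing explicit per-volume weights instead of the binary field-of-view masks.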

    Mixed-reality visualization environments to facilitate ultrasound-guided vascular access

    Ultrasound-guided needle insertions at the site of the internal jugular vein (IJV) are routinely performed to access the central venous system. Ultrasound-guided insertions still carry high rates of carotid artery puncture, as clinicians rely on 2D information to perform a 3D procedure. The limitations of 2D ultrasound guidance motivated the research question: “Do 3D ultrasound-based environments improve IJV needle insertion accuracy?” We addressed this by developing advanced surgical navigation systems based on tracked surgical tools and ultrasound with various visualizations. Point-to-line ultrasound calibration enables the use of tracked ultrasound. We automated the fiducial localization required for this calibration method such that fiducials can be localized within 0.25 mm of the manual equivalent. The point-to-line calibration obtained with both manual and automatic localizations produced average normalized distance errors of less than 1.5 mm from point targets. Another calibration method was developed that registers an optical tracking system and the VIVE Pro head-mounted display (HMD) tracking system with sub-millimetre and sub-degree accuracy compared to ground-truth values. This co-calibration enabled the development of an HMD needle navigation system, in which the calibrated ultrasound image and tracked models of the needle, needle trajectory, and probe were visualized in the HMD. In a phantom experiment, 31 clinicians had a 96% success rate using the HMD system compared to 70% for the ultrasound-only approach (p = 0.018). We developed a machine-learning-based vascular reconstruction pipeline that automatically returns accurate 3D reconstructions of the carotid artery and IJV given sequential tracked ultrasound images. This reconstruction pipeline was used to develop a surgical navigation system, where tracked models of the needle, needle trajectory, and the 3D z-buffered vasculature from a phantom were visualized in a common coordinate system on a screen. This system improved insertion accuracy, resulting in a 100% success rate compared to 70% under ultrasound guidance (p = 0.041) across 20 clinicians in the phantom experiment. Overall, accurate calibrations and machine learning algorithms enable the development of advanced 3D ultrasound systems for needle navigation, both in an immersive first-person perspective and on a screen, illustrating that 3D ultrasound environments outperform the 2D ultrasound guidance used clinically.
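    Both navigation systems above rely on tracked, calibrated ultrasound: the point-to-line calibration yields an image-to-probe transform, and the tracked probe pose then places each pixel in the tracker's coordinate system. The snippet below is a minimal sketch of that transform chain; the matrix names and scale parameters are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: mapping an ultrasound pixel into the tracker's coordinate
# system using a tracked, calibrated probe.
import numpy as np

def pixel_to_tracker(u, v, T_tracker_probe, T_probe_image, sx, sy):
    """u, v: pixel indices; sx, sy: pixel spacing in mm; T_*: 4x4 homogeneous transforms."""
    p_image = np.array([u * sx, v * sy, 0.0, 1.0])   # pixel expressed in image (mm) coordinates
    p_tracker = T_tracker_probe @ T_probe_image @ p_image
    return p_tracker[:3]
```

    Composed with the optical-to-VIVE co-calibration described above, the same chain allows the live image plane and tracked tool models to be rendered in one coordinate system, whether in the HMD or on a screen.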