
    Augmented Reality Future Step Visualization for Robust Surgical Telementoring

    Introduction: Surgical telementoring connects expert mentors with trainees performing urgent care in austere environments. However, such environments impose unreliable network quality, with significant latency and low bandwidth. We have developed an augmented reality telementoring system that includes future step visualization of the medical procedure. Pregenerated video instructions of the procedure are dynamically overlaid onto the trainee's view of the operating field when the network connection with a mentor is unreliable.

    Methods: Our future step visualization uses a tablet suspended above the patient's body, through which the trainee views the operating field. Before trainee use, an expert records a “future library” of step-by-step video footage of the operation. Videos are displayed to the trainee as semitransparent graphical overlays. We conducted a study in which participants completed a cricothyroidotomy under telementored guidance, using one of two telementoring conditions: a conventional telestrator or our system with future step visualization. During the operation, the connection between trainee and mentor was bandwidth throttled. Recorded metrics were idle time ratio, recall error, and task performance.

    Results: Participants in the future step visualization condition had a 48% smaller idle time ratio (14.5% vs. 27.9%, P < 0.001), 26% less recall error (119 vs. 161, P = 0.042), and 10% higher task performance scores (rater 1: 90.83 vs. 81.88, P = 0.008; rater 2: 88.54 vs. 79.17, P = 0.042) than participants in the telestrator condition.

    Conclusions: Future step visualization in surgical telementoring is an important fallback mechanism when the trainee/mentor network connection is poor, and it is a key step towards semiautonomous and, eventually, completely mentor-free medical assistance systems.
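    As a rough illustration of the fallback behavior the abstract describes, the sketch below switches between live mentor guidance and a pregenerated future-step overlay based on measured link quality. This is a minimal sketch, not the paper's implementation: the class, the thresholds, and the future_library structure are all assumptions for illustration.

```python
# Illustrative sketch of the degraded-link fallback described in the abstract.
# All names, thresholds, and data structures here are assumptions; the paper
# does not specify its implementation.

MIN_BANDWIDTH_KBPS = 256   # assumed floor for usable live mentor guidance
MAX_LATENCY_MS = 500       # assumed ceiling for responsive telestration

class GuidanceSelector:
    """Switches between live mentor guidance and pregenerated
    future-step overlays based on measured link quality."""

    def __init__(self, future_library):
        # future_library: dict mapping step IDs to prerecorded overlay
        # clips (a hypothetical stand-in for the paper's "future library").
        self.future_library = future_library

    def choose(self, bandwidth_kbps, latency_ms, current_step):
        link_ok = (bandwidth_kbps >= MIN_BANDWIDTH_KBPS
                   and latency_ms <= MAX_LATENCY_MS)
        if link_ok:
            return ("live_mentor", None)
        # Degraded link: fall back to the prerecorded clip for the current
        # step, rendered as a semitransparent overlay on the tablet view.
        return ("future_step_overlay", self.future_library.get(current_step))

# Example: a throttled connection falls back to the step's overlay clip.
selector = GuidanceSelector({"incision": "incision_overlay.mp4"})
print(selector.choose(bandwidth_kbps=96, latency_ms=800,
                      current_step="incision"))
# -> ('future_step_overlay', 'incision_overlay.mp4')
```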

    Removing spatial boundaries in immersive mobile communications

    Despite a worldwide trend towards mobile computing, current telepresence experiences focus on stationary desktop computers, limiting how, when, and where researched solutions can be used. In this thesis I demonstrate that mobile phones are a capable platform for future research, showing the effectiveness of the communication made possible by their inherent portability and ubiquity. I first describe a framework upon which future systems can be built, which allows two distant users to explore one of several panoramic representations of the local environment by reorienting their devices. User experiments demonstrate this framework's ability to induce a sense of presence within the space and between users, and show that capturing this environment live provides no significant benefit over constructing it incrementally. This discovery enables a second application that allows users to explore a three-dimensional representation of their environment. Each user's position is shown as an avatar, with live facial capture to facilitate natural communication. Either user may also see the full environment by occupying the same virtual space. This application is also evaluated and shown to provide efficient communication, offering a novel untethered experience not possible on stationary hardware despite the limited computational power of mobile devices.
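    As a minimal sketch of the orientation-driven panorama exploration described above, the function below maps device yaw and pitch to a crop window in an equirectangular panorama. The thesis does not specify this API; every name and default value here is an assumption for illustration.

```python
# Illustrative sketch of orientation-driven panorama viewing; the thesis
# framework's actual API is not described, so all names here are assumed.

def viewport_origin(yaw_deg, pitch_deg, pano_w, pano_h,
                    view_w=640, view_h=480):
    """Map device yaw/pitch to the top-left corner of a crop window
    in an equirectangular panorama.

    yaw_deg:   heading in degrees, wraps around the full 360-degree panorama
    pitch_deg: tilt in degrees, -90 (straight down) to +90 (straight up)
    """
    # Yaw sweeps the full panorama width; pitch sweeps its height.
    center_x = (yaw_deg % 360.0) / 360.0 * pano_w
    center_y = (0.5 - pitch_deg / 180.0) * pano_h
    # Wrap the window horizontally, clamp it vertically.
    x = int(center_x - view_w / 2) % pano_w
    y = int(min(max(center_y - view_h / 2, 0), pano_h - view_h))
    return x, y

# Example: looking due east (90 degrees), slightly upward, in an
# 8192x4096 panorama.
print(viewport_origin(yaw_deg=90.0, pitch_deg=15.0, pano_w=8192, pano_h=4096))
```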