2 research outputs found

    A randomised trial of observational learning from 2D and 3D models in robotically assisted surgery

    BACKGROUND: Advances in 3D technology mean that both robotic surgical devices and surgical simulators can now incorporate stereoscopic viewing capabilities. While depth information may benefit robotic surgical performance, it is unclear whether 3D viewing also aids skill acquisition when learning from observing others. As observational learning plays a major role in surgical skills training, this study aimed to evaluate whether 3D viewing provides learning benefits in a robotically assisted surgical task. METHODS: 90 medical students were assigned to either (1) 2D or (2) 3D observation of a consultant surgeon performing a training task on the daVinci S robotic system, or (3) a no-observation control, in a randomised parallel design. Subsequent performance and instrument movement metrics were assessed immediately following observation and at one-week retention. RESULTS: Both the 2D and 3D groups outperformed the no-observation controls following the observation intervention (ps < 0.05), but there was no difference between the 2D and 3D groups at any timepoint. There was also no difference in movement parameters between groups. CONCLUSIONS: While 3D viewing systems may have beneficial effects on surgical performance, these results suggest that depth information has limited utility during observational learning of surgical skills in novices. The task constraints and end goals may provide more important information for learning than the relative motion of surgical instruments in 3D space. This research was supported by an Intuitive Surgical grant awarded to Dr G Buckingham.
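    The abstract does not report which statistical test produced the group differences (ps < 0.05). Purely as an illustration of the kind of three-arm comparison described, the sketch below runs an omnibus test across the 2D, 3D, and no-observation groups followed by a 2D-vs-3D comparison; the group labels, sample sizes, and scores are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical task-performance scores for the three arms
# (2D observation, 3D observation, no-observation control);
# means, spreads, and n=30 per group are illustrative assumptions.
scores_2d = rng.normal(75, 8, 30)
scores_3d = rng.normal(74, 8, 30)
scores_ctl = rng.normal(65, 8, 30)

# Omnibus comparison across the three groups, then a 2D-vs-3D follow-up
f_stat, p_omnibus = stats.f_oneway(scores_2d, scores_3d, scores_ctl)
t_stat, p_2d_vs_3d = stats.ttest_ind(scores_2d, scores_3d)
print(f"ANOVA: F={f_stat:.2f}, p={p_omnibus:.3f}")
print(f"2D vs 3D: t={t_stat:.2f}, p={p_2d_vs_3d:.3f}")
```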

    Color and depth sensing sensor technologies for robotics and machine vision

    Robust scanning technologies that offer a 3D view of the world in real time are critical for situational awareness and safe operation of robotic and autonomous systems. Color and depth sensing technologies play an important role in localization and navigation in unstructured environments. Most often, the sensor technology must be able to deal with factors such as low-texture objects or objects that are dynamic, soft, and deformable. Adding intelligence to the imaging system has great potential to simplify some of these problems. This chapter discusses the important role of scanning technologies in the development of trusted autonomous systems for robotic and machine vision, with an outlook on areas that need further research and development. We start with a review of sensor technologies for specific environments, including autonomous systems, mining, medical, social, aerial, and marine robotics. Special focus is placed on the selection of a particular scanning technology to deal with constrained or unconstrained environments. Fundamentals, advantages, and limitations of color and depth (RGB-D) technologies such as stereo vision, time of flight, structured light, and shape from shadow are discussed in detail. Strategies to deal with lighting, color constancy, occlusions, scattering, haze, and multiple reflections are discussed. The chapter also introduces the latest developments in this area by discussing the potential of emerging technologies such as dynamic vision and focus-induced photoluminescence.
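    As a concrete instance of the RGB-D principles surveyed in the chapter, stereo vision recovers metric depth from the disparity between matched pixels in two calibrated cameras via Z = f * B / d. The minimal Python sketch below applies that relation; the focal length, baseline, and disparity values are illustrative assumptions, not figures from the chapter.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px=700.0, baseline_m=0.12):
    """Convert a stereo disparity map (pixels) to depth (metres): Z = f * B / d.

    focal_px and baseline_m are assumed example calibration values.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)   # unmatched pixels -> no depth
    valid = disparity > 0                      # zero disparity means no match
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Example: a 2x2 disparity map; larger disparity -> closer surface.
print(depth_from_disparity([[35.0, 7.0], [0.0, 14.0]]))
```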