2,014 research outputs found

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which was developed several decades earlier, and it complements VR in many ways. Because the user can see both real and virtual objects simultaneously, AR is far more intuitive, though it is not free of human-factors considerations and other restrictions. AR applications also demand less time and effort, since the entire virtual scene and environment need not be constructed. In this book, several new and emerging application areas of AR are presented and divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security and surveillance. The second section deals with AR in medical, biological, and human bodies. The third and final section contains a number of new and useful applications in daily living and learning

    Towards Intelligent Telerobotics: Visualization and Control of Remote Robot

    Human-machine cooperative robotics, or co-robotics, has been recognized as the next generation of robotics. In contrast to current systems that use limited-reasoning strategies or address problems in narrow contexts, new co-robot systems will be characterized by their flexibility, resourcefulness, varied modeling or reasoning approaches, and use of real-world data in real time, demonstrating a level of intelligence and adaptability seen in humans and animals. My research focuses on two sub-fields of co-robotics: teleoperation and telepresence. We first explore ways of teleoperating using mixed reality techniques. I proposed a new type of display, the hybrid-reality display (HRD) system, which uses a commodity projection device to project captured video frames onto a 3D replica of the actual target surface. It provides a direct alignment between the frame of reference of the human subject and that of the displayed image. The advantage of this approach is that users need no wearable device, making the system minimally intrusive and accommodating the users' eyes during focusing. The field of view is also significantly increased. From a user-centered design standpoint, the HRD is motivated by teleoperation accidents and incidents, and by user research in military reconnaissance and similar domains. Teleoperation in these environments is compromised by the keyhole effect, which results from the limited field of view. The technical contribution of the proposed HRD system is the multi-system calibration, which mainly involves a motion sensor, projector, cameras and a robotic arm. Given the purpose of the system, calibration accuracy must be kept within the millimeter level. Follow-up research on the HRD focused on high-accuracy 3D reconstruction of the replica via commodity devices for better alignment of video frames. Conventional 3D scanners either lack depth resolution or are very expensive.
We proposed a structured-light-scanning-based 3D sensing system with accuracy within 1 millimeter that is robust to global illumination and surface reflection. Extensive user studies demonstrate the performance of our proposed algorithm. To compensate for the lack of synchronization between the local and remote stations, caused by latency introduced during data sensing and communication, a 1-step-ahead predictive control algorithm is presented. The latency between human control and robot movement can be formulated as a linear equation group with a smooth coefficient ranging from 0 to 1. This predictive control algorithm can be further formulated by optimizing a cost function. We then explore the aspect of telepresence. Many hardware designs have been developed to allow a camera to be placed optically directly behind the screen. The purpose of such setups is to enable two-way video teleconferencing that maintains eye contact. However, the image from the see-through camera usually exhibits a number of imaging artifacts such as a low signal-to-noise ratio, incorrect color balance, and loss of detail. We therefore develop a novel image enhancement framework that utilizes an auxiliary color-plus-depth camera mounted on the side of the screen. By fusing the information from both cameras, we are able to significantly improve the quality of the see-through image. Experimental results demonstrate that our fusion method compares favorably against traditional image enhancement/warping methods that use only a single image
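    The 1-step-ahead predictive scheme above is only described at a high level. A minimal sketch of one plausible reading, a linear blend of the newest command with the previous prediction using a smooth coefficient in [0, 1] (the function name and default value are hypothetical, not from the dissertation):

```python
def one_step_ahead_predict(commands, alpha=0.6):
    """Predict the robot's next command from the operator's command history.

    Hypothetical sketch of a 1-step-ahead predictor: each prediction is a
    linear blend of the newest observed command and the previous prediction,
    with a smooth coefficient alpha in [0, 1], as the abstract describes.
    """
    prediction = commands[0]
    for command in commands[1:]:
        # alpha = 1 trusts only the latest command; alpha = 0 never updates
        prediction = alpha * command + (1 - alpha) * prediction
    return prediction
```

    In a real system the coefficient would be chosen (or optimized via the cost function mentioned above) to trade responsiveness against smoothness under the measured link latency.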

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotic research, offering insights into the recent state of the art and prospects for improvement

    Advanced Augmented Reality Telestration Techniques With Applications In Laparoscopic And Robotic Surgery

    The art of teaching laparoscopic or robotic surgery currently relies primarily on an expert surgeon tutoring a student during live surgery. During these operations, surgeons view the inside of the body through a manipulatable camera. Due to the viewpoint translation and narrow field of view, these techniques have a substantial learning curve before a student gains the mastery necessary to operate safely. In addition to moving and rotating the camera, the surgeon must also manipulate tools inserted into the body. These tools are only visible on camera, and pass through a pivot point on the body that, in non-robotic cases, reverses their directions of motion compared to the surgeon's hands. These difficulties spurred this dissertation. The main hypothesis of this research is that advanced augmented reality techniques can improve telementoring between expert surgeons and surgical students. In addition, they can provide a better method of communication between surgeon and camera operator. This research has two specific aims: (1) create a head-mounted direction-of-focus indicator to provide non-verbal assistance for camera operation. A system was created to track where the surgeon is looking and provide augmented reality cues informing the camera operator of the surgeon's camera-movement wishes. (2) Create a hardware/software environment for tracking a camera and an object, allowing the display of registered pre-operative imaging that can be manipulated during the procedure. For Aim 1, a set of augmented reality cues describing the translation, zoom, and roll of a laparoscopic camera was developed. An experiment was run to determine whether augmented reality cues or verbal cues were faster and more efficient at acquiring on-camera targets at a specific location, zoom level, and roll angle.
The study found that in all instances, the augmented reality cues resulted in faster completion of the task with better economy of movement than with the verbal cues. A large number of environmentally registered augmented reality telestration and visualization features were added to a hardware / software platform for Aim 2. The implemented manipulation of pre-operative imaging and the ability to provide different types of registered annotation in the working environment has provided numerous examples of improved utility in telementoring systems. The results of this work provide potential improvements to the utilization of pre-operative imaging in the operating room, to the effectiveness of telementoring as a surgical teaching tool, and to the effective communication between the surgeon and the camera operator in laparoscopic surgery
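    The translation and roll cues for Aim 1 can be sketched as a simple rule: compare the gaze target's image position with the image center, and the current camera roll with the desired roll. The function, thresholds, and cue names below are illustrative assumptions, not the dissertation's implementation:

```python
def camera_cues(target_px, image_center, current_roll, desired_roll, tol=10):
    """Derive textual pan/tilt/roll cues for a camera operator (illustrative).

    target_px / image_center: (x, y) pixel coordinates of the surgeon's
    gaze target and the image center; roll angles in degrees.
    """
    dx = target_px[0] - image_center[0]
    dy = target_px[1] - image_center[1]
    cues = []
    if abs(dx) > tol:  # horizontal offset exceeds tolerance
        cues.append("pan right" if dx > 0 else "pan left")
    if abs(dy) > tol:  # image y grows downward
        cues.append("tilt down" if dy > 0 else "tilt up")
    droll = desired_roll - current_roll
    if abs(droll) > 1.0:
        cues.append("roll cw" if droll > 0 else "roll ccw")
    return cues or ["on target"]
```

    An AR overlay would render these as arrows or gauges rather than text, but the underlying comparison of desired versus current camera state is the same.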

    Multimodality with Eye tracking and Haptics: A New Horizon for Serious Games?

    The goal of this review is to illustrate the emerging use of multimodal virtual reality that can benefit learning-based games. The review begins with an introduction to multimodal virtual reality in serious games and we provide a brief discussion of why cognitive processes involved in learning and training are enhanced under immersive virtual environments. We initially outline studies that have used eye tracking and haptic feedback independently in serious games, and then review some innovative applications that have already combined eye tracking and haptic devices in order to provide applicable multimodal frameworks for learning-based games. Finally, some general conclusions are identified and clarified in order to advance current understanding in multimodal serious game production as well as exploring possible areas for new applications

    EnViSoRS: Enhanced Vision System for Robotic Surgery. A User-Defined Safety Volume Tracking to Minimize the Risk of Intraoperative Bleeding

    In abdominal surgery, intra-operative bleeding is one of the major complications that affect the outcome of minimally invasive surgical procedures. One of its causes is accidental damage to arteries or veins, and one of the possible risk factors is the surgeon's skill. This paper presents the development and application of an Enhanced Vision System for Robotic Surgery (EnViSoRS), based on user-defined Safety Volume (SV) tracking, to minimise the risk of intra-operative bleeding. It aims at enhancing the surgeon's capabilities by providing Augmented Reality (AR) assistance towards the protection of vessels from injury during the execution of surgical procedures with a robot. The core of the framework consists of: (i) a hybrid tracking algorithm (LT-SAT tracker) that robustly follows a user-defined Safety Area (SA) over the long term; (ii) a dense soft-tissue 3D reconstruction algorithm, necessary for the computation of the SV; (iii) AR features for visualisation of the SV to be protected and of a graphical gauge indicating the current distance between the instruments and the reconstructed surface. EnViSoRS was integrated with a commercial robotic surgery system (the dVRK system) for testing and validation. The experiments aimed at demonstrating the accuracy, robustness, performance and usability of EnViSoRS during the execution of a simulated surgical task on a liver phantom. Results show an overall accuracy in accordance with surgical requirements (< 5 mm), and high robustness in the computation of the SV in terms of precision and recall of its identification. The optimisation strategy implemented to speed up the computation is also described and evaluated, providing an AR feature update rate of up to 4 fps without impacting the real-time visualisation of the stereo endoscopic video.
Finally, qualitative results regarding system usability indicate that the proposed system integrates well with the commercial surgical robot and indeed has the potential to offer useful assistance during real surgeries. Penza, Veronica; De Momi, Elena; Enayati, Nima; Chupin, Thibaud; Ortiz, Jesús; Mattos, Leonardo S.
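    The graphical distance gauge in feature (iii) reduces to a point-to-surface distance query. A minimal NumPy sketch, under the assumption that the reconstructed surface is represented as a point cloud (the function name is hypothetical):

```python
import numpy as np

def gauge_distance(tip, surface_points):
    """Minimum Euclidean distance from an instrument tip to the surface.

    tip: (3,) instrument tip position; surface_points: (N, 3) dense
    reconstruction of the protected surface. A real-time system would use
    a spatial index (e.g. a k-d tree) instead of this brute-force scan.
    """
    distances = np.linalg.norm(np.asarray(surface_points) - np.asarray(tip), axis=1)
    return float(distances.min())
```

    The returned distance would drive the on-screen gauge, warning the surgeon as the instrument approaches the protected Safety Volume.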

    A Body-and-Mind-Centric Approach to Wearable Personal Assistants


    Evaluating Human Performance for Image-Guided Surgical Tasks

    The following work focuses on the objective evaluation of human performance for two different interventional tasks: targeted prostate biopsy using a tracked biopsy device, and external ventricular drain placement using a mobile augmented reality device for visualization and guidance. In both tasks, a human performance methodology was utilized that respects the trade-off between speed and accuracy for users conducting a series of targeting tasks with each device. This work outlines the development and application of performance evaluation methods using these devices, as well as details of the implementation of the mobile AR application. It was determined that the Fitts' Law methodology can be applied to evaluate tasks performed in each surgical scenario, and it was sensitive enough to differentiate performance across a range spanning experienced and novice users. This methodology is valuable for the future development of training modules for these and other medical devices, and can provide details about the underlying characteristics of the devices and how they can be optimized with respect to human performance
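    Fitts' Law is the standard model for the speed-accuracy trade-off mentioned above: movement time grows linearly with an index of difficulty that depends on target distance and width. A small sketch using the Shannon formulation (the coefficients a and b are hypothetical and would be fitted empirically per device and user group):

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of the Fitts' law index of difficulty, in bits
    return math.log2(distance / width + 1)

def movement_time(distance, width, a, b):
    # MT = a + b * ID; a (seconds) and b (seconds per bit) come from
    # regressing measured completion times against computed IDs
    return a + b * index_of_difficulty(distance, width)
```

    Comparing fitted b values (or throughput, ID/MT) across user groups is one way such a methodology can separate experienced from novice users.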

    ISMCR 1994: Topical Workshop on Virtual Reality. Proceedings of the Fourth International Symposium on Measurement and Control in Robotics

    This symposium on measurement and control in robotics included sessions on: (1) rendering, including tactile perception and applied virtual reality; (2) applications in simulated medical procedures and telerobotics; (3) tracking sensors in a virtual environment; (4) displays for virtual reality applications; (5) sensory feedback including a virtual environment application with partial gravity simulation; and (6) applications in education, entertainment, technical writing, and animation

    Losing Touch: An embodiment perspective on coordination in robotic surgery

    Get PDF
    Because new technologies allow new performances, mediations, representations, and information flows, they are often associated with changes in how coordination is achieved. Current coordination research emphasizes its situated and emergent nature, but seldom accounts for the role of embodied action. Building on a 25-month field study of the da Vinci robot, an endoscopic system for minimally invasive surgery, we bring to the fore the role of the body in how coordination was reconfigured in response to a change in technological mediation. Using the robot, surgeons experienced both an augmentation and a reduction of what they can do with their bodies in terms of haptic, visual, and auditory perception and manipulative dexterity. These bodily augmentations and reductions affected joint task performance and led to coordinative adaptations (e.g., spatial relocating, redistributing tasks, accommodating novel perceptual dependencies, and mounting novel responses) that, over time, resulted in reconfiguration of roles, including expanded occupational knowledge, emergence of new specializations, and shifts in status and boundaries. By emphasizing the importance of the body in coordination, this paper suggests that an embodiment perspective is important for explaining how and why coordination evolves following the introduction of a new technology