168 research outputs found

    An Overview of Self-Adaptive Technologies Within Virtual Reality Training

    Get PDF
    This overview presents the current state of the art of self-adaptive technologies within virtual reality (VR) training. VR training and assessment are increasingly used in five key areas: medical, industrial and commercial training, serious games, rehabilitation, and remote training such as Massive Open Online Courses (MOOCs). Adaptation can be applied to five core technologies of VR: haptic devices, stereo graphics, adaptive content, assessment, and autonomous agents. Automation of VR training can contribute to automation of actual procedures, including remote and robot-assisted surgery, which reduces injury and improves the accuracy of the procedure. Automated haptic interaction can enable tele-presence and tactile interaction with virtual artefacts in either remote or simulated environments. Automation, machine learning, and data-driven features play an important role in providing trainee-specific, individually adaptive training content. Data from trainee assessment can serve as input to autonomous systems for customised training and automated difficulty levels matched to individual requirements. Self-adaptive technology has previously been developed within individual technologies of VR training. One conclusion of this research is that no enhanced, portable framework yet exists; it would be beneficial to combine the automation of these core technologies into a reusable automation framework for VR training.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Get PDF
    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.

    Design and Evaluation of a Contact-Free Interface for Minimally Invasive Robotics Assisted Surgery

    Get PDF
    Robotic-assisted minimally invasive surgery (RAMIS) is becoming increasingly common for many surgical procedures. These minimally invasive techniques offer the benefits of reduced patient recovery time, mortality and scarring compared to traditional open surgery. Teleoperated procedures have the added advantages of increased visualization and enhanced accuracy for the surgeon, through tremor filtering and the scaling down of hand motions. There are, however, still limitations in these techniques preventing the widespread growth of the technology. In RAMIS, the surgeon is limited in their movement by the operating console or master device, and the cost of robotic surgery is often too high to justify for many procedures. Sterility issues arise as well, as the surgeon must be in contact with the master device, preventing a smooth transition between traditional and robotic modes of surgery. This thesis outlines the design and analysis of a novel method of interaction with the da Vinci Surgical Robot. Using the da Vinci Research Kit (DVRK), an open-source research platform for the da Vinci robot, an interface was developed for controlling the robotic arms with the Leap Motion Controller. This small device uses infrared LEDs and two cameras to detect the 3D positions of the hand and fingers. This data from the hands is mapped to the da Vinci surgical tools in real time, providing the surgeon with an intuitive method of controlling the instruments. An analysis of the tracking workspace is provided to offer a solution to occlusion issues. Multiple sensors are fused together in order to increase the range of trackable motion over a single sensor. Additional work involves replacing the current viewing screen with a virtual reality (VR) headset (Oculus Rift), to provide the surgeon with a stereoscopic 3D view of the surgical site without the need for a large monitor.
The headset also provides the user with a more intuitive and natural method of positioning the camera during surgery, using the natural motions of the head. The large master console of the da Vinci system has been replaced with an inexpensive vision-based tracking system and a VR headset, allowing the surgeon to operate the da Vinci Surgical Robot with more natural movements. A preliminary evaluation of the system is provided, with recommendations for future work.
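The hand-to-tool mapping described above — raw hand positions filtered to suppress tremor, then scaled down before being applied to the instrument — can be sketched as follows. This is a hypothetical illustration, not the actual DVRK or Leap Motion API; the class name, scale factor and smoothing coefficient are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MotionMapper:
    """Illustrative hand-to-tool motion mapping (not the DVRK API)."""
    scale: float = 0.3    # motion scaling: 10 mm of hand travel -> 3 mm of tool travel
    alpha: float = 0.2    # low-pass smoothing coefficient for tremor filtering
    _filtered: tuple = None

    def map(self, hand_pos):
        """Map a raw (x, y, z) hand position to a scaled tool displacement."""
        if self._filtered is None:
            self._filtered = hand_pos
        # exponential moving average suppresses high-frequency tremor
        self._filtered = tuple(
            self.alpha * h + (1 - self.alpha) * f
            for h, f in zip(hand_pos, self._filtered)
        )
        return tuple(self.scale * c for c in self._filtered)

mapper = MotionMapper()
tool = mapper.map((10.0, 0.0, 0.0))   # first sample initialises the filter
```

In a real teleoperation loop this mapping would run at the tracker's frame rate, with a clutch mechanism to re-centre the workspace when the hand leaves the trackable volume.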

    Automatic Performance Level Assessment in Minimally Invasive Surgery Using Coordinated Sensors and Composite Metrics

    Get PDF
    Skills assessment in Minimally Invasive Surgery (MIS) has long been a challenge for training centers. The emerging maturity of camera-based systems has the potential to transform problems into solutions in many different areas, including MIS. The current evaluation techniques for assessing the performance of surgeons and trainees are direct observation, global assessments, and checklists. These techniques are mostly subjective and can, therefore, involve a margin of bias. The current automated approaches are all implemented using mechanical or electromagnetic sensors, which suffer limitations and influence the surgeon's motion. Thus, evaluating the skills of MIS surgeons and trainees objectively has become an increasing concern. In this work, we integrate and coordinate multiple camera sensors to assess the performance of MIS trainees and surgeons. This study aims to develop an objective, data-driven assessment that takes advantage of multiple coordinated sensors. The technical framework for the study is a synchronized network of sensors that captures large sets of measures from the training environment. The measures are then processed to produce a reliable set of individual and composite metrics, coordinated in time, that suggest patterns of skill development. The sensors are non-invasive, real-time, and coordinated over multiple cues, such as eye movement, external views of the body and instruments, and internal views of the operative field. The platform is validated by a case study of 17 subjects and 70 sessions. The results show that the platform output is highly accurate and reliable in detecting patterns of skill development and predicting the skill level of the trainees.

    Real-Time Augmented Reality for Robotic-Assisted Surgery

    Get PDF

    Serious Games and Mixed Reality Applications for Healthcare

    Get PDF
    Virtual reality (VR) and augmented reality (AR) have long histories in the healthcare sector, offering the opportunity to develop a wide range of tools and applications aimed at improving the quality of care and efficiency of services for professionals and patients alike. The best-known examples of VR–AR applications in the healthcare domain include surgical planning and medical training by means of simulation technologies. Techniques used in surgical simulation have also been applied to cognitive and motor rehabilitation, pain management, and patient and professional education. Serious games are games whose main goal is not entertainment but a serious purpose, ranging from the acquisition of knowledge to interactive training. These games are attracting growing attention in healthcare because of several benefits: motivation, interactivity, adaptation to the user's competence level, flexibility in time, repeatability, and continuous feedback. Recently, healthcare has also become one of the biggest adopters of mixed reality (MR), which merges real and virtual content to generate novel environments where physical and digital objects not only coexist, but are also capable of interacting with each other in real time, encompassing both VR and AR applications. This Special Issue aims to gather and publish original scientific contributions exploring opportunities and addressing challenges in both the theoretical and applied aspects of VR–AR and MR applications in healthcare.

    Recent Developments and Future Challenges in Medical Mixed Reality

    Get PDF
    As AR technology matures, we have seen many applications emerge in entertainment, education and training. However, the use of AR is not yet common in medical practice, despite the great potential of this technology to help not only learning and training in medicine, but also in assisting diagnosis and surgical guidance. In this paper, we present recent trends in the use of AR across all medical specialties and identify challenges that must be overcome to narrow the gap between academic research and practical use of AR in medicine. A database of 1403 relevant research papers published over the last two decades has been reviewed using a novel research trend analysis method based on a text mining algorithm. We semantically identified 10 topics, covering a variety of technologies and applications, based on the unbiased clustering results from the Latent Dirichlet Allocation (LDA) model, and analysed the trend of each topic from 1995 to 2015. The statistical results reveal a taxonomy that best describes the development of medical AR research during the two decades, and the trend analysis provides a higher-level view of how the taxonomy has changed and where the focus is going. Finally, based on these results, we provide an insightful discussion of the current limitations, challenges and future directions in the field. Our objective is to aid researchers in focusing on the application areas in medical AR that are most needed, as well as providing medical practitioners with the latest technology advancements.
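The trend analysis described above fits an LDA model over the paper corpus (typically with a library such as scikit-learn or gensim) and then tracks each topic's share of publications per year. The aggregation step can be sketched as below; the function name and toy topic labels are illustrative, not from the paper.

```python
from collections import Counter, defaultdict

def topic_trends(papers):
    """papers: iterable of (year, topic_label) pairs, one per paper.
    Returns {year: {topic: share_of_that_year's_papers}}."""
    by_year = defaultdict(Counter)
    for year, topic in papers:
        by_year[year][topic] += 1
    return {
        year: {t: n / sum(counts.values()) for t, n in counts.items()}
        for year, counts in sorted(by_year.items())
    }

# Toy corpus: (year, dominant LDA topic assigned to the paper)
corpus = [(1995, "training"), (1995, "guidance"),
          (2015, "guidance"), (2015, "guidance")]
trends = topic_trends(corpus)
```

Plotting each topic's share against year gives the rising and falling curves that reveal where the field's focus has moved.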

    Augmented reality in medical education: a systematic review

    Get PDF
    Introduction: The field of augmented reality (AR) is rapidly growing, with many new potential applications in medical education. This systematic review investigated the current state of augmented reality applications (ARAs) and developed an analytical model to guide future research in assessing ARAs as teaching tools in medical education. Methods: A literature search was conducted using PubMed, Embase, Web of Science, Cochrane Library, and Google Scholar. This review followed PRISMA guidelines and included publications from January 1, 2000 to June 18, 2018. Inclusion criteria were experimental studies evaluating ARAs implemented in healthcare education and published in English. Our review evaluated study quality and determined whether studies assessed ARA validity, using criteria established by the GRADE Working Group and Gallagher et al., respectively. These findings were used to formulate an analytical model to assess the readiness of ARAs for implementation in medical education. Results: We identified 100,807 articles in the initial literature search; 36 met the inclusion criteria for final review and were grouped into three categories: Surgery (23), Anatomy (9), and Other (4). The overall quality of the studies was poor, and no ARA was tested for all five stages of validity. Our analytical model evaluates the importance of research quality, application content, outcomes, and feasibility of an ARA to gauge its readiness for implementation. Conclusion: While AR technology is growing at a rapid rate, the current quality and breadth of AR research in medical training are insufficient to recommend its adoption into educational curricula. We hope our analytical model will help standardize AR assessment methods and define the role of AR technology in medical education.

    Evaluation of contactless human–machine interface for robotic surgical training

    Get PDF
    Purpose: Teleoperated robotic systems are nowadays routinely used for specific interventions. The benefits of robotic training courses have already been acknowledged by the community, since manipulation of such systems requires dedicated training. However, robotic surgical simulators remain expensive and require a dedicated human–machine interface. Methods: We present a low-cost contactless optical sensor, the Leap Motion, as a novel control device to manipulate the RAVEN-II robot. We compare peg manipulations during a training task against a contact-based device, the electro-mechanical Sigma.7. We perform two complementary analyses to quantitatively assess the performance of each control method: a metric-based comparison and a novel unsupervised spatiotemporal trajectory clustering. Results: We show that contactless control does not offer manipulability as good as the contact-based device. While part of the metric-based evaluation shows the mechanical control outperforming the contactless one, the unsupervised spatiotemporal trajectory clustering of the surgical tool motions highlights specific signatures induced by each human–machine interface. Conclusions: Even if the current implementation of contactless control does not surpass manipulation with a high-standard mechanical interface, we demonstrate that complete control of the surgical instruments using the optical sensor is feasible. The proposed method allows fine tracking of the trainee's hands in order to execute dexterous laparoscopic training gestures. This work is promising for the development of future human–machine interfaces dedicated to robotic surgical training systems.
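Metric-based comparisons of tool trajectories, as described above, typically rely on kinematic measures computed from the recorded tool-tip paths. The paper's exact metrics are not listed here, so the two below — path length and economy of motion — are common illustrative examples, not its confirmed metric set.

```python
import math

def path_length(traj):
    """Total Euclidean path length of a tool-tip trajectory [(x, y, z), ...]."""
    return sum(math.dist(a, b) for a, b in zip(traj, traj[1:]))

def economy_of_motion(traj):
    """Ratio of straight-line displacement to distance travelled (1.0 = ideal)."""
    total = path_length(traj)
    direct = math.dist(traj[0], traj[-1])
    return direct / total if total else 1.0

straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]   # efficient, direct motion
detour   = [(0, 0, 0), (1, 1, 0), (2, 0, 0)]   # same endpoints, wasted motion
```

Lower economy-of-motion scores for one interface across repeated trials would be the kind of signature the metric-based evaluation surfaces; the trajectory clustering then groups whole motion patterns rather than scalar summaries.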