
    The future of robotic surgery

    © 2018 Royal College of Surgeons. For 20 years Intuitive Surgical’s da Vinci® system has held the monopoly in minimally invasive robotic surgery. Restrictive patenting, a well-developed marketing strategy and a high-quality product have protected the company’s leading market share.1 However, owing to the nuances of US patenting law, many of Intuitive Surgical’s earliest patents will expire in the next couple of years. With such a shift in backdrop, many of Intuitive Surgical’s competitors (from medical and industrial robotic backgrounds) have initiated robotic programmes – some of which are available for clinical use now. The next section of the review focuses on new and developing robotic systems in the field of minimally invasive surgery (Table 1), single-site surgery (Table 2), natural orifice transluminal endoscopic surgery (NOTES) and non-minimally invasive robotic systems (Table 3). Peer reviewed. Final published version.

    3D Virtual Worlds and the Metaverse: Current Status and Future Possibilities

    Moving from a set of independent virtual worlds to an integrated network of 3D virtual worlds, or Metaverse, rests on progress in four areas: immersive realism, ubiquity of access and identity, interoperability, and scalability. For each area, the current status and the developments needed to achieve a functional Metaverse are described. Factors that support the formation of a viable Metaverse, such as institutional and popular interest and ongoing improvements in hardware performance, and factors that constrain the achievement of this goal, including limits in computational methods and unrealized collaboration among virtual world stakeholders and developers, are also considered.

    Design, validation and implementation of a virtual reality high fidelity laparoscopic appendicectomy curriculum

    INTRODUCTION: The treatment for acute appendicitis is laparoscopic appendicectomy (LA), usually performed by trainees who face significant challenges to training. Simulation curricula are being increasingly utilised and optimised to accelerate learning and improve skill retention in a safe environment. The aim of this study is to produce and implement a virtual reality (VR) curriculum for LA on the high-fidelity LAP Mentor VR simulator. METHODOLOGY: Performance data of randomised experts and novices were compared to assess the construct validity of the LAP Mentor basic skills (BS) and LA modules. Face validity of the simulator and module was assessed by questionnaire. These results informed the construction of a VR LA curriculum on an evidence-based theoretical framework. The curriculum was implemented and evaluated by analysis of participant diaries. RESULTS: Thirty-five novices and 25 experienced surgeons performed either BS, five LA procedural tasks or the LA full procedure. Both modules demonstrated construct validity. The LA module was deemed moderately realistic and useful for developing laparoscopic psychomotor skills. Seven novice trainees completed the new LA curriculum (three others dropped out). Analysis of participant diaries revealed the presence of frustration, the benefits of feedback sessions, and the advantages and pitfalls of open access. DISCUSSION: Evaluations of the implementation of similar curricula are rare, and participant diaries led to critical insights. The curriculum was difficult and sometimes frustrating, mitigated by rewarding experiences and coaching. The latter facilitated deliberate practice. Scheduling issues were mitigated by open access. Limitations of the curriculum include the invariable presentation of appendicitis; furthermore, the reasons for dropouts are not known. CONCLUSION: Several BS and all LA tasks are construct-valid. A new VR LA curriculum was implemented, and analysis of participant diaries yielded critical insights into real-world implementation. Future study should investigate its effect on real-world performance and patient outcomes.

    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and task domain perspective is thus critical to the achievement of successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies associated with redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework is based on extending, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) in support of real-time operations. In an effort to validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated with a simulated weapons control system multitasking environment. The results of this study demonstrated significant performance improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing resources across multiple sensory and WM resources. These results provide initial empirical support for validation of the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated multimodal design guidelines may be applicable to a wide range of information-intensive computer-based multitasking environments.

    Augmented visual, auditory, haptic, and multimodal feedback in motor learning: A review

    It is generally accepted that augmented feedback, provided by a human expert or a technical display, effectively enhances motor learning. However, discussion of the way to most effectively provide augmented feedback has been controversial. Related studies have focused primarily on simple or artificial tasks enhanced by visual feedback. Recently, technical advances have made it possible also to investigate more complex, realistic motor tasks and to implement not only visual, but also auditory, haptic, or multimodal augmented feedback. The aim of this review is to address the potential of augmented unimodal and multimodal feedback in the framework of motor learning theories. The review addresses the reasons for the different impacts of feedback strategies within or between the visual, auditory, and haptic modalities, and the challenges that need to be overcome to provide appropriate feedback in these modalities, either in isolation or in combination. Accordingly, the design criteria for successful visual, auditory, haptic, and multimodal feedback are elaborated.

    A Prospective Look: Key Enabling Technologies, Applications and Open Research Topics in 6G Networks

    The fifth generation (5G) mobile networks are envisaged to enable a plethora of breakthrough advancements in wireless technologies, providing support of a diverse set of services over a single platform. While the deployment of 5G systems is scaling up globally, it is time to look ahead for beyond 5G systems. This is driven by the emerging societal trends, calling for fully automated systems and intelligent services supported by extended reality and haptics communications. To accommodate the stringent requirements of their prospective applications, which are data-driven and defined by extremely low-latency, ultra-reliable, fast and seamless wireless connectivity, research initiatives are currently focusing on a progressive roadmap towards the sixth generation (6G) networks. In this article, we shed light on some of the major enabling technologies for 6G, which are expected to revolutionize the fundamental architectures of cellular networks and provide multiple homogeneous artificial intelligence-empowered services, including distributed communications, control, computing, sensing, and energy, from its core to its end nodes. Particularly, this paper aims to answer several 6G framework related questions: What are the driving forces for the development of 6G? How will the enabling technologies of 6G differ from those in 5G? What kind of applications and interactions will they support which would not be supported by 5G? We address these questions by presenting a profound study of the 6G vision and outlining five of its disruptive technologies, i.e., terahertz communications, programmable metasurfaces, drone-based communications, backscatter communications and tactile internet, as well as their potential applications. Then, by leveraging the state-of-the-art literature surveyed for each technology, we discuss their requirements, key challenges, and open research problems.

    THE EFFECT OF HAPTIC INTERACTION AND LEARNER CONTROL ON STUDENT PERFORMANCE IN AN ONLINE DISTANCE EDUCATION COURSE

    Today’s learners are taking advantage of a whole new world of multimedia and hypermedia experiences to gain understanding and construct knowledge. At the same time, teachers and instructional designers are producing these experiences at a rapid pace. Many angles of interactivity with digital content continue to be researched, as is the case with this study. The purpose of this study is to determine whether there is a significant difference in the performance of distance education students who exercise learner control interactivity through a traditional input device versus students who exercise learner control interactivity through haptic input methods. This study asks three main questions about the relationship and potential impact of touch input on the interactivity sequence a learner chooses while participating in an online distance education course. Effects were measured using criteria from logged assessments within one module of a distance education course. This study concludes that learner control sequence choices did have significant effects on learner outcomes. However, input method did not. The sequence that learners chose had positive effects on scores, the number of attempts it took to pass assessments, and the overall range of scores per assessment attempt. Touch input learners performed as well as traditional input learners, and summative first sequence learners outperformed all other learners. These findings support the beliefs that new input methods are not detrimental and that learner-controlled options while participating in digital online courses are valuable for learners, under certain conditions.

    Simulating molecular docking with haptics

    Intermolecular binding underlies various metabolic and regulatory processes of the cell, and the therapeutic and pharmacological properties of drugs. Molecular docking systems model and simulate these interactions in silico and allow the study of the binding process. In molecular docking, haptics enables the user to sense the interaction forces and intervene cognitively in the docking process. Haptics-assisted docking systems provide an immersive virtual docking environment where the user can interact with the molecules, feel the interaction forces using their sense of touch, identify visually the binding site, and guide the molecules to their binding pose. Despite a forty-year research effort, however, the docking community has been slow to adopt this technology. Proprietary, unreleased software, expensive haptic hardware and limits on processing power are the main reasons for this. Another significant factor is the size of the molecules simulated, limited to small molecules. The focus of the research described in this thesis is the development of an interactive haptics-assisted docking application that addresses the above issues, and enables the rigid docking of very large biomolecules and the study of the underlying interactions. Novel methods for computing the interaction forces of binding on the CPU and GPU, in real time, have been developed. The force calculation methods proposed here overcome several computational limitations of previous approaches, such as precomputed force grids, and could potentially be used to model molecular flexibility at haptic refresh rates. Methods for force scaling, multipoint collision response, and haptic navigation are also reported that address newfound issues particular to the interactive docking of large systems, e.g. force stability at molecular collision. The result is a haptics-assisted docking application, Haptimol RD, that runs on relatively inexpensive consumer-level hardware (i.e. there is no need for specialized/proprietary hardware).
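    The kind of non-bonded force evaluation at the heart of such a system can be sketched briefly. The following is a hedged illustration, not code from the thesis: it computes pairwise Lennard-Jones forces between two rigid sets of atoms with NumPy, with the function name and parameter values (`epsilon`, `sigma`) assumed for illustration. A real haptics-assisted docking application would also include electrostatics and would evaluate forces inside a haptic loop running at roughly 1 kHz.

    ```python
    import numpy as np

    def lennard_jones_forces(pos_a, pos_b, epsilon=0.2, sigma=3.4):
        """Pairwise Lennard-Jones forces on molecule A's atoms due to molecule B's.

        pos_a: (n, 3) array of atom coordinates for molecule A (angstroms).
        pos_b: (m, 3) array of atom coordinates for molecule B (angstroms).
        Returns an (n, 3) array of force vectors (arbitrary energy/length units).
        """
        # Displacement vectors r_ij = a_i - b_j, shape (n, m, 3), via broadcasting.
        diff = pos_a[:, None, :] - pos_b[None, :, :]
        r2 = np.sum(diff ** 2, axis=-1)        # squared distances, shape (n, m)
        inv_r2 = sigma ** 2 / r2               # (sigma / r)^2
        inv_r6 = inv_r2 ** 3                   # (sigma / r)^6
        # Force magnitude over r: F/r = 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2.
        f_over_r = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2
        # Sum contributions from every atom of B acting on each atom of A.
        return np.sum(f_over_r[:, :, None] * diff, axis=1)
    ```

    The resultant force on the mobile molecule would then be scaled to the haptic device's output range each frame; the grid-free formulation above is what allows the molecule geometry to change between frames, unlike a precomputed force grid.
    
    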