
    Interactive Planning of Cryotherapy Using Physically-Based Simulation

    Cryotherapy is a rapidly growing minimally invasive technique for the treatment of certain tumors. It consists of destroying cancer cells with extreme cold delivered at the tip of a needle-like probe. As the resulting iceball is often smaller than the targeted tumor, a key to the success of cryotherapy is planning the position and orientation of the multiple probes required to treat a tumor while avoiding damage to the surrounding tissues. To provide such a planning tool, a number of challenges need to be addressed, such as fast and accurate computation of the freezing process and interactive positioning of the virtual cryoprobes in the pre-operative image volume. To address these challenges, we present an approach that relies on an advanced computational framework and a gesture-based planning system using contact-less technology to remain compatible with use in a sterile environment.
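
    To give a concrete flavour of the kind of field computation such a planner relies on, the sketch below runs a deliberately simplified 2D heat-diffusion simulation around a single probe tip and thresholds the result to estimate the frozen region. It is not the paper's computational framework: the grid size, tissue diffusivity, probe temperature, and the -40 °C lethal-isotherm threshold are illustrative assumptions, and perfusion and phase change are ignored.

    # Minimal sketch of estimating an iceball footprint around a cryoprobe tip.
    # Illustrative parameters only: simple 2D diffusion, no perfusion, no phase change.
    import numpy as np

    def simulate_freeze(grid=64, steps=2000, dx=1e-3, dt=0.05,
                        alpha=1.4e-7, t_body=37.0, t_probe=-140.0):
        """Explicit finite-difference diffusion of temperature around a probe tip."""
        T = np.full((grid, grid), t_body)
        tip = (grid // 2, grid // 2)
        for _ in range(steps):
            T[tip] = t_probe                                  # probe tip held cryogenic
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
            T = T + alpha * dt * lap                          # explicit Euler update
            T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = t_body  # body-temperature boundary
        return T

    T = simulate_freeze()
    iceball = T <= -40.0                      # an often-cited lethal isotherm (assumption)
    print("frozen cells:", int(iceball.sum()))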

    Module for contactless control of diagnostic images in the operating room using the Leap Motion system and the 3D Slicer software

    During surgical procedures, it is important that the personnel (surgeons, residents, or assistants) interact with the patient while avoiding any physical contact with equipment and materials that might not have been appropriately sterilized, in order to prevent patient infections and complications after surgery. With the increased availability of diagnostic images, this technology has become indispensable in operating rooms, but it is not always possible to maintain asepsis control of the computer equipment on which the visualization programs run, which hinders the staff's access to the information contained in the images. This paper describes the development of a system that allows personnel to manipulate a medical image display program using gestures, so that the surgeon or the nurse avoids direct contact with the computer. The system, which requires a computer running the 3D Slicer software and a Leap Motion (LM) device, uses hand gestures to provide access to basic operations such as moving between slices of a volume, changing the image size, and changing the anatomical visualization plane; operations that are essential to the surgeon for spatial orientation and decision making.
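
    The core of such a module is the mapping from raw hand motion to the viewer operations listed above. The sketch below shows one plausible mapping, independent of the actual Leap Motion SDK and 3D Slicer API: it classifies an abstract palm-displacement vector into a discrete command. The classify_swipe helper, the command labels, and the 40 mm threshold are assumptions for illustration; wiring the result to Slicer's slice and zoom controls is left out.

    from typing import Optional, Tuple

    def classify_swipe(displacement: Tuple[float, float, float],
                       threshold_mm: float = 40.0) -> Optional[str]:
        """Map the dominant axis of a palm displacement (mm) to a viewer command."""
        dx, dy, dz = displacement
        axis, value = max(enumerate((dx, dy, dz)), key=lambda kv: abs(kv[1]))
        if abs(value) < threshold_mm:
            return None                       # motion too small: treat as noise
        if axis == 0:                         # horizontal swipe scrolls through slices
            return "NEXT_SLICE" if value > 0 else "PREV_SLICE"
        if axis == 1:                         # vertical swipe zooms
            return "ZOOM_IN" if value > 0 else "ZOOM_OUT"
        return "SWITCH_PLANE"                 # push in depth toggles the anatomical plane

    print(classify_swipe((65.0, 5.0, -2.0)))  # -> NEXT_SLICE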

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics, and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope, and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
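
    One common mechanism behind gaze selection of this kind is dwell time: a selection fires when the gaze rests on the same target long enough. The sketch below is a minimal dwell selector, not the thesis implementation; the DwellSelector class, the 0.8 s threshold, and the region identifiers are illustrative assumptions.

    import time

    class DwellSelector:
        """Fires a selection when the gaze rests on the same region long enough."""

        def __init__(self, dwell_s=0.8):
            self.dwell_s = dwell_s
            self.current = None               # region id the gaze currently rests on
            self.since = 0.0                  # time when that region was first hit

        def update(self, region_id, now=None):
            """Feed the region under the gaze point; returns the region id on selection."""
            now = time.monotonic() if now is None else now
            if region_id != self.current:
                self.current, self.since = region_id, now
                return None
            if region_id is not None and now - self.since >= self.dwell_s:
                self.since = now              # reset so the selection does not repeat
                return region_id
            return None

    selector = DwellSelector()
    selector.update("forceps", now=0.0)
    print(selector.update("forceps", now=1.0))   # -> "forceps" after 1 s of dwell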

    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as the third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer-vision-based system, we show how users can provide effective input through an investigation of input performance with different parts of the body, and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint and demonstrate its unique capabilities through an exploration of the design space with application examples. Thirdly, we explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of the algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. Finally, we look across our work to distil guidelines for interface design and further considerations of how motion correlation can be used, both in general and for touchless gestures.
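
    The matching step at the heart of motion correlation can be illustrated with a short sketch: the user's recent trajectory is correlated against each moving target's trajectory over a sliding window, and a match is declared when the correlation is high on both axes. This is only a sketch of the principle, not the TraceMatch or MatchPoint code; the window length, the threshold, and the per-axis Pearson correlation are illustrative assumptions.

    import numpy as np

    def motion_correlation(user_xy: np.ndarray, target_xy: np.ndarray,
                           threshold: float = 0.8) -> bool:
        """True when user and target trajectories (shape (N, 2)) correlate on both axes."""
        corr_x = np.corrcoef(user_xy[:, 0], target_xy[:, 0])[0, 1]
        corr_y = np.corrcoef(user_xy[:, 1], target_xy[:, 1])[0, 1]
        return corr_x > threshold and corr_y > threshold

    # Toy example: a target moving on a circle and a hand roughly following it.
    t = np.linspace(0, 2 * np.pi, 60)
    target = np.stack([np.cos(t), np.sin(t)], axis=1)
    hand = target + np.random.normal(scale=0.05, size=target.shape)
    print(motion_correlation(hand, target))      # -> True for a faithful follow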

    Exploring glass as a novel method for hands-free data entry in flexible cystoscopy

    We present a way to annotate cystoscopy findings on Google Glass in a reproducible and hands-free manner, for use by surgeons during operations in the sterile environment, inspired by the current practice of hand-drawn sketches. We developed three data-entry variants based on speech and head movements. We assessed the feasibility, benefits, and drawbacks of the system in laboratory trials with 8 surgeons and Foundation Doctors having up to 30 years' cystoscopy experience at a UK hospital. We report the data-entry speed and error rate of the input modalities and contrast them with the participants' feedback on perceived usability, acceptance, and suitability for deployment. The results are supportive of new data-entry technologies and point out directions for future improvement of eyewear computers. The findings can be generalised to other endoscopic procedures (e.g. OGD/laryngoscopy) and could be integrated within hospital IT systems in the future.
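
    As an illustration of what a head-movement entry variant might look like, the sketch below quantises head yaw and pitch into one of four bladder-region options once the rotation exceeds a threshold. The angles, the 15° threshold, and the option labels are assumptions for illustration, not the study's actual mapping.

    def head_gesture_to_option(yaw_deg: float, pitch_deg: float,
                               threshold_deg: float = 15.0):
        """Return a findings-menu option when the head is turned far enough, else None."""
        if abs(yaw_deg) < threshold_deg and abs(pitch_deg) < threshold_deg:
            return None                                  # head roughly centred: no input
        if abs(yaw_deg) >= abs(pitch_deg):
            return "LESION_LEFT_WALL" if yaw_deg < 0 else "LESION_RIGHT_WALL"
        return "LESION_DOME" if pitch_deg > 0 else "LESION_TRIGONE"

    print(head_gesture_to_option(-22.0, 4.0))            # -> LESION_LEFT_WALL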

    The State of the Art of Spatial Interfaces for 3D Visualization

    We survey the state of the art of spatial interfaces for 3D visualization. Interaction techniques are crucial to data visualization processes, and the visualization research community has been calling for more research on interaction for years. Yet research papers focusing on interaction techniques, in particular for 3D visualization purposes, are not always published in visualization venues, sometimes making it challenging to synthesize the latest interaction and visualization results. We therefore introduce a taxonomy of interaction techniques for 3D visualization. The taxonomy is organized along two axes: the primary source of input on the one hand, and the visualization task they support on the other. Surveying the state of the art allows us to highlight specific challenges and missed opportunities for research in 3D visualization. In particular, we call for additional research in: (1) controlling 3D visualization widgets to help scientists better understand their data; (2) 3D interaction techniques for dissemination, which are under-explored yet show great promise for helping museums and science centers in their mission to share recent knowledge; and (3) developing new measures that move beyond traditional time and error metrics for evaluating visualizations that include spatial interaction.
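
    The two-axis organisation can be pictured as a simple lookup structure keyed by input source and visualization task, as in the sketch below; the example techniques and axis values are placeholders, not the survey's actual classification.

    from collections import defaultdict

    taxonomy = defaultdict(list)                 # (input source, task) -> techniques

    def classify(technique: str, input_source: str, task: str) -> None:
        taxonomy[(input_source, task)].append(technique)

    classify("mid-air 3D widget manipulation", "bare hands", "spatial selection")
    classify("tangible slicing plane", "tracked prop", "volume exploration")

    for axes, techniques in taxonomy.items():
        print(axes, "->", techniques)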

    Understanding interaction mechanics in touchless target selection

    We use gestures frequently in daily life: to interact with people, pets, or objects. But interacting with computers using mid-air gestures continues to challenge the design of touchless systems. Traditional approaches to touchless interaction focus on exploring gesture inputs and evaluating user interfaces. I shift the focus from gesture elicitation and interface evaluation to touchless interaction mechanics. I argue for a novel approach to generating design guidelines for touchless systems: to use fundamental interaction principles instead of a reactive adaptation to the sensing technology. In five sets of experiments, I explore visual and pseudo-haptic feedback, motor intuitiveness, handedness, and perceptual Gestalt effects. In particular, I study the interaction mechanics of touchless target selection. To that end, I introduce two novel interaction techniques: touchless circular menus, which allow command selection using directional strokes, and interface topographies, which use pseudo-haptic feedback to guide steering–targeting tasks. The results illuminate different facets of touchless interaction mechanics. For example, motor-intuitive touchless interactions explain how our sensorimotor abilities inform touchless interface affordances: we often make a holistic oblique gesture instead of several orthogonal hand gestures while reaching toward a distant display. Following the Gestalt theory of visual perception, we found that similarity between user interface (UI) components decreased user accuracy, while good continuity made users faster. Other findings include hemispheric asymmetry affecting the transfer of training between dominant and nondominant hands, and pseudo-haptic feedback improving touchless accuracy. The results of this dissertation contribute design guidelines for future touchless systems. Practical applications of this work include the use of touchless interaction techniques in various domains, such as entertainment, consumer appliances, surgery, patient-centric health settings, smart cities, interactive visualization, and collaboration.
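
    The directional-stroke selection behind touchless circular menus can be sketched compactly: the angle of a hand stroke is mapped to the menu sector that contains it. The command labels, the sector count, and the select_from_stroke helper below are illustrative assumptions, not the dissertation's implementation.

    import math

    COMMANDS = ["copy", "paste", "delete", "undo", "zoom", "rotate", "pan", "save"]

    def select_from_stroke(dx: float, dy: float):
        """Map a stroke vector (screen units) to the circular-menu command in that direction."""
        if math.hypot(dx, dy) < 1e-6:
            return None                                  # no movement, no selection
        angle = math.atan2(dy, dx) % (2 * math.pi)       # 0 rad points right, CCW positive
        sector = 2 * math.pi / len(COMMANDS)
        return COMMANDS[int(angle // sector)]

    print(select_from_stroke(1.0, 0.3))                  # shallow up-right stroke -> "copy"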

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, the surgical instruments, and the patient. Suboptimal interaction with patient data and challenges in mastering 3D anatomy based on ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only as a visualization device improving traditional workflows. Consequently, the technology has gained little of the maturity it requires to redefine procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies. The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches towards improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
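
    Moving spatial information between co-registered bodies ultimately reduces to chaining rigid transforms between coordinate frames. The sketch below expresses a point annotated in the imaging-device frame in the head-mounted display frame by composing homogeneous transforms; the frame names and matrices are illustrative assumptions rather than the dissertation's calibration pipeline.

    import numpy as np

    def make_transform(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    # Illustrative poses: imaging device and HMD expressed in a common room frame.
    room_T_imager = make_transform(np.eye(3), np.array([2.0, 0.0, 1.5]))
    room_T_hmd = make_transform(np.eye(3), np.array([0.5, 1.0, 1.6]))

    point_imager = np.array([0.1, 0.0, 0.3, 1.0])        # annotation in the imager frame
    hmd_T_imager = np.linalg.inv(room_T_hmd) @ room_T_imager
    point_hmd = hmd_T_imager @ point_imager              # the same point in the HMD frame
    print(point_hmd[:3])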