
    Prevalence of haptic feedback in robot-mediated surgery : a systematic review of literature

    © 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4

    With the successful uptake of robotic systems in minimally invasive surgery and the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the integration of haptic feedback technology into RS, which would permit the operating surgeon on the console to receive haptic information about the type of tissue being operated on. The main advantage is that it allows the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage from both the direct and indirect effects of excessive tissue force or tension being applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improving the application and implementation of haptic feedback technology for the operating surgeon on the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, no solutions or products are available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery, with the aim of further improving patient care and robotic surgical technology. Peer reviewed

    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that can deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can easily be extended to other probe-based imaging modalities.

    Comment: 7 pages, 5 figures, ICRA 202
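    The trajectory-update step described above (a scanning trajectory defined once on a reference frame and continuously re-projected onto the moving tissue via projective geometry) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the point correspondences, the motion homography, and the DLT fit are all invented for the example; in the real framework the correspondences would come from real-time feature tracking.

    ```python
    import numpy as np

    def fit_homography(src, dst):
        """Estimate the 3x3 homography mapping src -> dst (both Nx2, N >= 4)
        with the direct linear transform (DLT)."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
            rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
        _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
        H = Vt[-1].reshape(3, 3)          # null vector = flattened homography
        return H / H[2, 2]

    def reproject(points, H):
        """Map Nx2 points through homography H, with homogeneous normalisation."""
        pts = np.hstack([points, np.ones((len(points), 1))])
        mapped = pts @ H.T
        return mapped[:, :2] / mapped[:, 2:3]

    # Illustrative tracked features: positions on the reference frame and the
    # corresponding positions on the current (moved/deformed) tissue view.
    ref_feats = np.array([[0, 0], [100, 0], [100, 100], [0, 100], [50, 60]], float)
    H_motion = np.array([[1.02, 0.01, 5.0],
                         [0.00, 0.98, -3.0],
                         [1e-4, 0.00, 1.0]])   # ground-truth motion, invented
    cur_feats = reproject(ref_feats, H_motion)

    # The scanning trajectory is defined once on the reference frame and
    # continuously re-projected onto the current frame as the tissue moves.
    trajectory_ref = np.array([[25, 25], [50, 25], [75, 25]], float)
    H_est = fit_homography(ref_feats, cur_feats)
    trajectory_cur = reproject(trajectory_ref, H_est)
    ```

    In the actual framework the updated trajectory would then drive the visual servoing controller of the robotic arm holding the imaging probe.
    
    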

    Art and Medicine: A Collaborative Project Between Virginia Commonwealth University in Qatar and Weill Cornell Medicine in Qatar

    Four faculty researchers, two from Virginia Commonwealth University in Qatar and two from Weill Cornell Medicine in Qatar, developed a one-semester, workshop-based course in Qatar exploring the connections between art and medicine in a contemporary context. Twelve students (6 art / 6 medicine) were enrolled in the course. The course included presentations by clinicians, medical engineers, artists, computing engineers, an art historian, a graphic designer, a painter, and other experts from the fields of art, design, and medicine. To measure the student experience of interdisciplinarity, the faculty researchers employed a mixed-methods approach involving psychometric tests and observational ethnography. Data instruments included pre- and post-course semi-structured audio interviews, pre-test/post-test psychometric instruments (the Budner Scale and Torrance Tests of Creativity), observational field notes, self-reflective blogging, and videography. This book describes the course and the experience of the students. It also contains images of the interdisciplinary work they created for a culminating class exhibition. Finally, the book provides insight into how different fields in a Middle Eastern context can share critical/analytical thinking tools to refine their own professional practices.

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications into daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interaction and collaboration; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Tourist Destination Placement in Fiction Films: An Applied Research Proposal

    The study proposed here draws on the concept of so-called "film-induced tourism", understood as a strategy for promoting tourist destinations in fiction films. This objective is pursued through the development of an analysis model that enables the identification of the unique features of the destination as a space where the events depicted in the films occur, its function, and its relationship with other narrative variables. The model has been applied to a corpus of films that other researchers have identified as essential to the growth of tourism in the locations they present. The results suggest that the space needs to be referential, irrespective of whether the diegetic space is the same as the filming location; that its status should be closer to that of a character than of a setting; that its mise-en-scène should emphasise interaction with other characters; and that it should be essential to the events that shape the structure of the story.

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colours, lights). The large amount of collected data allows for methods exploiting the totality of the data (dense approaches) or a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems.

    First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot's motion with a linear observer. Sparse visual data, on the other hand, are extracted as geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered.

    Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is both highly important and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. This method has also been validated with a novel simulator designed for the da Vinci robotic platform, with the purpose of easing interfacing and employment in ideal conditions for testing and validation.

    The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary. This leaves no possibility to actively adapt the input trajectories in order to optimise specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimising the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
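    As a rough illustration of the passive Kalman-based estimation idea mentioned above, here is a minimal linear filter tracking a 3D point with a constant-velocity model from position measurements. This is a deliberate simplification: the observer in the thesis estimates the full needle pose from projected endoscopic measurements, which requires a nonlinear (extended) formulation, and every number below is invented for the example.

    ```python
    import numpy as np

    dt = 0.05                                      # sample time (s), arbitrary
    F = np.eye(6)                                  # constant-velocity model:
    F[:3, 3:] = dt * np.eye(3)                     # state = [position, velocity]
    Hm = np.hstack([np.eye(3), np.zeros((3, 3))])  # we measure position only
    Q = 1e-5 * np.eye(6)                           # process noise covariance
    R = 1e-3 * np.eye(3)                           # measurement noise covariance

    def kf_step(x, P, z):
        """One predict/update cycle of the linear Kalman filter."""
        x, P = F @ x, F @ P @ F.T + Q              # predict
        S = Hm @ P @ Hm.T + R                      # innovation covariance
        K = P @ Hm.T @ np.linalg.inv(S)            # Kalman gain
        x = x + K @ (z - Hm @ x)                   # correct with measurement z
        P = (np.eye(6) - K @ Hm) @ P
        return x, P

    # Simulate a point moving at constant velocity and filter its positions;
    # the filter converges to both the position and the (unmeasured) velocity.
    true_vel = np.array([0.1, -0.05, 0.02])
    x, P = np.zeros(6), np.eye(6)
    for k in range(200):
        z = true_vel * (k + 1) * dt                # noiseless measurement here
        x, P = kf_step(x, P, z)
    ```

    The same predict/correct structure underlies the pose observer; only the state, the measurement model (a camera projection), and consequently the update equations change.
    
    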

    Portable dVRK: an augmented V-REP simulator of the da Vinci Research Kit

    The da Vinci Research Kit (dVRK) is a first-generation da Vinci robot repurposed as a research platform and coupled with software and controllers developed by research users. A fairly wide community currently shares the dVRK (32 systems at 28 sites worldwide). Access to the robotic system for training surgeons and for developing new surgical procedures, tools, and control modalities is still difficult due to limited availability and high maintenance costs. Simulation tools provide a low-cost, easy, and safe alternative to the real platform for preliminary research and training activities. The Portable dVRK, described in this work, is based on a V-REP simulator of the dVRK patient-side and endoscopic camera manipulators, which are controlled through two haptic interfaces and a 3D viewer, respectively. The V-REP simulator is augmented with a physics engine, allowing it to render the interaction of newly developed tools with soft objects. Full integration into the ROS control architecture makes the simulator flexible and easy to interface with other devices. Several scenes have been implemented to illustrate the performance and potential of the developed simulator.

    Colour for behavioural success

    Colour information not only helps sustain the survival of animal species by guiding sexual selection and foraging behaviour, but is also an important factor in the cultural and technological development of our own species. This is illustrated by examples from the visual arts and from state-of-the-art imaging technology, where the strategic use of colour has become a powerful tool for guiding the planning and execution of interventional procedures. The functional role of colour information, in terms of its potential benefits to behavioural success across species, is addressed in the introduction to clarify why colour perception may have evolved to generate behavioural success. It is argued that evolutionary and environmental pressures influence not only colour trait production in different species but also their ability to process and exploit colour information for goal-specific purposes. We then leap straight to the human primate, with insight from current research on the facilitating role of colour cues in performance training with precision technology for image-guided surgical planning and intervention. It is shown that local colour cues in two-dimensional images generated by a surgical fisheye camera help individuals rapidly become more precise across a limited number of trial sets in simulator training for specific manual gestures with a tool. This facilitating effect of a local colour cue on performance evolution in a video-controlled simulator (pick-and-place) task can be explained in terms of colour-based figure-ground segregation, which facilitates attention to local image parts when more than two layers of subjective surface depth are present, as in all natural and surgical images.
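    The figure-ground segregation idea invoked above can be illustrated with a toy colour-based segmentation: a pixel is assigned to the "figure" when one colour channel dominates the others by a margin. Everything here (the synthetic image, the chosen channel, the margin) is invented for illustration; the study itself concerns colour cues in real fisheye camera images, not this heuristic.

    ```python
    import numpy as np

    def colour_mask(img, channel=1, margin=30):
        """Return a boolean mask of pixels whose chosen channel exceeds both
        other channels by at least `margin` (8-bit intensity units)."""
        img = img.astype(np.int16)                     # avoid uint8 wraparound
        others = [c for c in range(3) if c != channel]
        return ((img[..., channel] - img[..., others[0]] >= margin) &
                (img[..., channel] - img[..., others[1]] >= margin))

    # Synthetic 8x8 grey image with a 3x3 green "target" patch: the colour
    # cue pops the target out of the achromatic background.
    img = np.full((8, 8, 3), 120, dtype=np.uint8)
    img[2:5, 2:5] = (40, 200, 40)                      # green figure pixels

    mask = colour_mask(img)
    print(mask.sum())  # -> 9 segmented "figure" pixels
    ```

    The point of the toy example is only that a chromatic difference gives a trivially separable figure even when luminance differences are ambiguous, which is the mechanism the abstract proposes for the attentional benefit.
    
    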

    An Asynchronous Simulation Framework for Multi-User Interactive Collaboration: Application to Robot-Assisted Surgery

    The field of surgery is continually evolving, as there is always room for improvement in the post-operative health of the patient as well as the comfort of the Operating Room (OR) team. While the success of surgery is contingent upon the skills of the surgeon and the OR team, the use of specialized robots has been shown to improve surgery-related outcomes in some cases. These outcomes are currently measured using a wide variety of metrics that include patient pain and recovery, the surgeon's comfort, the duration of the operation, and the cost of the procedure. Additional research is needed to better understand the optimal criteria for benchmarking surgical performance. Presently, surgeons are trained to perform robot-assisted surgeries using interactive simulators. However, in the absence of well-defined performance standards, these simulators focus primarily on the simulation of the operative scene and not on the complexities associated with the multiple inputs to a real-world surgical procedure. Because interactive simulators are typically designed for specific robots that perform a small number of tasks controlled by a single user, they are inflexible in terms of their portability to different robots and the inclusion of multiple operators (e.g., nurses, medical assistants). Additionally, while most simulators provide high-quality visuals, simplification techniques are often employed to avoid stability issues in physics computation, contact dynamics, and multi-manual interaction. This study addresses the limitations of existing simulators by outlining the specifications required to develop techniques that mimic real-world interactions and collaboration. Moreover, this study focuses on the inclusion of distributed control, shared task allocation, and assistive feedback (through machine learning and secondary and tertiary operators) alongside the primary human operator.
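    A minimal sketch of the asynchronous multi-operator idea, assuming a shared command queue between several operators and a simulation loop that consumes commands at its own pace. The roles, command names, and queue-based design below are illustrative assumptions, not the framework's actual API.

    ```python
    import asyncio

    async def operator(name, queue, commands):
        """A (simulated) operator asynchronously pushing commands."""
        for cmd in commands:
            await queue.put((name, cmd))
            await asyncio.sleep(0)  # yield control, as a real input device would

    async def simulation(queue, n_expected):
        """Simulation loop: consume and apply commands as they arrive."""
        applied = []
        while len(applied) < n_expected:
            who, cmd = await queue.get()
            applied.append((who, cmd))  # here: apply cmd to the simulated scene
        return applied

    async def main():
        q = asyncio.Queue()
        # Primary and secondary operators acting concurrently on one scene.
        ops = [operator("surgeon", q, ["grasp", "cut"]),
               operator("assistant", q, ["retract"])]
        sim = asyncio.create_task(simulation(q, 3))
        await asyncio.gather(*ops)
        return await sim

    log = asyncio.run(main())
    print(len(log))  # -> 3 commands applied
    ```

    The design point is decoupling: each operator runs at its own rate and the simulation never blocks on any single input source, which is the essence of the asynchronous multi-user collaboration the study targets.
    
    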