    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own control, with partial or no human involvement. Automation in surgery offers several important advantages, including increased precision of care due to sub-millimeter robot control, real-time use of biosignals for interventional care, improved surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also enable interventions that are too difficult for, or beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Augmented Reality

    Augmented Reality (AR) is a natural development from virtual reality (VR), which emerged several decades earlier, and it complements VR in many ways. Because the user can see real and virtual objects simultaneously, AR is far more intuitive, although it is not free from human factors and other restrictions. AR also demands less time and effort in applications, because the entire virtual scene and environment do not have to be constructed. In this book, several new and emerging application areas of AR are presented, divided into three sections. The first section contains applications in outdoor and mobile AR, such as construction, restoration, security, and surveillance. The second section deals with AR in the medical and biological domains and on the human body. The third and final section contains a number of new and useful applications in daily living and learning.

    Framework for augmented reality in Minimally Invasive laparoscopic surgery

    This article presents a framework for fusing pre-operative and intra-operative data for surgery guidance. The framework is employed in the context of Minimally Invasive Surgery (MIS) of the liver. From stereoscopic images, a three-dimensional point cloud is reconstructed in real time. This point cloud is then used to register a patient-specific biomechanical model, derived from Computed Tomography images, onto the laparoscopic view. In this way, internal structures such as vessels and tumors can be visualized to help the surgeon during the procedure. This is particularly relevant since abdominal organs undergo large deformations in the course of the surgery, making it difficult for surgeons to correlate the laparoscopic view with the pre-operative images. Our method has the potential to reduce the duration of the operation, as the biomechanical model makes it possible to estimate the in-depth position of tumors and vessels at any time during the surgery, which is essential to the surgical decision process. Results show that our method can be applied during a laparoscopic procedure without interfering with the surgical workflow.
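    As a rough illustration of the registration step this abstract describes, the sketch below rigidly aligns a CT-derived surface to an intraoperative stereo point cloud with a basic ICP loop. It is only a simplified stand-in: the actual framework registers a deformable, patient-specific biomechanical model, and the NumPy implementation, point counts, and synthetic transform here are illustrative assumptions.

```python
# Minimal rigid ICP sketch: align a preoperative surface to an intraoperative
# stereo point cloud. Simplified stand-in for the deformable registration in the paper.
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(preop_model, intraop_cloud, iters=30):
    """Iteratively align the preoperative model to the intraoperative cloud."""
    aligned = preop_model.copy()
    for _ in range(iters):
        # nearest-neighbour correspondences (brute force, for clarity only)
        d = np.linalg.norm(aligned[:, None, :] - intraop_cloud[None, :, :], axis=2)
        matches = intraop_cloud[d.argmin(axis=1)]
        R, t = best_rigid_transform(aligned, matches)
        aligned = aligned @ R.T + t
    return aligned

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = rng.normal(size=(200, 3))        # stand-in for the CT-derived surface
    angle = np.deg2rad(10)
    R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                       [np.sin(angle),  np.cos(angle), 0],
                       [0, 0, 1]])
    cloud = model @ R_true.T + np.array([0.05, -0.02, 0.1])  # simulated stereo reconstruction
    print(np.abs(icp(model, cloud) - cloud).max())           # residual after alignment
```

    In a real-time setting the brute-force distance matrix would be replaced by a spatial index, and the rigid transform by the biomechanical deformation model the abstract refers to.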

    Bio-Inspired Multi-Spectral Imaging Sensors and Algorithms for Image Guided Surgery

    Image guided surgery (IGS) utilizes emerging imaging technologies to provide additional structural and functional information to the physician in clinical settings. This additional visual information can help physicians delineate cancerous tissue during resection as well as avoid damage to nearby healthy tissue. Near-infrared (NIR) fluorescence imaging (700 nm to 900 nm wavelengths) is a promising imaging modality for IGS, for the following reasons: First, tissue absorption and scattering in the NIR window are very low, which allows for deeper imaging and localization of tumor tissue in the range of several millimeters to a centimeter, depending on the tissue surrounding the tumor. Second, spontaneous tissue fluorescence emission is minimal in the NIR region, allowing for high signal-to-background ratio imaging compared to visible-spectrum fluorescence imaging. Third, decoupling the fluorescence signal from the visible spectrum allows for optimization of NIR fluorescence while attaining high-quality color images. Fourth, there are two FDA-approved fluorescent dyes in the NIR region, namely methylene blue (MB) and indocyanine green, which can help to identify tumor tissue due to passive accumulation in human subjects. These advantages have led to the development of NIR fluorescence imaging systems for a variety of clinical applications, such as sentinel lymph node imaging, angiography, and tumor margin assessment. With these technological advances, secondary surgeries due to positive tumor margins or damage to healthy organs can be largely mitigated, reducing the emotional and financial toll on the patient. Currently, several NIR fluorescence imaging systems (NFIS) are available commercially or are undergoing clinical trials, such as FLARE, SPY, PDE, Fluobeam, and others. These systems capture multi-spectral images using complex optical equipment combined with real-time image processing to present an augmented view to the surgeon. The information is presented on a standard monitor above the operating bed, which requires the physician to stop the surgical procedure and look up at the monitor. The break in the surgical flow sometimes outweighs the benefits of fluorescence-based IGS, especially in time-critical surgical situations. Furthermore, these instruments tend to be very bulky and have a large footprint, which significantly complicates their adoption in an already crowded operating room. In this document, I present the development of a compact and wearable goggle system capable of real-time sensing of both NIR fluorescence and color information. The imaging system is inspired by the ommatidia of the monarch butterfly, in which pixelated spectral filters are integrated with light-sensitive elements. The pixelated spectral filters are fabricated via a carefully optimized nanofabrication procedure and integrated with a CMOS imaging array. The entire imaging system has been optimized for high signal-to-background fluorescence imaging using an analytical approach, and the efficacy of the system has been experimentally verified. The bio-inspired spectral imaging sensor is integrated with an FPGA for compact, real-time signal processing and with a wearable goggle for easy integration in the operating room. The complete imaging system is undergoing clinical trials at the Washington University in St. Louis Medical School for imaging sentinel lymph nodes in both breast cancer patients and melanoma patients.
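    The per-pixel processing such a single-chip visible/NIR sensor implies can be sketched as below: splitting a mosaicked raw frame into visible and NIR channels and painting high signal-to-background fluorescence onto the color view. The filter layout, threshold rule, and array names are assumptions for illustration; the actual system implements its pipeline on an FPGA behind the bio-inspired sensor.

```python
# Sketch of visible/NIR demosaicing and a fluorescence overlay for a pixelated
# spectral-filter sensor. Layout and threshold are illustrative assumptions.
import numpy as np

def demosaic_visible_nir(raw):
    """Split a raw frame whose pixel sites alternate visible-pass and NIR-pass filters."""
    visible = raw[0::2, 0::2].astype(float)   # sites under the visible-pass filter
    nir = raw[1::2, 1::2].astype(float)       # sites under the NIR-pass filter
    return visible, nir

def fluorescence_overlay(visible, nir, background_percentile=90):
    """Highlight pixels whose NIR signal rises well above the tissue background."""
    background = np.percentile(nir, background_percentile)
    mask = nir > 2.0 * background             # simple signal-to-background gate
    overlay = np.stack([visible] * 3, axis=-1) / visible.max()
    overlay[mask] = [0.0, 1.0, 0.0]           # paint fluorescent regions green
    return overlay

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    raw = rng.integers(0, 200, size=(512, 512)).astype(float)
    raw[100:140, 100:140] += 800              # simulated dye accumulation
    vis, nir = demosaic_visible_nir(raw)
    out = fluorescence_overlay(vis, nir)
    print(out.shape, out.max())
```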

    Recognition of Instrument Passing and Group Attention for Understanding Intraoperative State of Surgical Team

    Appropriate evaluation of the intraoperative state of a surgical team is essential for the improvement of teamwork and hence a safe surgical environment. Traditional methods to evaluate intraoperative team states, such as interviews and self-check questionnaires for each surgical team member, often require human effort, are time-consuming, and can be biased by individual recall. One effective solution is to analyze the surgical video and track the important team activities, such as whether the members are complying with the surgical procedure or are being distracted by unexpected events. However, due to the complexity of the situations in an operating room, identifying the team activities without any human effort remains challenging. In this work, we propose a novel approach that automatically recognizes and quantifies intraoperative activities from surgery videos. As a first step, we focus on recognizing two activities that especially involve multiple individuals: (a) passing of clean-packaged surgery instruments, which is a representative interaction between surgical technologists such as the circulating nurse and the scrub nurse, and (b) group attention that may be attracted by unexpected events. We record surgical videos as input and apply pose estimation and particle filters to extract each individual's face orientation, body orientation, and arm raises. These results, coupled with individual IDs, are then sent to an estimation model that provides the probability of each target activity. Simultaneously, a person model is generated and bound to each individual, describing all the involved activities along the timeline. We tested our method using videos of simulated activities. The results showed that the system was able to recognize instrument passing and group attention with F1 = 0.95 and F1 = 0.66, respectively. We also implemented a system with an interface that automatically annotated intraoperative activities along the video timeline, and invited feedback from surgical technologists. The results suggest that the quantified and visualized activities can help improve understanding of the intraoperative state of the surgical team.
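    A minimal sketch of the final estimation step, turning per-person pose features (face orientation, body orientation, arm raise) into activity probabilities, might look like the following. The feature encoding, thresholds, and logistic weighting are illustrative assumptions; the paper's particle-filter tracking and trained estimation model are not reproduced here.

```python
# Sketch: per-person pose features -> probabilities of instrument passing / group attention.
# Thresholds and weights are illustrative assumptions only.
from dataclasses import dataclass
import math

@dataclass
class PersonState:
    person_id: str
    face_yaw_deg: float      # face orientation from pose estimation
    body_yaw_deg: float      # body orientation
    arm_raised: bool         # whether an arm raise was detected

def _sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def _wrapped_diff(a_deg: float, b_deg: float) -> float:
    """Absolute angular difference wrapped into [0, 180] degrees."""
    d = abs(a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def instrument_passing_prob(a: PersonState, b: PersonState) -> float:
    """Two people roughly facing each other, at least one with an arm raised."""
    facing = _wrapped_diff(a.body_yaw_deg, b.body_yaw_deg) / 180.0   # 1.0 = face to face
    arm = 1.0 if (a.arm_raised or b.arm_raised) else 0.0
    return _sigmoid(6.0 * facing + 2.0 * arm - 6.0)

def group_attention_prob(people: list, target_yaw_deg: float) -> float:
    """Fraction of the team whose faces point toward a common direction of interest."""
    looking = sum(_wrapped_diff(p.face_yaw_deg, target_yaw_deg) < 30.0 for p in people)
    return looking / max(len(people), 1)

if __name__ == "__main__":
    scrub = PersonState("scrub_nurse", face_yaw_deg=90, body_yaw_deg=90, arm_raised=True)
    circ = PersonState("circulating_nurse", face_yaw_deg=270, body_yaw_deg=268, arm_raised=False)
    print(round(instrument_passing_prob(scrub, circ), 2))      # high: facing, arm raised
    print(round(group_attention_prob([scrub, circ], 90), 2))   # one of two looking toward 90 deg
```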

    Wearable Devices and their Implementation in Various Domains

    Wearable technologies are networked devices that collect data, track activities, and customize experiences to users' needs and desires. They are equipped with microchips, sensors, and wireless communications, all mounted into consumer electronics, accessories, and clothing. They use sensors to measure temperature, humidity, motion, heartbeat, and more. Wearables are deployed in various domains, such as healthcare, sports, agriculture, and navigation systems. Each wearable device is equipped with sensors, network ports, a data processor, a camera, and more. To allow monitoring and synchronizing of multiple parameters, typical wearables have multi-sensor capabilities and are configurable for the application purpose. For the wearer's convenience, wearables are lightweight, of modest shape, and multifunctional. Wearables perform the following tasks: sense, analyze, store, transmit, and apply. The processing may occur on the wearer or at a remote location. For example, if dangerous gases are detected, the data are processed and an alert is issued; the data may also be transmitted to a remote location for analysis, and the results communicated in real time to the user. Each scenario requires personalized mobile information processing, which transforms the sensory data into information, and then into knowledge that is of value to the individual responding to the situation.
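    The sense / analyze / store / transmit / apply loop described above can be sketched, under assumptions, as a simple monitoring routine; the gas-sensor reading function, safety threshold, and transmit stub below are placeholders rather than any specific device's API.

```python
# Sketch of a wearable's sense / analyze / store / transmit / apply loop,
# using a dangerous-gas reading as the example. All names are placeholders.
import json
import time
from collections import deque

GAS_PPM_LIMIT = 50.0                  # assumed safety threshold for the illustration

def read_gas_sensor() -> float:
    """Stand-in for a real sensor driver; returns a concentration in ppm."""
    return 12.0                       # replace with a hardware-specific read

def transmit(record: dict) -> None:
    """Stand-in for sending the record to a remote service over the device's radio."""
    print("TX:", json.dumps(record))

def monitoring_loop(samples: int = 5, period_s: float = 1.0) -> None:
    history = deque(maxlen=1000)                        # local store on the wearable
    for _ in range(samples):
        ppm = read_gas_sensor()                         # sense
        record = {"t": time.time(), "gas_ppm": ppm}     # analyze / package
        history.append(record)                          # store
        if ppm > GAS_PPM_LIMIT:                         # apply: alert the wearer
            record["alert"] = "dangerous gas level"
        transmit(record)                                # transmit for remote analysis
        time.sleep(period_s)

if __name__ == "__main__":
    monitoring_loop(samples=3, period_s=0.1)
```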