
    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have dramatically improved image accuracy, providing useful information previously unavailable to clinicians. In the surgical context, intraoperative imaging is of crucial value for the success of the operation. Many nontrivial scientific and technical problems must be addressed to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. All of these requisites must be satisfied to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective use of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments. The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.
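As context for the motion-compensation contribution above, a common building block in ultrasound target tracking is template matching between successive frames. The following is a minimal illustrative sketch using exhaustive normalized cross-correlation on synthetic data; it is not the method actually used in the thesis, and all sizes and values are hypothetical:

```python
import numpy as np

def ncc_displacement(template, frame):
    """Find where `template` best matches inside `frame` by exhaustive
    normalized cross-correlation (NCC); returns (y, x) and the NCC score."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            w = frame[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = t_norm * np.sqrt((wc ** 2).sum())
            if denom == 0:
                continue
            score = (t * wc).sum() / denom
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best

# Synthetic test: a bright blob shifted by (3, 5) pixels between frames.
rng = np.random.default_rng(0)
frame0 = rng.normal(0, 0.05, (40, 40))
frame1 = rng.normal(0, 0.05, (40, 40))
frame0[10:15, 10:15] += 1.0   # "target" in the reference frame
frame1[13:18, 15:20] += 1.0   # same blob after simulated tissue motion
template = frame0[8:17, 8:17]            # patch around the target
(y, x), score = ncc_displacement(template, frame1)
dy, dx = y - 8, x - 8                    # displacement of the patch origin
print(dy, dx)  # → 3 5
```

Real systems typically run a windowed search around the previous position (or a frequency-domain correlation) for speed, but the estimated displacement feeds the robot's motion compensation in the same way.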

    A new head-mounted display-based augmented reality system in neurosurgical oncology: a study on phantom

    Purpose: Benefits of minimally invasive neurosurgery mandate the development of ergonomic paradigms for neuronavigation. Augmented Reality (AR) systems can overcome the shortcomings of commercial neuronavigators. The aim of this work is to apply a novel AR system, based on a head-mounted stereoscopic video see-through display, as an aid in complex neurological lesion targeting. Effectiveness was investigated on a newly designed patient-specific head mannequin featuring an anatomically realistic brain phantom with embedded synthetically created tumors and eloquent areas. Materials and methods: A two-phase evaluation process was adopted in a simulated resection of a small tumor adjacent to Broca's area. Phase I involved nine subjects without neurosurgical training performing spatial judgment tasks. In Phase II, three surgeons assessed the effectiveness of the AR-neuronavigator in performing brain tumor targeting on a patient-specific head phantom. Results: Phase I revealed the ability of the AR scene to evoke depth perception under different visualization modalities. Phase II confirmed the potential of the AR-neuronavigator to aid determination of the optimal surgical access to the surgical target. Conclusions: The AR-neuronavigator is intuitive, easy to use, and provides three-dimensional augmented information in a perceptually correct way. The system proved to be effective in guiding skin incision, craniotomy, and lesion targeting. The preliminary results encourage a structured study to prove clinical effectiveness. Moreover, our testing platform might be used to facilitate training in brain tumor resection procedures.
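Targeting aids of this kind ultimately depend on projecting preoperative 3D lesion models into the wearer's view. As a minimal, hypothetical sketch (the actual system's calibration and stereoscopic rendering are far more involved), a pinhole-camera projection of a lesion point into one eye's image could look like:

```python
import numpy as np

def project_points(K, R, t, pts_world):
    """Project 3D points (world frame) to pixel coordinates with the
    pinhole model: x ~ K [R | t] X."""
    pts_cam = pts_world @ R.T + t          # world -> camera frame
    uvw = pts_cam @ K.T                    # apply camera intrinsics
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Hypothetical HMD-camera intrinsics and pose (illustrative values only).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                              # camera aligned with world axes
t = np.array([0.0, 0.0, 0.0])
lesion = np.array([[0.01, -0.02, 0.20]])   # point 20 cm in front of camera
uv = project_points(K, R, t, lesion)
print(uv)  # → [[360. 160.]]
```

With a video see-through display, the projected lesion outline is composited onto the live camera frame at these pixel coordinates, one projection per eye.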

    Appl Ergon

    Physical work demands and posture constraints from operating microscopes may adversely affect microsurgeon health and performance. Alternative video displays were developed to reduce posture constraints. Their effects on postures, perceived effort, and performance were compared with the microscope. Sixteen participants performed microsurgery skill tasks using both stereoscopic and non-stereoscopic microscopes and video displays. Results showed that neck angles were 9–13° more neutral and shoulder flexion was 9–10° more elevated on the video display than on the microscope. Time observed in neck extension was higher (30% vs. 17%) and neck movements were three times more frequent on the video display than on the microscopes. Ratings of perceived effort did not differ among displays, but usability ratings were better on the microscope than on the video display. Performance times on the video displays were 66–110% slower than on the microscopes. Although postures improved, further research is needed to improve task performance on video displays.

    Ultrasound-Augmented Laparoscopy

    Laparoscopic surgery is perhaps the most common minimally invasive procedure for many diseases of the abdomen. Since the laparoscopic camera provides only a surface view of the internal organs, in many procedures surgeons use laparoscopic ultrasound (LUS) to visualize deep-seated surgical targets. Conventionally, the 2D LUS image is shown on a display spatially separate from the one showing the laparoscopic video. Reasoning about the geometry of hidden targets therefore requires mentally solving the spatial alignment and resolving the modality differences, which is cognitively very challenging. Moreover, the mental representation of hidden targets acquired through such cognitive mediation may be error-prone and cause incorrect actions to be performed. To remedy this, advanced visualization strategies are required in which the US information is visualized in the context of the laparoscopic video. To this end, efficient computational methods are required to accurately align the US image coordinate system with the camera-centred coordinate system, and to render the registered image information in the context of the camera such that surgeons perceive the geometry of hidden targets accurately. In this thesis, such a visualization pipeline is described. A novel method to register US images with a camera-centric coordinate system is detailed, with an experimental investigation into its accuracy bounds. An improved method to blend US information with the surface view is also presented, with an experimental investigation into the accuracy of perception of target locations in space.
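Aligning the US image coordinate system with the camera's is, at its core, a composition of rigid transforms obtained from calibration and tracking. A minimal sketch, with hypothetical tracker poses standing in for real measurements (the thesis's registration method is more sophisticated than this):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical optical-tracker poses: camera and US probe each carry a marker.
T_tracker_cam = make_T(np.eye(3), np.array([0.0, 0.0, 0.5]))
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
T_tracker_us = make_T(Rz90, np.array([0.1, 0.0, 0.5]))

# US image frame -> camera frame: invert one chain, compose with the other.
T_cam_us = np.linalg.inv(T_tracker_cam) @ T_tracker_us

p_us = np.array([0.02, 0.03, 0.0, 1.0])   # a point on the US image plane (m)
p_cam = T_cam_us @ p_us
print(np.round(p_cam[:3], 3))
```

Once every US pixel can be mapped into camera coordinates this way, the registered slice can be rendered into the laparoscopic view with a standard camera projection.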

    Cable-driven parallel robot for transoral laser phonosurgery

    Transoral laser phonosurgery (TLP) is a common surgical procedure in otolaryngology. Currently, two techniques are commonly used: free beam and fibre delivery. For free beam delivery, in combination with laser scanning techniques, accurate laser pattern scanning can be achieved. However, a line-of-sight to the target is required. A suspension laryngoscope is adopted to create a straight working channel for the scanning laser beam, which can introduce lesions to the patient, and the manipulability and ergonomics are poor. In the fibre delivery approach, a flexible fibre transmits the laser beam, and the distal tip of the laser fibre can be manipulated by a flexible robotic tool, avoiding the line-of-sight limitation. However, the laser scanning function is lost in this approach, and the performance is inferior to that of the laser scanning technique in the free beam approach. A novel cable-driven parallel robot (CDPR), LaryngoTORS, has been developed for TLP. By using a curved laryngeal blade, a straight suspension laryngoscope is no longer necessary, which is expected to be less traumatic to the patient. Semi-autonomous free path scanning can be executed, and high precision and high repeatability of the free path can be achieved. The performance has been verified in various bench and ex vivo tests. The technical feasibility of the LaryngoTORS robot for TLP was considered and evaluated in this thesis. The LaryngoTORS robot has demonstrated the potential to offer an acceptable and feasible solution for real-world clinical applications of TLP. Furthermore, the LaryngoTORS robot can be combined with fibre-based optical biopsy techniques. Experiments with probe-based confocal laser endomicroscopy (pCLE) and hyperspectral fibre-optic sensing were performed. The LaryngoTORS robot demonstrates the potential to be utilised for fibre-based optical biopsy of the larynx.
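Path scanning with a cable-driven parallel robot rests on inverse kinematics that is particularly simple for an idealized point-mass end-effector: each commanded cable length is just the distance from its fixed anchor to the target position. A minimal planar sketch (the anchor layout and dimensions are hypothetical, not those of LaryngoTORS):

```python
import numpy as np

def cable_lengths(anchors, p):
    """Inverse kinematics of a point-mass CDPR: each cable length is the
    Euclidean distance from its fixed anchor to the end-effector at p."""
    return np.linalg.norm(anchors - p, axis=1)

# Hypothetical planar layout: four anchors at the corners of a 0.2 m square.
anchors = np.array([[0.0, 0.0],
                    [0.2, 0.0],
                    [0.0, 0.2],
                    [0.2, 0.2]])
p = np.array([0.1, 0.1])                  # end-effector at the centre
lengths = cable_lengths(anchors, p)
print(lengths)                            # all four lengths ≈ 0.1414 m
```

Scanning a free path then amounts to sampling the path into positions and streaming the corresponding cable-length setpoints to the winch motors, with tension limits and workspace constraints checked at each step.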

    Development of a handheld fiber-optic probe-based raman imaging instrumentation: raman chemlighter

    Raman systems based on handheld fiber-optic probes offer advantages in terms of smaller size and easier access to the measurement site, which are favorable for biomedical and clinical applications in complex environments. However, probes have several common drawbacks in many applications: (1) the fixed working distance requires the user to maintain a certain distance to acquire stronger Raman signals; (2) single-point measurement precludes mapping or scanning procedures; and (3) there is a lack of real-time data processing and of a straightforward method to co-register the Raman information with the respective measurement position. This thesis proposes and experimentally demonstrates various approaches to overcome these drawbacks. A handheld fiber-optic Raman probe with an autofocus unit was presented to overcome the problem arising from fixed-focus lenses, using a liquid lens as the objective lens, which allows dynamic adjustment of the focal length of the probe. Computer-vision-based positional tracking of the laser spot in brightfield images co-registers each Raman spectroscopic measurement with its spatial location, enabling fast recording of a Raman image from a large tissue sample. The visualization of the Raman image has been extended to augmented and mixed reality and combined with a 3D reconstruction method and projector-based visualization to offer an intuitive and easily understandable way of presenting the Raman image. All these advances are substantial and highly beneficial to further drive the clinical translation of Raman spectroscopy as potential image-guided instrumentation.
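The positional-tracking step described above relies on locating the laser spot in each brightfield image. A minimal sketch of one plausible approach (thresholded, intensity-weighted centroiding on a synthetic frame; not the thesis's actual implementation):

```python
import numpy as np

def laser_spot_centroid(img, thresh=0.5):
    """Locate a bright laser spot by thresholding the image and computing
    the intensity-weighted centroid of the above-threshold pixels."""
    ys, xs = np.nonzero(img > thresh)
    w = img[ys, xs]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()

# Synthetic brightfield frame with a Gaussian spot centred at (12, 20).
yy, xx = np.mgrid[0:32, 0:32]
img = np.exp(-((yy - 12) ** 2 + (xx - 20) ** 2) / 4.0)
cy, cx = laser_spot_centroid(img)
print(round(cy, 1), round(cx, 1))  # → 12.0 20.0
```

Each acquired spectrum is then tagged with the centroid from the concurrent camera frame, so that the spectra can be assembled into a spatially registered Raman image as the probe is swept over the tissue.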

    A Review of Indocyanine Green Fluorescent Imaging in Surgery

    The purpose of this paper is to give an overview of recent intraoperative surgical applications of indocyanine green (ICG) fluorescence imaging methods, the basics of the technology, and the instrumentation used. Well over 200 papers describing this technique in clinical settings are reviewed. In addition to the surgical applications, other recent medical applications of ICG are briefly examined.

    Live delivery of neurosurgical operating theater experience in virtual reality

    A system for assisting in microneurosurgical training and for delivering an interactive mixed-reality surgical experience live was developed and trialled on hospital premises. An interactive experience from the neurosurgical operating theater was presented, together with associated medical content, on the virtual reality eyewear of remote users. Details of the stereoscopic 360-degree capture, surgery imaging equipment, signal delivery, and display systems are presented, and results of the presence-experience and visual-quality questionnaires are discussed. Users reported positive scores on questionnaire topics related to the user experience achieved in the trial.