89 research outputs found

    Towards Highly-Integrated Stereovideoscopy for in vivo Surgical Robots

    When compared to traditional open surgery, laparoscopic procedures result in better patient outcomes: shorter recovery, reduced post-operative pain, and less trauma to incised tissue. Unfortunately, laparoscopic procedures require specialized training for surgeons, as these minimally-invasive procedures provide an operating environment with limited dexterity and limited vision. Advanced surgical robotics platforms can make minimally-invasive techniques safer and easier for the surgeon to complete successfully. The most common type of surgical robotics platform -- the laparoscopic robot -- accomplishes this with multi-degree-of-freedom manipulators capable of a more diverse set of movements than traditional laparoscopic instruments. These laparoscopic robots also support advanced kinematic translation techniques that let the surgeon focus on the surgical site while the robot calculates the best possible joint positions to complete any surgical motion. An important component of these systems is the endoscopic system used to transmit a live view of the surgical environment to the surgeon. Coupled with 3D high-definition endoscopic cameras, the platform as a whole, in effect, eliminates the peculiarities associated with laparoscopic procedures, allowing less-skilled surgeons to complete minimally-invasive surgical procedures quickly and accurately. A much newer approach to performing minimally-invasive surgery is the use of in vivo surgical robots -- small robots that are inserted directly into the patient through a single, small incision; once inside, an in vivo robot can perform surgery at arbitrary positions, with a much wider range of motion. While laparoscopic robots can harness traditional endoscopic video solutions, these in vivo robots require a fundamentally different video solution that is as flexible as possible and free of bulky cables or fiber optics.
This requires a miniaturized videoscopy system that incorporates an image sensor with a transceiver; because of severe size constraints, this system must be deeply embedded into the robotics platform. Here, early results are presented from the integration of a miniature stereoscopic camera into an in vivo surgical robotics platform. A 26 mm × 24 mm stereo camera was designed and manufactured. The proposed device features USB connectivity and 1280 × 720 resolution at 30 fps. Resolution testing indicates the device performs much better than similarly-priced analog cameras. Suitability of the platform for 3D computer vision tasks -- including stereo reconstruction -- is examined. The platform was also tested in a living porcine model at the University of Nebraska Medical Center. Results from this experiment suggest that while the platform performs well in controlled, static environments, further work is required to obtain usable results in true surgeries. Concluding, several ideas for improvement are presented, along with a discussion of core challenges associated with the platform. Adviser: Lance C. Pérez
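The stereo-reconstruction suitability examined above comes down to triangulation: a feature's horizontal shift (disparity) between the two rectified views, together with the focal length and baseline, gives its depth as Z = fB/d. A minimal sketch, using illustrative focal-length and baseline values rather than the actual camera's parameters:

```python
# Minimal sketch of stereo depth recovery via block matching and
# triangulation. The focal length, baseline, and image rows below are
# illustrative values, not the parameters of the 26 mm x 24 mm camera.

def sad(a, b):
    """Sum of absolute differences between two equal-length windows."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left_row, right_row, x, window=3, max_disp=8):
    """Find the horizontal shift that best matches a left-image window
    in the right image (rectified pair: search along the same row)."""
    ref = left_row[x:x + window]
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x) + 1):
        cost = sad(ref, right_row[x - d:x - d + window])
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_mm(d, focal_px=700.0, baseline_mm=10.0):
    """Triangulate: Z = f * B / d (pinhole model, rectified cameras)."""
    return focal_px * baseline_mm / d if d > 0 else float("inf")

# Synthetic rectified row pair: the feature at columns 6-8 in the left
# image appears at columns 2-4 in the right image, i.e. disparity 4.
left  = [10, 10, 10, 10, 10, 10, 50, 80, 50, 10, 10, 10]
right = [10, 10, 50, 80, 50, 10, 10, 10, 10, 10, 10, 10]
```

A dense depth map would simply repeat this search for every pixel; real systems add subpixel interpolation and consistency checks, omitted here for brevity.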

    Vision Sensors and Edge Detection

    The book Vision Sensors and Edge Detection reflects a selection of recent developments in the area of vision sensors and edge detection. There are two sections. The first presents vision sensors, with applications to panoramic vision sensors, wireless vision sensors, and automated vision sensor inspection; the second covers image processing techniques such as image measurements, image transformations, filtering, and parallel computing.

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.

    Real-time tissue viability assessment using near-infrared light

    Despite significant advances in medical imaging technologies, there currently exist no tools to effectively assist healthcare professionals during surgical procedures. In turn, procedures remain subjective and dependent on experience, resulting in avoidable failure and significant quality-of-care disparities across hospitals. Optical techniques are gaining popularity in clinical research because they are low cost, non-invasive, and portable, and can retrieve both fluorescence and endogenous contrast information, providing physiological information relative to perfusion, oxygenation, metabolism, hydration, and sub-cellular content. Near-infrared (NIR) light is especially well suited for biological tissue and does not cause tissue damage from ionizing radiation or heat. My dissertation has been focused on developing rapid imaging techniques for mapping endogenous tissue constituents to aid surgical guidance. These techniques allow, for the first time, video-rate quantitative acquisition over a large field of view (>100 cm²) in widefield and endoscopic implementations. The optical system analysis has been focused on the spatial-frequency domain for its ease of quantitative measurement over large fields of view and for its recent development in real-time acquisition: single snapshot of optical properties (SSOP) imaging. Using these methods, this dissertation provides novel improvements and implementations of SSOP, including both widefield and endoscopic instrumentation capable of video-rate acquisition of optical properties and sample surface-profile maps. In turn, these measures generate profile-corrected maps of hemoglobin concentration that are highly beneficial for assessing perfusion and overall tissue viability. Utilizing optical property maps, a novel technique for quantitative fluorescence imaging was also demonstrated, showing large improvement over standard and ratiometric methods.
To enable real-time feedback, rapid processing algorithms were designed using lookup tables that provide a 100× improvement in processing speed. Finally, these techniques were demonstrated in vivo to investigate their ability for early detection of tissue failure due to ischemia. Both pre-clinical studies show endogenous contrast imaging can provide early measures of future tissue viability. The goal of this work has been to provide the foundation for real-time imaging systems that provide tissue-constituent quantification for tissue viability assessments.
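The lookup-table speedup mentioned above can be sketched as follows: instead of inverting a light-transport model per pixel, a table of reflectance-to-optical-property pairs is precomputed once, and each pixel is answered by a nearest-neighbour search. The forward model below is a toy monotonic function chosen for illustration; it is not the diffusion model used in spatial-frequency-domain imaging, and the grid ranges are assumed values.

```python
import math

# Toy lookup-table (LUT) inversion, illustrating the SSOP speedup idea.
# toy_reflectance is a stand-in forward model, NOT the real SFDI model.

def toy_reflectance(mu_a, mu_s, freq):
    """Stand-in forward model: diffuse reflectance falls with absorption
    (mu_a) and spatial frequency, and rises with scattering (mu_s)."""
    return math.exp(-mu_a * (1.0 + 10.0 * freq)) * mu_s / (mu_s + 1.0)

def build_lut(freqs=(0.0, 0.2)):
    """Tabulate (R_dc, R_ac) -> (mu_a, mu_s) on a coarse grid (1/mm)."""
    return [((toy_reflectance(a, s, freqs[0]),
              toy_reflectance(a, s, freqs[1])), (a, s))
            for a in (0.005 * i for i in range(1, 21))
            for s in (0.5 * j for j in range(1, 21))]

def lookup(lut, r_dc, r_ac):
    """Nearest neighbour in reflectance space. O(grid) here; a real-time
    system would use a dense regular table indexed in O(1) per pixel."""
    return min(lut, key=lambda e: (e[0][0] - r_dc) ** 2 +
                                  (e[0][1] - r_ac) ** 2)[1]

lut = build_lut()
# Synthesize a two-frequency measurement from known properties,
# then invert it through the table.
r_dc, r_ac = (toy_reflectance(0.05, 5.0, f) for f in (0.0, 0.2))
mu_a, mu_s = lookup(lut, r_dc, r_ac)
```

The speedup comes from replacing a per-pixel iterative model inversion with a single table access, which is what makes video-rate processing feasible.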

    Platforms for prototyping minimally invasive instruments

    The introduction of new technologies in medicine is often difficult because there are many stages to go through, from the idea to approval by ethical committees and mass production. This work covers the first steps of the development of a medical device, dealing with tools that can help reduce the time needed to produce a laboratory prototype. These tools can involve electronics and software for the creation of a “universal” hardware platform that can be used for many robotic applications, adapting only a few components for the specific scenario. The platform is created by setting up a traditional computer with an operating system and acquisition channels that open the system toward the real environment. On this platform, algorithms can be implemented rapidly, allowing the designer to assess the feasibility of an idea. This approach lets the designer concentrate on the application rather than on selecting the appropriate hardware electronics every time a new project starts. The first part gives an overview of existing instruments for minimally invasive interventions, available as commercial or research products. An introduction to hardware electronics is presented, with the requirements and specific characteristics needed for a robotic application. The second part focuses on specific projects in MIS. The first project concerns the study and development of a lightweight hand-held robotic instrument for laparoscopy, motivated by the lack of dexterous hand-held laparoscopic instruments. The second project concerns the study and presentation of a prototype robotic endoscope with enhanced resolution. The third project concerns the development of a system able to detect the inspiration and expiration phases of breathing, with the aim of evaluating the weariness of the surgeon, since breathing can be related to fatigue.

    Bio-Inspired Multi-Spectral and Polarization Imaging Sensors for Image-Guided Surgery

    Image-guided surgery (IGS) can enhance cancer treatment by decreasing, and ideally eliminating, positive tumor margins and iatrogenic damage to healthy tissue. Current state-of-the-art near-infrared fluorescence imaging systems are bulky, costly, lack sensitivity under surgical illumination, and lack co-registration accuracy between multimodal images. As a result, an overwhelming majority of physicians still rely on their unaided eyes and palpation as the primary sensing modalities to distinguish cancerous from healthy tissue. In my thesis, I have addressed these challenges in IGS by mimicking the visual systems of several animals to construct low-power, compact, and highly sensitive multi-spectral and color-polarization sensors. I have realized single-chip multi-spectral imagers with 1000-fold higher sensitivity and 7-fold better spatial co-registration accuracy compared to clinical imaging systems in current use, by monolithically integrating spectral tapetal filters and polarization filters with an array of vertically stacked photodetectors. These imaging sensors yield the unique capability of simultaneously imaging color, polarization, and multiple fluorophores for near-infrared fluorescence imaging. Preclinical and clinical data demonstrate seamless integration of these technologies into the surgical workflow while providing surgeons with real-time information on the location of cancerous tissue and sentinel lymph nodes, respectively. Due to their low cost, these bio-inspired sensors will provide resource-limited hospitals with much-needed technology to enable more accurate value-based health care.

    An Improved NMS-Based Adaptive Edge Detection Method and Its FPGA Implementation

    To improve the processing speed and accuracy of edge detection, an adaptive edge detection method based on improved NMS (non-maximum suppression) was proposed in this paper. In the method, the gradient image was computed by four directional Sobel operators and then processed using NMS. By defining a power map function, the element values of the gradient-image histogram were mapped into a wider value range. The threshold that maximizes the between-class variance of the mapped histogram was then used as the adaptive threshold for edge detection. Finally, for convenience in engineering applications, the proposed method was realized in an FPGA (Field-Programmable Gate Array). The experimental results demonstrated that the proposed method was effective in edge detection and suitable for real-time application.
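The pipeline described above can be sketched end-to-end in a few dozen lines. This is a simplified stand-in: the textbook horizontal/vertical Sobel pair replaces the paper's four directional operators, and the maximal between-class-variance threshold (Otsu's method) is applied without the power-map histogram remapping.

```python
import math

# Gradient -> non-maximum suppression -> between-class-variance
# threshold, on a plain list-of-lists grayscale image.

def gradients(img):
    """Sobel gradient magnitude and direction for a 2D grayscale image."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = math.hypot(gx, gy)
            ang[y][x] = math.atan2(gy, gx)
    return mag, ang

def nms(mag, ang):
    """Zero out pixels that are not maximal along their gradient
    direction (quantized to 0/45/90/135 degrees)."""
    h, w = len(mag), len(mag[0])
    offs = [(0, 1), (1, 1), (1, 0), (1, -1)]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dy, dx = offs[int(round(ang[y][x] / (math.pi / 4))) % 4]
            if (mag[y][x] >= mag[y + dy][x + dx] and
                    mag[y][x] >= mag[y - dy][x - dx]):
                out[y][x] = mag[y][x]
    return out

def otsu(vals, nbins=32):
    """Threshold maximizing the between-class variance of a coarse
    histogram of the values (Otsu's method)."""
    lo, hi = min(vals), max(vals)
    if hi == lo:
        return lo
    hist = [0] * nbins
    for v in vals:
        hist[min(int((v - lo) / (hi - lo) * nbins), nbins - 1)] += 1
    total = len(vals)
    sum_all = float(sum(i * n for i, n in enumerate(hist)))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(nbins - 1):
        w0 += hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        sum0 += t * hist[t]
        var = w0 * w1 * (sum0 / w0 - (sum_all - sum0) / w1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return lo + (best_t + 1) / nbins * (hi - lo)

# A synthetic 8x8 image with a vertical step edge between columns 3 and 4.
img = [[0] * 4 + [100] * 4 for _ in range(8)]
mag, ang = gradients(img)
edges = nms(mag, ang)
thresh = otsu([v for row in edges for v in row])
edge_cols = {x for x in range(8) if edges[4][x] > thresh}
```

Every stage here maps naturally to fixed-point pipeline stages, which is why the method lends itself to the FPGA realization described in the abstract.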

    Programmable Spectral Source and Design Tool for 3D Imaging Using Complementary Bandpass Filters

    An endoscopic illumination system for illuminating a subject for stereoscopic image capture includes: a light source which outputs light; a first complementary multiband bandpass filter (CMBF) and a second CMBF, situated in first and second light paths, respectively, which filter the light incident upon them to output filtered light; and a camera which captures video images of the subject and generates corresponding video information, the camera receiving light reflected from the subject and passing through a pupil CMBF pair and a detection lens. The pupil CMBF pair includes a first pupil CMBF, identical to the first CMBF, and a second pupil CMBF, identical to the second CMBF; the detection lens includes one unpartitioned section that covers both the first pupil CMBF and the second pupil CMBF.
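The key property of the CMBF pair is complementarity: the two light paths transmit interleaved, non-overlapping wavelength bands, so each pupil forms its image from light the other never sees. A toy illustration with an assumed six-band split of the visible range (the abstract does not specify the real band edges):

```python
# Illustrative model of complementary multiband bandpass filters
# (CMBFs) with ideal passbands. The six-band 50 nm split is an
# assumption for illustration, not the actual filter design.

BANDS = [(400 + 50 * i, 400 + 50 * (i + 1)) for i in range(6)]  # nm
LEFT_CMBF = [b for i, b in enumerate(BANDS) if i % 2 == 0]
RIGHT_CMBF = [b for i, b in enumerate(BANDS) if i % 2 == 1]

def passes(filt, wavelength_nm):
    """True if the filter transmits this wavelength (ideal passbands)."""
    return any(lo <= wavelength_nm < hi for lo, hi in filt)

# Complementarity: every visible wavelength is transmitted by exactly
# one of the two filters -- never both, never neither.
for wl in range(400, 700):
    assert passes(LEFT_CMBF, wl) != passes(RIGHT_CMBF, wl)
```

This disjointness is what lets a single camera behind one unpartitioned lens separate the two stereo viewpoints purely spectrally, without a mechanical partition.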

    Accurate depth from defocus estimation with video-rate implementation

    The science of measuring depth from images at video rate using 'defocus' has been investigated. The method requires two differently focussed images acquired from a single viewpoint using a single camera. The relative blur between the images is used to determine the in-focus axial position of each pixel, and hence depth. The depth estimation algorithm of Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as Rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed by the new model are largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design. The software implementation required five 2D convolutions to be processed in parallel; these were implemented on an FPGA using a two-channel, five-stage pipelined architecture, although the precision of the filter coefficients and variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggest that the pipelined processor provides depth estimates comparable in accuracy to the full-precision Matlab output, and generates depth maps of 400 × 400 pixels in 13.06 ms, faster than video rate. The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency-domain approach based on phase correlation was employed to measure the radial shifts due to magnification and to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates.
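The underlying geometry can be illustrated with a thin-lens model: two images recorded at different sensor positions blur each scene point by different amounts, and the pair of blur values pins down the depth. The brute-force depth sweep below is for illustration only; the work described here recovers depth with Rational filters rather than a sweep, and the lens parameters are assumed values.

```python
# Depth-from-defocus geometry sketch under a thin-lens model. The
# focal length, aperture, and sensor positions are assumed values,
# and the candidate-sweep inversion is illustrative only.

def blur_diameter(u, f=0.05, aperture=0.02, sensor=0.055):
    """Blur-circle diameter (m) for an object at distance u (m), for a
    thin lens of focal length f with the sensor at 'sensor' metres."""
    v = f * u / (u - f)              # thin lens: 1/f = 1/u + 1/v
    return aperture * abs(v - sensor) / v

def estimate_depth(b_near, b_far, s_near=0.055, s_far=0.060,
                   candidates=None):
    """Pick the candidate depth whose predicted blur pair best matches
    the measured pair (least squares over a coarse depth sweep)."""
    if candidates is None:
        candidates = [0.2 + 0.01 * i for i in range(300)]  # 0.2-3.2 m
    def err(u):
        return ((blur_diameter(u, sensor=s_near) - b_near) ** 2 +
                (blur_diameter(u, sensor=s_far) - b_far) ** 2)
    return min(candidates, key=err)

# Simulate the two measurements for an object at 1.0 m, then invert.
true_u = 1.0
b1 = blur_diameter(true_u, sensor=0.055)
b2 = blur_diameter(true_u, sensor=0.060)
```

The Rational-filter approach replaces this per-pixel search with a fixed set of convolutions whose ratio varies monotonically with depth, which is what makes the video-rate FPGA pipeline possible.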

    Medical Robotics for use in MRI Guided Endoscopy

    Interventional Magnetic Resonance Imaging (MRI) is a developing field that aims to provide intra-operative MRI to a clinician to guide diagnostic or therapeutic medical procedures. MRI provides excellent soft-tissue contrast at sub-millimetre resolution in both 2D and 3D without the need for ionizing radiation. Images can be acquired in near real-time for guidance purposes. Operating in the MR environment brings challenges due to the high static magnetic field, switching magnetic field gradients, and RF excitation pulses. In addition, high-field closed-bore scanners have spatial constraints that severely limit access to the patient. This thesis presents a system for MRI-guided Endoscopic Retrograde Cholangio-Pancreatography (ERCP). This includes a remote actuation system that enables an MRI-compatible endoscope to be controlled whilst the patient is inside the MRI scanner, overcoming the spatial and procedural constraints imposed by the closed scanner bore. The modular system utilises non-magnetic ultrasonic motors and is designed for image-guided, user-in-the-loop control. A novel miniature MRI-compatible clutch has been incorporated into the design to reduce the need for multiple parallel motors. The actuation system is MRI compatible and does not degrade the MR images below acceptable levels. User testing showed that the actuation system requires some degree of training but enables completion of a simulated ERCP procedure with no loss of performance. This was demonstrated using a tailored ERCP simulator and kinematic assessment tool, which was validated with users from a range of skill levels to ensure that it provides an objective measurement of endoscopic skill. Methods of tracking the endoscope in real time using the MRI scanner are explored and presented here. Use of the MRI-guided ERCP system was shown to improve the operator's ability to position the endoscope in an experimental environment compared with a standard fluoroscopy-guided system.