Optical Coherence Tomography Ophthalmic Surgical Guidance
Optical coherence tomography (OCT) performs high-resolution cross-sectional and volumetric tissue imaging in situ through the combination of confocal gating, coherence gating, and polarization gating. Because it is noninvasive, OCT has been used in multiple clinical applications, such as tissue pathology assessment and interventional procedure guidance. Moreover, OCT can perform functional measurements, such as phase-sensitive measurement of blood flow and polarization-sensitive measurement of tissue birefringence. These features have made OCT one of the most widely used imaging modalities in ophthalmology. In this thesis, we present several novel OCT methods developed for microsurgery guidance and OCT image analysis. The thesis consists of five main parts, as follows.
First, we present a BC-mode OCT image visualization method for microsurgery guidance, in which multiple sparsely sampled B-scans are combined into a single cross-sectional image with enhanced instrument and tissue-layer visibility and reduced shadowing artifacts. The performance of the proposed method is demonstrated by guiding a 30-gauge needle into an ex-vivo human cornea.
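The abstract does not specify how the sparse B-scans are fused; as a hedged illustration of the general idea (not the thesis's BC-mode algorithm), a per-pixel maximum over adjacent slices lets tissue that is visible in one slice fill in regions shadowed (e.g., by a metallic instrument) in a neighboring slice:

```python
# Illustrative sketch only: compounding a few adjacent, sparsely sampled
# B-scans into one cross-section by taking the per-pixel maximum, so that
# structure visible in any slice survives shadowing in the others.
import numpy as np

def compound_bscans(bscans: np.ndarray) -> np.ndarray:
    """bscans: (n_slices, depth, width) stack of log-scale B-scans."""
    return bscans.max(axis=0)

# Toy example: slice 0 carries a dark shadow column that slice 1 does not.
stack = np.full((2, 4, 4), 0.8)
stack[0, :, 2] = 0.1            # shadowing artifact in slice 0 only
composite = compound_bscans(stack)
```

In the composite, the shadowed column recovers the tissue signal carried by the neighboring slice; a mean or weighted fusion would trade shadow suppression against speckle averaging.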
Second, we present a robotic subretinal injection method guided by microscope-integrated OCT. A workflow is designed for accurate and stable robotic needle navigation. The performance of the proposed method is demonstrated on ex-vivo porcine eye subretinal injection.
Third, we present an optical flow OCT technique that quantifies velocity fields accurately. The accuracy of the proposed method is verified through phantom flow experiments, using a diluted milk-powder solution as the scattering medium, for both advective and turbulent flow.
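The estimator itself is not detailed here; a standard Lucas-Kanade least-squares step on two frames conveys the generic principle of recovering a velocity from intensity data under brightness constancy (this is a textbook sketch, not the thesis's optical flow OCT method):

```python
# Minimal Lucas-Kanade sketch: estimate a single (vx, vy) for a patch from
# two frames, assuming brightness constancy: Ix*vx + Iy*vy + It = 0.
import numpy as np

def lucas_kanade_patch(f0, f1):
    # Spatial gradients from the first frame; temporal difference between frames.
    iy, ix = np.gradient(f0)           # np.gradient returns axis-0 (y) first
    it = f1 - f0
    a = np.stack([ix.ravel(), iy.ravel()], axis=1)
    b = -it.ravel()
    # Least-squares solution of A @ [vx, vy] = b over the whole patch.
    v, *_ = np.linalg.lstsq(a, b, rcond=None)
    return v                            # (vx, vy) in pixels/frame

# Synthetic check: a smooth blob translated by a known sub-pixel shift.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
f0, f1 = blob(32, 32), blob(32.4, 32.0)   # shift of +0.4 px in x
vx, vy = lucas_kanade_patch(f0, f1)
```

Solving the over-determined system over a patch is what regularizes the otherwise under-constrained per-pixel flow equation.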
Fourth, we present a wrapped Gaussian mixture model to stabilize the phase of swept-source OCT systems. A closed-form iterative solution is derived using the expectation-maximization algorithm. The performance of the proposed method is demonstrated through ex-vivo, in-vivo, and flow phantom experiments. The results show its robustness across application scenarios.
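The thesis derives its own closed-form updates; as a hedged sketch of the underlying machinery, EM for a mixture of wrapped Gaussians on the circle can be approximated by summing a few wraps of the normal density and taking circular means via complex exponentials (all names and initializations below are illustrative):

```python
# Simplified EM for a mixture of wrapped Gaussians on [-pi, pi).
# Illustration only: the wrapped density is truncated to a few wraps, and the
# M-step uses the circular mean; the thesis's exact updates may differ.
import numpy as np

def wrapped_gauss_pdf(x, mu, var, n_wraps=2):
    # Approximate the wrapped normal density by summing 2*n_wraps+1 wraps.
    ks = np.arange(-n_wraps, n_wraps + 1)
    d = x[:, None] - mu + 2 * np.pi * ks[None, :]
    return np.exp(-d ** 2 / (2 * var)).sum(axis=1) / np.sqrt(2 * np.pi * var)

def em_wrapped_mixture(phase, n_comp=2, n_iter=50):
    mu = np.linspace(-np.pi, np.pi, n_comp, endpoint=False)  # spread init
    var = np.full(n_comp, 0.5)
    w = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each sample.
        dens = np.stack([w[c] * wrapped_gauss_pdf(phase, mu[c], var[c])
                         for c in range(n_comp)], axis=1)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: circular mean via complex exponentials keeps mu on the circle.
        for c in range(n_comp):
            z = (r[:, c] * np.exp(1j * phase)).sum()
            mu[c] = np.angle(z)
            d = np.angle(np.exp(1j * (phase - mu[c])))   # wrapped residual
            var[c] = (r[:, c] * d ** 2).sum() / r[:, c].sum()
            w[c] = r[:, c].mean()
    return w, mu, var
```

Working with wrapped residuals, rather than raw differences, is what prevents phase jumps across the ±π boundary from corrupting the mean and variance estimates.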
Fifth, we present a numerical landmark localization algorithm based on a convolutional neural network and a conditional random field. The robustness of the proposed method is demonstrated through ex-vivo porcine intestine landmark localization experiments.
Hierarchical, informed and robust machine learning for surgical tool management
This thesis focuses on the development of a computer vision and deep learning based system for the intelligent management of surgical tools. The work included the development of a new dataset, the creation of state-of-the-art techniques to cope with volume, variety, and vision problems, and the design or adaptation of algorithms to address specific surgical tool recognition issues. The system was trained to cope with a wide variety of tools with very subtle differences in shape, and was designed to work with high volumes as well as varying illuminations and backgrounds. The methodology adopted in this thesis included the creation of a surgical tool image dataset and the development of a surgical tool attribute matrix, or knowledge base. This was significant because there are no large-scale publicly available surgical tool datasets, nor are there established annotations or datasets of textual descriptions of surgical tools that can be used for machine learning. The work resulted in a new hierarchical architecture for multi-level predictions at the surgical speciality, pack, set, and tool level. Additional work evaluated the use of synthetic data to improve the robustness of the CNN, and the infusion of knowledge to improve predictive performance.
Brain Computations and Connectivity [2nd edition]
This is an open access title available under the terms of a CC BY-NC-ND 4.0 International licence. It is free to read on the Oxford Academic platform and offered as a free PDF download from OUP and selected open access locations.
Brain Computations and Connectivity is about how the brain works. In order to understand this, it is essential to know what is computed by different brain systems; and how the computations are performed.
The aim of this book is to elucidate what is computed in different brain systems; and to describe current biologically plausible computational approaches and models of how each of these brain systems computes.
Understanding the brain in this way has enormous potential for understanding ourselves better in health and in disease. Potential applications of this understanding are to the treatment of the brain in disease; and to artificial intelligence which will benefit from knowledge of how the brain performs many of its extraordinarily impressive functions.
This book is pioneering in taking this approach to brain function: considering what is computed by many of our brain systems, and how it is computed. It updates, with much new evidence including the connectivity of the human brain, the earlier book Rolls (2021) Brain Computations: What and How, Oxford University Press.
Brain Computations and Connectivity will be of interest to all scientists interested in brain function and how the brain works, whether they are from neuroscience; from medical sciences including neurology and psychiatry; from computational science including machine learning and artificial intelligence; or from areas such as theoretical physics.
Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology for performing complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement along complex trajectories, pose estimation, and difficulty with depth perception in two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. Results on robot end-effector collisions and reduced occlusion remain promising within the scope of our research, supporting the case for the surgical clearance of ever-expanding AR technology in the future.
Cyber-Human Systems, Space Technologies, and Threats
CYBER-HUMAN SYSTEMS, SPACE TECHNOLOGIES, AND THREATS is our eighth textbook in a series covering the world of UASs / CUAS / UUVs / SPACE. Other textbooks in our series are Space Systems Emerging Technologies and Operations; Drone Delivery of CBNRECy – DEW Weapons: Emerging Threats of Mini-Weapons of Mass Destruction and Disruption (WMDD); Disruptive Technologies with Applications in Airline, Marine, and Defense Industries; Unmanned Vehicle Systems & Operations on Air, Sea, Land; Counter Unmanned Aircraft Systems Technologies and Operations; Unmanned Aircraft Systems in the Cyber Domain: Protecting USA's Advanced Air Assets, 2nd edition; and Unmanned Aircraft Systems (UAS) in the Cyber Domain: Protecting USA's Advanced Air Assets, 1st edition. Our previous seven titles have received considerable global recognition in the field. (Nichols & Carter, 2022) (Nichols, et al., 2021) (Nichols R. K., et al., 2020) (Nichols R., et al., 2020) (Nichols R., et al., 2019) (Nichols R. K., 2018) (Nichols R. K., et al., 2022)
Efficient Semantic Segmentation for Resource-Constrained Applications with Lightweight Neural Networks
This thesis focuses on developing lightweight semantic segmentation models tailored for resource-constrained applications, effectively balancing accuracy and computational efficiency. It introduces several novel concepts, including knowledge sharing, dense bottleneck, and feature re-usability, which enhance the feature hierarchy by capturing fine-grained details, long-range dependencies, and diverse geometrical objects within the scene. To achieve precise object localization and improved semantic representations in real-time environments, the thesis introduces multi-stage feature aggregation, feature scaling, and hybrid-path attention methods.
Semiautonomous Robotic Manipulator for Minimally Invasive Aortic Valve Replacement
Aortic valve surgery is the preferred procedure for replacing a damaged valve with an artificial one. The ValveTech robotic platform comprises a flexible articulated manipulator and a surgical interface supporting the effective delivery of an artificial valve by teleoperation and endoscopic vision. This article presents our recent work on force-perceptive, safe, semiautonomous navigation of the ValveTech platform prior to valve implantation. First, we present a force observer that transfers forces from the manipulator body and tip to a haptic interface. Second, we demonstrate how hybrid forward/inverse mechanics, together with endoscopic visual servoing, lead to autonomous valve positioning. Benchtop experiments and an artificial phantom quantify the performance of the developed robot controller and navigator. Valves can be autonomously delivered with a 2.0±0.5 mm position error and a minimal misalignment of 3.4±0.9°. The hybrid force/shape observer (FSO) algorithm was able to predict distributed external forces on the articulated manipulator body with an average error of 0.09 N. FSO can also estimate loads on the tip with an average accuracy of 3.3%. The presented system can lead to better patient care, delivery outcome, and surgeon comfort during aortic valve surgery, without requiring sensorization of the robot tip, thereby obviating miniaturization constraints.
Deep Learning Guided Autonomous Surgery: Guiding Small Needles into Sub-Millimeter Scale Blood Vessels
We propose a general strategy for autonomous guidance and insertion of a needle into a retinal blood vessel. The main challenges underpinning this task are the accurate placement of the needle tip on the target vein and a careful needle insertion maneuver that avoids double-puncturing the vein, all while dealing with challenging kinematic constraints and depth-estimation uncertainty. Following how surgeons perform this task purely from visual feedback, we develop a system that relies solely on monocular visual cues by combining data-driven kinematic and contact estimation, visual servoing, and model-based optimal control. By relying on known kinematic models as well as deep-learning-based perception modules, the system can localize the surgical needle tip and detect needle-tissue interactions and venipuncture events. The outputs of these perception modules are then combined with a motion-planning framework that uses visual servoing and optimal control to cannulate the target vein while respecting kinematic constraints that ensure the safety of the procedure. We demonstrate that we can reliably and consistently perform needle insertion in the domain of retinal surgery, specifically retinal vein cannulation. Using cadaveric pig eyes, we demonstrate that our system can navigate to target veins with 22 µm XY accuracy and perform the entire procedure in under 35 seconds on average; all 24 trials performed on 4 pig eyes were successful. A preliminary comparison study against a human operator shows that our system is consistently more accurate and safer, especially during safety-critical needle-tissue interactions. To the best of the authors' knowledge, this work is the first demonstration of autonomous retinal vein cannulation in a clinically relevant setting using animal tissues.
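The paper's controller combines visual servoing with optimal control and safety constraints; as a hedged, generic illustration of the servoing layer alone (names and limits below are hypothetical, not the paper's), a proportional image-space law with a per-step motion cap looks like:

```python
# Generic image-based visual-servoing step (illustration only): drive the
# needle tip's image position toward the target vein with a proportional law,
# then clip the command to a per-step limit standing in for safety/kinematic
# constraints.
import numpy as np

def servo_step(tip_px, target_px, gain=0.5, max_step=5.0):
    error = np.asarray(target_px, float) - np.asarray(tip_px, float)
    step = gain * error
    n = np.linalg.norm(step)
    if n > max_step:                   # cap per-step motion for safety
        step *= max_step / n
    return step                        # commanded image-space motion (px)

# Toy run: the tip converges to the target over repeated steps.
tip = np.array([100.0, 80.0])
target = np.array([130.0, 80.0])
for _ in range(20):
    tip = tip + servo_step(tip, target)
```

In the actual system, the image-space command would be mapped through the robot's kinematics, which is where the paper's optimal-control formulation and constraint handling enter.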
Control and Estimation Methods Towards Safe Robot-assisted Eye Surgery
Vitreoretinal surgery is among the most delicate surgical tasks, in which physiological hand tremor may severely diminish surgeon performance and put the eye at high risk of injury. Unerring targeting accuracy is required to perform precise operations on micro-scale tissues. Tool-tip-to-tissue interaction forces are usually below human tactile perception, which may result in the exertion of excessive force on the retinal tissue, leading to irreversible damage. Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from robots relies heavily on accurate sensing and intelligent control of important surgical states and situations (e.g., instrument tip position measurement and control of interaction forces). This dissertation provides novel control and state estimation methods to improve safety during robot-assisted eye surgery.
The integration of robotics into retinal microsurgery reduces the surgeon's perception of tool-to-tissue forces at the sclera. This blunting of human tactile sensory input, due to the inflexible inertia of the robot, is a potential iatrogenic risk during robotic eye surgery. To address this issue, a sensorized surgical instrument equipped with Fiber Bragg Grating (FBG) sensors, capable of measuring scleral forces and the instrument's insertion depth into the eye, is integrated with the Steady-Hand Eye Robot (SHER). An adaptive control scheme is then customized and implemented on the robot, intended to autonomously mitigate the risk of unsafe scleral forces and excessive instrument insertion. Various preliminary and multi-user clinician studies are then conducted to evaluate the effectiveness of the control method during mock retinal surgery procedures.
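The dissertation derives its own adaptive control law; the safety intent can be conveyed by a toy velocity-scaling rule (thresholds and names below are illustrative, not the dissertation's) that attenuates commanded motion as the measured scleral force approaches a preset limit:

```python
# Toy safety layer (illustration only): smoothly scale the commanded tool
# velocity down as the measured scleral force approaches a safety threshold,
# and halt motion at the limit.
def scale_velocity(v_cmd: float, f_scl: float,
                   f_safe: float = 0.12, f_warn: float = 0.08) -> float:
    """v_cmd: commanded tool velocity; f_scl: measured scleral force (N)."""
    if f_scl >= f_safe:
        return 0.0                     # at or above the limit: halt motion
    if f_scl <= f_warn:
        return v_cmd                   # well below the limit: pass through
    # Linear roll-off between the warning and safety thresholds.
    return v_cmd * (f_safe - f_scl) / (f_safe - f_warn)
```

A real adaptive controller would update its gains online from the FBG force measurements rather than use fixed thresholds; this sketch only shows the force-dependent attenuation idea.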
In addition, owing to the inherent flexibility and resulting deflection of eye surgical instruments, as well as the need for targeting accuracy, we have developed a method to enhance the estimation of the deflected instrument's tip position. Using an iterative method and microscope data, we develop a calibration- and registration-independent framework that provides online estimates of the instrument stiffness (least squares and adaptive). These estimates are then combined with a state-space model for tip position evolution obtained from the forward kinematics (FWK) of the robot and FBG sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve instrument tip position estimation during robotic surgery. The entire framework is independent of camera-to-robot coordinate frame registration and is evaluated in various phantom experiments to demonstrate its effectiveness.
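The fusion idea behind this kind of framework can be sketched with a minimal one-dimensional Kalman filter (a didactic stand-in, not the dissertation's full state-space model): predict the tip position from a kinematic increment, then correct it with a noisy sensor-derived measurement:

```python
# Minimal 1-D Kalman filter sketch of predict-from-kinematics /
# correct-from-measurement fusion. All noise parameters are illustrative.
import numpy as np

def kf_fuse(x0, p0, fwk_deltas, meas, q=1e-4, r=1e-2):
    """x0/p0: initial state and variance; fwk_deltas: per-step motion from
    forward kinematics; meas: per-step position measurements."""
    x, p, out = x0, p0, []
    for dx, z in zip(fwk_deltas, meas):
        x, p = x + dx, p + q                   # predict via kinematic increment
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p    # correct with measurement
        out.append(x)
    return np.array(out)

# Toy run: true tip moves 0.1 per step; measurements carry Gaussian noise.
rng = np.random.default_rng(0)
true = 0.1 * np.arange(1, 51)
est = kf_fuse(0.0, 1.0, np.full(50, 0.1), true + rng.normal(0, 0.1, 50))
```

The process-noise variance `q` encodes trust in the kinematic model (here, the robot's FWK), while `r` encodes measurement noise; the filter's estimate tracks the true position more tightly than the raw measurements.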