42 research outputs found

    Optical coherence tomography-based consensus definition for lamellar macular hole.

    Background: A consensus on an optical coherence tomography definition of lamellar macular hole (LMH) and similar conditions is needed.
    Methods: The panel reviewed relevant peer-reviewed literature to reach an accord on the LMH definition and to differentiate LMH from other similar conditions.
    Results: The panel reached a consensus on the definition of three clinical entities: LMH, epiretinal membrane (ERM) foveoschisis and macular pseudohole (MPH). The LMH definition is based on three mandatory criteria and three optional anatomical features. The three mandatory criteria are the presence of an irregular foveal contour, the presence of a foveal cavity with undermined edges and the apparent loss of foveal tissue. Optional anatomical features include the presence of epiretinal proliferation, the presence of a central foveal bump and the disruption of the ellipsoid zone. The ERM foveoschisis definition is based on two mandatory criteria: the presence of ERM and the presence of schisis at the level of Henle's fibre layer. Three optional anatomical features can also be present: the presence of microcystoid spaces in the inner nuclear layer (INL), an increase of retinal thickness and the presence of retinal wrinkling. The MPH definition is based on three mandatory criteria and two optional anatomical features. Mandatory criteria include the presence of a foveal-sparing ERM, the presence of a steepened foveal profile and an increased central retinal thickness. Optional anatomical features are the presence of microcystoid spaces in the INL and a normal retinal thickness.
    Conclusions: The use of the proposed definitions may provide a uniform language for clinicians and future research.

    Doctor of Philosophy

    In this dissertation, we present methods for intuitive telemanipulation of manipulators that use piezoelectric stick-slip actuators (PSSAs). Commercial micro/nano-manipulators, which utilize PSSAs to achieve high precision over a large workspace, are typically controlled by a human operator at the joint level, leading to unintuitive and time-consuming telemanipulation. Prior work has considered the use of computer-vision feedback to close a control loop for improved performance, but computer-vision feedback is not a viable option for many end users. We discuss how open-loop models of the micro/nano-manipulator can be used to achieve desired end-effector movements, and we explain the process of obtaining open-loop models. We propose a rate-control telemanipulation method that utilizes the obtained model, and we experimentally quantify the effectiveness of the method using a common commercial manipulator (the Kleindiek MM3A). The utility of open-loop control methods for PSSAs with a human in the loop depends directly on the accuracy of the open-loop models of the manipulator. Prior research has shown that modeling of piezoelectric actuators is not a trivial task, as they are known to suffer from nonlinearities that degrade their performance. We study the effect of static (non-inertial) loads on a prismatic and a rotary PSSA, and obtain a model relating the step size of the actuator to the load. The actuator-specific parameters of the model are calibrated by taking measurements in specific configurations of the manipulator. Results comparing the obtained model to experimental data are presented. PSSAs have properties that make them desirable over traditional DC-motor actuators for use in retinal surgery. We present a telemanipulation system for retinal surgery that uses a full range of existing disposable instruments.
The system uses a PSSA-based manipulator that is compact and light enough that it could reasonably be made head-mounted to passively compensate for head movements. Two mechanisms are presented that enable the system to use existing disposable actuated instruments, and an instrument adapter enables quick exchange of instruments during surgery. A custom stylus for a haptic interface enables intuitive and ergonomic telemanipulation of actuated instruments. Experimental results with a force-sensitive phantom eye show that telemanipulated surgery results in reduced forces on the retina compared to manual surgery, and that training with the system results in improved performance. Finally, we evaluate operator efficiency with different haptic-interface kinematics for telemanipulated retinal surgery. Surgical procedures of the retina require precise manipulation of instruments inserted through trocars in the sclera. Telemanipulated robotic systems have been developed to improve retinal surgery, but there is no unique mapping of the motions of the surgeon's hand to the lower-dimensional motions of the instrument through the trocar. We study operator performance during a precision positioning task on a force-sensing phantom retina, reminiscent of telemanipulated retinal surgery, with three common haptic-interface kinematics implemented in software on a PHANTOM Premium 6DOF haptic interface. Results from a study with 12 human subjects show that overall performance is best with the kinematics that represent a compact and inexpensive option, and that subjects' subjective preference agrees with the objective performance results.
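    The rate-control mapping described above admits a simple sketch. The following is a hypothetical illustration, not the dissertation's implementation; the function name, deadband, and gain values are assumptions:

```python
def rate_control_velocity(master_offset, deadband=0.01, gain=0.5):
    """Map the haptic device's displacement from its neutral pose to a
    commanded end-effector velocity (rate control). Values illustrative."""
    if abs(master_offset) <= deadband:
        return 0.0  # small displacements command no motion, preventing drift
    sign = 1.0 if master_offset > 0 else -1.0
    # velocity grows with displacement beyond the deadband
    return gain * (abs(master_offset) - deadband) * sign
```

    In rate control, holding the stylus off-center produces a steady end-effector velocity, which suits the small, slow motions typical of micro/nano-manipulation over a large workspace.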

    The benefits of haptic feedback in robot assisted surgery and their moderators: a meta-analysis

    Robot assisted surgery (RAS) provides medical practitioners with valuable tools, decreasing strain during surgery and leading to better patient outcomes. While the loss of haptic sensation is a commonly cited disadvantage of RAS, new systems aim to address this problem by providing artificial haptic feedback. N = 56 papers that compared robotic surgery systems with and without haptic feedback were analyzed to quantify the performance benefits of restoring the haptic modality. Additionally, this study identifies factors moderating the effect of restoring haptic sensation. Overall results showed haptic feedback was effective in reducing average forces (Hedges' g = 0.83) and peak forces (Hedges' g = 0.69) applied during surgery, as well as reducing the completion time (Hedges' g = 0.83). Haptic feedback has also been found to lead to higher accuracy (Hedges' g = 1.50) and success rates (Hedges' g = 0.80) during surgical tasks. Effect sizes on several measures varied between tasks, the type of provided feedback, and the subjects' levels of surgical expertise, with higher levels of expertise generally associated with smaller effect sizes. No significant differences were found between virtual fixtures and rendering contact forces. Implications for future research are discussed.
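    As context for the effect sizes above: Hedges' g is Cohen's d (the standardized mean difference) with a small-sample bias correction. A minimal computation, unrelated to this paper's actual data, might look like:

```python
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Bias-corrected standardized mean difference between two groups."""
    # pooled standard deviation across both groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (mean1 - mean2) / s_pooled      # Cohen's d
    df = n1 + n2 - 2
    correction = 1 - 3 / (4 * df - 1)   # Hedges' small-sample correction
    return correction * d
```

    By the usual rule of thumb (0.2 small, 0.5 medium, 0.8 large), the pooled effects reported above fall in the medium-to-large range.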

    The Hand-Held Force Magnifier: Surgical Tools to Augment the Sense of Touch

    Modern surgeons routinely perform procedures with noisy, sub-threshold, or obscured visual and haptic feedback, either due to the necessary approach, or because the systems on which they are operating are exceedingly delicate. For example, in cataract extraction, ophthalmic surgeons must peel away thin membranes in order to access and replace the lens of the eye. Elsewhere, dissection is now commonly performed with energy-delivering tools, rather than sharp blades, and damage to deep structures is possible if tissue contact is not well controlled. Surgeons compensate for their lack of tactile sensibility by relying solely on visual feedback, observing tissue deformation and other visual cues through surgical microscopes or cameras. Using visual information alone can make a procedure more difficult, because cognitive mediation is required to convert visual feedback into motor action. We call this the “haptic problem” in surgery because the human sensorimotor loop is deprived of critical tactile afferent information, increasing the chance of intraoperative injury and requiring extensive training before clinicians reach independent proficiency. Tools that enhance the surgeon’s direct perception of tool-tissue forces can therefore potentially reduce the risk of iatrogenic complications and improve patient outcomes. Towards this end, we have developed and characterized a new robotic surgical tool, the Hand-Held Force Magnifier (HHFM), which amplifies forces at the tool tip so they may be readily perceived by the user, a paradigm we call “in-situ” force feedback. In this dissertation, we describe the development of successive generations of HHFM prototypes, and the evaluation of a proposed human-in-the-loop control framework using the methods of psychophysics. Using these techniques, we have verified that our tool can reduce sensory perception thresholds, augmenting the user’s abilities beyond what is normally possible.
Further, we have created models of human motor control in surgically relevant tasks such as membrane puncture, which have been shown to be sensitive to push-pull direction and handedness effects. Force augmentation has also demonstrated improvements to force control in isometric force-generation tasks. Finally, in support of future psychophysics work, we have developed an inexpensive, high-bandwidth, single-axis haptic renderer using a commercial audio speaker.
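    The force-magnification paradigm reduces to scaling the sensed tip force before rendering it at the handle, so sub-threshold interactions become perceptible. A hypothetical sketch; the gain and threshold values are illustrative assumptions, not the HHFM's measured parameters:

```python
PERCEPTION_THRESHOLD = 0.1  # newtons; an assumed, illustrative threshold

def handle_force(tip_force, gain=10.0):
    """Force rendered at the handle: the sensed tip force, amplified."""
    return gain * tip_force

def perceptible(tip_force, gain=10.0):
    """True when the magnified force crosses the (assumed) threshold,
    e.g. a 20 mN tip force renders as 0.2 N at the handle."""
    return abs(handle_force(tip_force, gain)) >= PERCEPTION_THRESHOLD
```

    Without magnification (gain = 1), the same 20 mN interaction would stay below the assumed threshold and go unfelt.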

    Image-Based Force Estimation and Haptic Rendering For Robot-Assisted Cardiovascular Intervention

    Clinical studies have indicated that the loss of haptic perception is the prime limitation of robot-assisted cardiovascular intervention technology, hindering its global adoption. It causes compromised situational awareness for the surgeon during the intervention and may lead to health risks for patients. This doctoral research was aimed at developing technology for addressing the limitation of the robot-assisted intervention technology in the provision of haptic feedback. The literature review showed that sensor-free force estimation (haptic cue) on endovascular devices, intuitive surgeon interface design, and haptic rendering within the surgeon interface were the major knowledge gaps. For sensor-free force estimation, first, an image-based force estimation method based on the inverse finite-element method (iFEM) was developed and validated. Next, to address the limitation of the iFEM method in real-time performance, an inverse Cosserat rod model (iCORD) with a computationally efficient solution for endovascular devices was developed and validated. Afterward, the iCORD was adopted for analytical tip force estimation on steerable catheters. The experimental studies confirmed the accuracy and real-time performance of the iCORD for sensor-free force estimation. Subsequently, a wearable drift-free rotation measurement device (MiCarp) was developed to facilitate the design of an intuitive surgeon interface by decoupling the rotation measurement from the insertion measurement. The validation studies showed that MiCarp had superior performance for spatial rotation measurement compared to other modalities. Finally, a novel haptic feedback system based on smart magnetoelastic elastomers was developed, analytically modeled, and experimentally validated. The proposed haptics-enabled surgeon module had an unbounded workspace for interventional tasks and provided an intuitive interface.
Experimental validation, at component and system levels, confirmed the usability of the proposed methods for robot-assisted intervention systems.

    Investigation of a holistic human-computer interaction (HCI) framework to support the design of extended reality (XR) based training simulators

    In recent years, the use of Extended Reality (XR) based simulators for training has increased rapidly. In this context, there is a need to explore novel HCI-based approaches to design more effective 3D training environments. A major impediment in this research area is the lack of an HCI-based framework that is holistic and serves as a foundation to integrate the design and assessment of HCI-based attributes such as affordance, cognitive load, and user-friendliness. This research addresses this need by investigating the creation of a holistic framework along with a process for designing, building, and assessing training simulators using such a framework as a foundation. The core elements of the proposed framework include the adoption of participatory design principles, the creation of information-intensive process models of target processes (relevant to the training activities), and design attributes related to affordance and cognitive load. A new attribute related to affordance of 3D scenes is proposed (termed dynamic affordance) and its role in impacting user comprehension in data-rich 3D training environments is studied. The framework is presented for the domain of orthopedic surgery. Rigorous user-involved assessment of the framework and simulation approach has highlighted the positive impact of the HCI-based framework and attributes on the acquisition of skills and knowledge by healthcare users.

    Virtual reality training for micro-robotic cell injection

    This research was carried out to fill a gap in existing knowledge on approaches to supplementing training for the micro-robotic cell injection procedure by utilising virtual reality and haptic technologies.

    Factors of Micromanipulation Accuracy and Learning

    Micromanipulation refers to manipulation under a microscope in order to perform delicate procedures. It is difficult for humans to manipulate objects accurately under a microscope due to tremor and imperfect perception, limiting performance. This project seeks to understand the factors affecting accuracy in micromanipulation, and to propose learning strategies for improving accuracy. Psychomotor experiments were conducted using computer-controlled setups to determine how various feedback modalities and learning methods can influence micromanipulation performance. In a first experiment, the static and motion accuracy of surgeons, medical students and non-medical students under different magnification levels and grip force settings were compared. A second experiment investigated whether the non-dominant hand placed close to the target can contribute to accurate pointing of the dominant hand. A third experiment tested a training strategy for micromanipulation using unstable dynamics to magnify motion error, a strategy shown to decrease deviation in large arm movements. Two virtual reality (VR) modules were then developed to train needle grasping and needle insertion, two primitive tasks in a microsurgery suturing procedure. The modules provided the trainee with a visual display in stereoscopic view and information on their grip, tool position and tool angles. Using the VR module, a study examining the effects of visual cues was conducted to train tool orientation. Results from these studies suggest that it is possible to learn and improve accuracy in micromanipulation using appropriate sensorimotor feedback and training.
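    The error-magnifying training idea above uses unstable, negative-stiffness dynamics: any deviation from the target produces a force that pushes the hand further away, so small errors become easy to perceive and correct. A hypothetical one-dimensional sketch, with all parameter values assumed:

```python
def error_growth(x0, steps=100, dt=0.01, stiffness=50.0, mass=1.0):
    """Forward-simulate a point mass in a divergent (negative-stiffness)
    force field with no corrective input: deviations grow over time."""
    x, v = x0, 0.0
    for _ in range(steps):
        a = stiffness * x / mass  # force pushes AWAY from the target line
        v += a * dt               # integrate acceleration
        x += v * dt               # integrate velocity
    return x
```

    Starting slightly off the target line, the uncorrected deviation grows, which is what makes the trainee's own errors salient during practice.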

    Digital twins: a survey on enabling technologies, challenges, trends and future prospects

    Digital Twin (DT) is an emerging technology surrounded by many promises and the potential to reshape the future of industries and society overall. A DT is a system-of-systems which goes far beyond traditional computer-based simulation and analysis. It is a replication of all the elements, processes, dynamics, and firmware of a physical system into a digital counterpart. The two systems (physical and digital) exist side by side, sharing all the inputs and operations using real-time data communications and information transfer. With the incorporation of Internet of Things (IoT), Artificial Intelligence (AI), 3D models, next generation mobile communications (5G/6G), Augmented Reality (AR), Virtual Reality (VR), distributed computing, Transfer Learning (TL), and electronic sensors, the digital/virtual counterpart of the real-world system is able to provide seamless monitoring, analysis, evaluation and predictions. The DT offers a platform for the testing and analysis of complex systems, which would be impossible with traditional simulations and modular evaluations. However, the development of this technology faces many challenges, including the complexities of effective communication and data accumulation, data unavailability to train Machine Learning (ML) models, lack of processing power to support high-fidelity twins, the strong need for interdisciplinary collaboration, and the absence of standardized development methodologies and validation measures. Being in the early stages of development, DTs lack sufficient documentation. In this context, this survey paper aims to cover the important aspects of realizing the technology. The key enabling technologies, challenges and prospects of DTs are highlighted.
The paper provides a deep insight into the technology, lists design goals and objectives, highlights design challenges and limitations across industries, discusses research and commercial developments, provides its applications and use cases, offers case studies in industry, infrastructure and healthcare, lists main service providers and stakeholders, and covers developments to date, as well as viable research dimensions for future developments in DTs.

    Using High-Level Processing of Low-Level Signals to Actively Assist Surgeons with Intelligent Surgical Robots

    Robotic surgical systems are increasingly used for minimally-invasive surgeries. As such, there is opportunity for these systems to fundamentally change the way surgeries are performed by becoming intelligent assistants rather than simply acting as the extension of surgeons' arms. As a step towards intelligent assistance, this thesis looks at ways to represent different aspects of robot-assisted surgery (RAS). We identify three main components: the robot, the surgeon actions, and the patient scene dynamics. Traditional learning algorithms in these domains are predominantly supervised methods. This has several drawbacks. First, many of these domains are non-categorical, such as how soft tissue deforms. This makes labeling difficult. Second, surgeries vary greatly. Estimation of the robot state may be affected by how the robot is docked and by cable tensions in the instruments. Estimation of the patient anatomy and its dynamics is often inaccurate, and in any case, may change throughout a surgery. To obtain the most accurate information, these aspects must be learned during the procedure. This limits the amount of labeling that can be done. On the surgeon side, different surgeons may perform the same procedure differently, and the algorithm should provide personalized estimates for each surgeon. All of these considerations motivated the use of self-supervised learning throughout this thesis. We first build a representation of the robot system. In particular, we looked at learning the dynamics model of the robot. We evaluate the model by using it to estimate forces. Once we can estimate forces in free space, we extend the algorithm to take into account patient-specific interactions, namely with the trocar and the cannula seal. Accounting for surgery-specific interactions is possible because our method does not require additional sensors and can be trained in less than five minutes, including time for data collection.
Next, we use cross-modal training to understand surgeon actions by looking at the bottleneck layer when mapping video to kinematics. This should contain information about the latent space of surgeon actions, while discarding some medium-specific information about either the video or the kinematics. Lastly, to understand the patient scene, we start with modeling interactions between a robot instrument and a soft-tissue phantom. Models are often inaccurate due to imprecise material parameters and boundary conditions, particularly in clinical scenarios. Therefore, we add a depth camera to observe deformations and correct the results of simulations. We also introduce a network that learns to simulate soft-tissue deformation from physics simulators in order to speed up the estimation. We demonstrate that self-supervised learning can be used for understanding each part of RAS. The representations it learns contain information about signals that are not directly measurable. The self-supervised nature of the methods presented in this thesis lends itself well to learning throughout a surgery. With such frameworks, we can overcome some of the main barriers to adopting learning methods in the operating room: the variety in surgery and the difficulty of labeling enough training data for each case.
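    The sensor-free force-estimation idea above can be illustrated as a residual computation: subtract the torque predicted by a dynamics model from the measured joint torque and attribute the remainder to external contact (e.g., with the trocar or tissue). This hypothetical one-joint sketch uses a hand-written model where the thesis learns one; all names and values are assumptions:

```python
def external_torque_residual(tau_measured, q, qd, qdd,
                             inertia, damping, gravity):
    """Residual between the measured joint torque and the torque a
    dynamics model predicts for the same motion; a nonzero residual
    is attributed to external contact forces."""
    tau_model = inertia * qdd + damping * qd + gravity(q)
    return tau_measured - tau_model
```

    With a well-calibrated (or learned) model, the residual is near zero in free space and rises when the instrument contacts the trocar, cannula seal, or tissue.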