
    Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach

    Robotic-assisted minimally invasive surgeries have gained a lot of popularity over conventional procedures as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from some limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network based on an LSTM-RNN architecture is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our proposed force estimation solution avoids the drawbacks usually associated with force sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues and report an average root-mean-square error of 0.02 N.
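    As a rough illustration of the mapping stage described above (not the authors' implementation), the sketch below runs a single numpy LSTM cell over a sequence of visual-geometric feature vectors and reads out a scalar force estimate through a linear layer; all weights, dimensions and names are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_force_estimate(features, Wx, Wh, b, w_out, b_out):
    """Run one LSTM layer over a sequence of visual-geometric feature
    vectors (one per video frame) and map the final hidden state to a
    scalar force in newtons via a linear readout."""
    hidden = Wh.shape[1]
    h = np.zeros(hidden)                    # hidden state
    c = np.zeros(hidden)                    # cell state
    for x in features:                      # one time step per frame
        z = Wx @ x + Wh @ h + b             # stacked gate pre-activations
        i, f, o, g = np.split(z, 4)         # input, forget, output, candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)          # cell-state update
        h = o * np.tanh(c)
    return float(w_out @ h + b_out)         # linear readout -> force (N)
```

    In a trained network the weights would be fitted against ground-truth force measurements; here they are random placeholders.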

    Real-Time Graphic and Haptic Simulation of Deformable Tissue Puncture

    A myriad of surgical tasks rely on puncturing tissue membranes (Fig. 1) and cutting through tissue mass. Properly training a practitioner for such tasks requires a simulator that can display both the graphical changes and the haptic forces of these deformations, punctures, and cutting actions. This paper documents our work to create a simulator that can model these effects in real time. Generating graphic and haptic output necessitates the use of a predictive model to track the tissue’s physical state. Many finite element methods (FEM) exist for computing tissue deformation ([1],[4]). These methods often obtain accurate results, but they can be computationally intensive for complex models. Real-time tasks using this approach are often limited in their complexity and workspace domain due to the large computational overhead of FEM. The computer graphics community has developed a large range of methods for modeling deformable media [5], often trading complete physical accuracy for computational speedup. Casson and Laugier [3] outline a mass-spring mesh model based on these principles, but they do not explore its usage with haptic interaction. Gerovich et al. [2] detail a set of haptic interaction rules (Fig. 2) for one-dimensional simulation of multi-layer deformable tissue, but they do not provide strategies for integrating this model with realistic graphic feedback.
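    The ramp-and-drop force profile that such layered puncture rules produce can be sketched as follows; the layer parameters and the linear elastic ramp within each layer are illustrative assumptions, not Gerovich et al.'s actual model.

```python
def puncture_force(depth, layers):
    """Piecewise force response for a tool at `depth` (m) inside stacked
    tissue layers: force ramps with layer stiffness until the tool
    reaches the layer boundary (rupture), then drops as it enters the
    next layer. `layers` is a list of (thickness_m, stiffness_N_per_m)."""
    top = 0.0                               # depth of current layer's top
    for thickness, k in layers:
        if depth < top + thickness:
            return k * (depth - top)        # elastic ramp inside this layer
        top += thickness                    # this layer already punctured
    return 0.0                              # past all layers: free motion
```

    A haptic loop would sample this profile every millisecond or so and render the result to the device.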

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted at minimally invasive and minimally traumatic treatments are required, since intra-operative false movements can be devastating and can result in patient deaths. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatment offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitation of these approaches is that live tissue has different properties from dead tissue and that animal anatomy is significantly different from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon, and it is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. This architecture can also be adapted for intra-operative purposes.
In this instance, a surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's ability for better intra-operative orientation by giving them a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations motivated the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed at our institute in previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since AR and VR share the same platform, the system can be referred to as a Mixed Reality system for neurosurgery. All the components are open source or at least based on a GPL license.

    Deformable Object Modelling Through Cellular Neural Network

    This paper presents a new methodology for deformable object modelling by drawing an analogy between cellular neural networks (CNN) and elastic deformation. The potential energy stored in an elastic body as a result of a deformation caused by an external force is propagated among mass points by the non-linear CNN activity. An improved autonomous CNN model is developed for propagating the energy generated by the external force on the object surface in the natural manner of heat conduction. A heat-flux-based method is presented to derive the internal forces from the potential energy distribution established by the CNN. The proposed methodology models non-linear materials with a non-linear CNN, rather than with the geometric non-linearity used in most existing deformation methods. It can not only deal with large-range deformations, owing to the local connectivity of cells and the CNN dynamics, but can also accommodate both isotropic and anisotropic materials by simply modifying conductivity constants. Examples are presented to demonstrate the efficacy of the proposed methodology.
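    A minimal sketch of the heat-conduction analogy (an assumed finite-difference discretisation, not the paper's CNN equations): potential energy injected at a mass point diffuses to its grid neighbours, and separate conductivity constants per axis give anisotropic behaviour; the internal forces are then taken from the energy field's gradient.

```python
import numpy as np

def propagate_energy(E, kx, ky, dt=0.1):
    """One explicit step of heat-conduction-style propagation of the
    potential-energy field E over a grid of mass points: each cell
    exchanges energy with its 4 neighbours. Distinct conductivities
    kx, ky make the material anisotropic (periodic boundaries here)."""
    flux_x = kx * (np.roll(E, -1, axis=1) - 2 * E + np.roll(E, 1, axis=1))
    flux_y = ky * (np.roll(E, -1, axis=0) - 2 * E + np.roll(E, 1, axis=0))
    return E + dt * (flux_x + flux_y)

def internal_forces(E):
    """Heat-flux-style internal forces: the negative spatial gradient
    of the propagated potential-energy field."""
    gy, gx = np.gradient(E)
    return -gx, -gy
```

    Setting kx != ky suffices to model an anisotropic material, matching the abstract's claim that only the conductivity constants need modifying.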

    Generalized God-Objects: a Paradigm for Interacting with Physically-Based Virtual World

    In this paper, we present a method for interacting with physically-based environments in a way that guarantees their integrity, whatever the mechanical properties of the virtual interaction tool and the control device. It is an extension of the god-object concept. The interaction tools are modelled as physical bodies that tend to reach, when possible, the position imposed by the user. Their behaviour is computed by the simulation engine via the dynamic laws of motion, like that of the other bodies in the scene. The cases of articulated rigid bodies and deformable bodies are studied. This mechanism also provides a unified framework that allows the control of virtual objects through devices with or without force feedback. Finally, some applications, including virtual surgery, illustrate the effectiveness of the approach.
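    A one-step sketch of the underlying god-object idea against a single plane obstacle (hypothetical names, and a plain spring coupling rather than the paper's full dynamic formulation): the proxy tracks the device position but never penetrates the surface, and the feedback force is the spring stretched between the two.

```python
import numpy as np

def god_object_step(device, surface_point, normal, stiffness):
    """One god-object update against a plane obstacle. The proxy tries
    to reach the device position but is projected back onto the surface
    whenever that position penetrates the object; the force returned is
    a virtual spring between device and proxy (zero in free space)."""
    goal = np.asarray(device, float)
    normal = np.asarray(normal, float)
    depth = np.dot(goal - surface_point, normal)   # signed distance to plane
    if depth < 0.0:                                # device is inside the object
        goal = goal - depth * normal               # project onto the surface
    force = stiffness * (goal - np.asarray(device, float))
    return goal, force
```

    Because the force depends only on the device-proxy separation, the same loop works unchanged whether or not the device can actually render force feedback, echoing the unified framework above.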

    Collision Detection and Merging of Deformable B-Spline Surfaces in Virtual Reality Environment

    This thesis presents a computational framework for representing, manipulating and merging rigid and deformable freeform objects in a virtual reality (VR) environment. The core algorithms for collision detection, merging, and physics-based modelling used within this framework assume that all 3D deformable objects are B-spline surfaces. The interactive design tool can be represented as a B-spline surface, an implicit surface or a point, to allow the user a variety of rigid or deformable tools. The collision detection system exploits the fact that the blending matrices used to discretize the B-spline surface are independent of the position of the control points and can therefore be pre-calculated. Complex B-spline surfaces can be generated by merging various B-spline surface patches using the merging algorithm presented in this thesis. Finally, the physics-based modelling system uses a mass-spring representation to determine the deformation and the reaction force values provided to the user. This helps to simulate realistic material behaviour of the model and assists the user in validating the design before performing extensive product detailing or finite element analysis using commercially available CAD software. The novelty of the proposed method stems from the pre-calculated blending matrices used to generate the points for graphical rendering, collision detection, merging of B-spline patches, and the nodes for the mass-spring system. This approach reduces computational time by avoiding the need to solve complex equations for the blending functions of B-splines and to invert large matrices. This alternative approach to mechanical concept design also removes the need to build prototypes for conceptualization and preliminary validation of an idea, thereby reducing the time, cost and resource waste of the concept design phase.
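    The pre-calculation this exploits can be sketched for a single cubic curve segment: the blending matrix depends only on the parameter samples, never on the control points, so it is built once and reused every frame as a plain matrix product. The uniform cubic basis below is a standard choice assumed for illustration; the thesis's surface (tensor-product) case follows the same pattern.

```python
import numpy as np

# Standard uniform cubic B-spline basis matrix (the 1/6 form).
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]], float) / 6.0

def blending_matrix(samples):
    """Pre-compute the blending matrix for `samples` parameter values.
    It depends only on u, not on the control points, so it can be built
    once and reused for rendering and collision queries every frame."""
    u = np.linspace(0.0, 1.0, samples)
    U = np.stack([u**3, u**2, u, np.ones_like(u)], axis=1)  # (samples, 4)
    return U @ M                                            # (samples, 4)

def evaluate_segment(B, ctrl):
    """Evaluate one cubic segment as a single matrix product with its
    4 control points (works for 2-D or 3-D points alike)."""
    return B @ ctrl
```

    When the control points move (e.g. under the mass-spring deformation), only the cheap `B @ ctrl` product is repeated; no basis functions are re-solved and no matrices are inverted.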

    Wearable Vibrotactile Haptic Device for Stiffness Discrimination during Virtual Interactions

    In this paper, we discuss the development of a cost-effective, wireless, and wearable vibrotactile haptic device for stiffness perception during interaction with virtual objects. Our experimental setup consists of the haptic device with five vibrotactile actuators and a virtual reality environment built in Unity 3D, integrating the Oculus Rift Head Mounted Display (HMD) and the Leap Motion controller. The virtual environment is able to capture touch inputs from users. Interaction forces are then rendered at 500 Hz and fed back to the wearable setup, stimulating the fingertips with ERM vibrotactile actuators. The amplitude and frequency of vibration are modulated proportionally to the interaction force to simulate the stiffness of a virtual object. A quantitative and qualitative study compares stiffness discrimination on a virtual linear spring in three sensory modalities: visual-only feedback, tactile-only feedback, and their combination. A common psychophysics method, the Two-Alternative Forced Choice (2AFC) approach, is used for the quantitative analysis via the Just Noticeable Difference (JND) and Weber Fraction (WF). According to the psychometric results, the average Weber fraction improved from 0.39 with visual-only feedback to 0.25 with the addition of tactile feedback.
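    As a sketch of how a Weber fraction can be read off 2AFC data (linear interpolation to the 75 %-correct point, a common convention; the authors' actual psychometric fitting procedure may differ, and the data below are made up):

```python
def weber_fraction(reference, deltas, p_correct, threshold=0.75):
    """Estimate the JND as the stimulus increment at `threshold`
    proportion-correct, by linear interpolation between measured 2AFC
    points (deltas in ascending order), then return JND / reference."""
    for k in range(len(deltas) - 1):
        d0, d1 = deltas[k], deltas[k + 1]
        p0, p1 = p_correct[k], p_correct[k + 1]
        if p0 <= threshold <= p1:                    # threshold bracketed
            jnd = d0 + (threshold - p0) * (d1 - d0) / (p1 - p0)
            return jnd / reference
    raise ValueError("threshold not bracketed by the measured points")
```

    With a reference stiffness and a handful of comparison levels, this yields a dimensionless fraction directly comparable to the 0.39 and 0.25 values reported above.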

    Real-time hybrid cutting with dynamic fluid visualization for virtual surgery

    It is widely accepted that a reform of medical teaching must be made to meet today's high-volume training requirements. Virtual simulation offers a potential method of providing such training, and some current medical training simulations integrate haptic and visual feedback to enhance procedure learning. The purpose of this project is to explore the capability of Virtual Reality (VR) technology to develop a training simulator for surgical cutting and bleeding in general surgery.

    Patient-specific simulation environment for surgical planning and preoperative rehearsal

    Surgical simulation is common practice in the fields of surgical education and training. Numerous surgical simulators are available from commercial and academic organisations for the generic modelling of surgical tasks. However, a simulation platform has yet to be found that fulfils the key requirements expected of patient-specific surgical simulation of soft tissue, with an effective translation into clinical practice. Patient-specific modelling is possible, but to date has been time-consuming, and consequently costly, because data preparation can be technically demanding. This motivated the research developed herein, which addresses the main challenges of biomechanical modelling for patient-specific surgical simulation. A novel implementation of soft tissue deformation and estimation of the patient-specific intraoperative environment is achieved using a position-based dynamics approach. This modelling approach overcomes the limitations of traditional physically-based approaches by providing a simulation for patient-specific models with visual and physical accuracy, stability and real-time interaction. As a geometrically-based method, a calibration of the simulation parameters is performed, and the simulation framework is successfully validated through experimental studies. The capabilities of the simulation platform are demonstrated by the integration of different surgical planning applications relevant in the context of kidney cancer surgery. The simulation of pneumoperitoneum facilitates trocar placement planning and intraoperative surgical navigation. The implementation of deformable ultrasound simulation can assist surgeons in improving their scanning technique and in defining an optimal procedural strategy. Furthermore, the simulation framework has the potential to support the development and assessment of hypotheses that cannot be tested in vivo.
Specifically, the evaluation of feedback modalities, as a response to user-model interaction, demonstrates improved performance and justifies the need to integrate a feedback framework in the robot-assisted surgical setting.
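    The position-based dynamics approach mentioned above works as a predict-then-project loop. The sketch below assumes equal masses and simple distance constraints, far simpler than a patient-specific soft-tissue model, but it shows the core mechanism that gives PBD its stability and real-time speed.

```python
import numpy as np

def pbd_step(p, prev, rest_lengths, pairs, dt=0.016, iters=10, gravity=-9.81):
    """One position-based dynamics step: Verlet-style prediction of new
    positions under gravity, then Gauss-Seidel projection of distance
    constraints directly on the positions (equal masses assumed).
    p, prev: (n, 3) current and previous positions."""
    vel = (p - prev) / dt
    vel[:, 1] += gravity * dt                 # external force on velocities
    pred = p + vel * dt                       # predicted positions
    for _ in range(iters):                    # iterative constraint solve
        for (i, j), rest in zip(pairs, rest_lengths):
            d = pred[j] - pred[i]
            dist = np.linalg.norm(d)
            if dist < 1e-12:
                continue
            corr = 0.5 * (dist - rest) * d / dist
            pred[i] += corr                   # move both endpoints
            pred[j] -= corr                   # symmetrically toward rest length
    return pred, p                            # new positions, new "previous"
```

    Because constraints act on positions rather than forces, overshoot cannot accumulate, which is why the method stays stable at interactive rates even with large time steps.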