145 research outputs found

    Virtual reality surgery simulation: A survey on patient specific solution

    For surgeons, precise anatomical structure and its dynamics are essential to surgical interaction, which is critical for generating an immersive experience in VR-based surgical training applications. At present, a standard therapeutic scheme often cannot be applied directly to a specific patient, because diagnostic guidelines are based on population averages and therefore yield only an approximate solution. Patient Specific Modeling (PSM), using patient-specific medical image data (e.g. CT, MRI, or ultrasound), can deliver a computational anatomical model. It gives surgeons the opportunity to practice the operative procedure for a particular patient, which improves the accuracy of diagnosis and treatment, enhances the predictive ability of the VR simulation framework, and raises the standard of patient care. This paper presents a general review of the existing literature on patient-specific surgical simulation, covering data acquisition, medical image segmentation, computational mesh generation, and real-time soft-tissue simulation.
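The real-time soft-tissue stage of such a pipeline is commonly built on a mass-spring model. As a minimal sketch (not taken from any of the surveyed systems; all names and parameter values are illustrative), one explicit-Euler integration step might look like this:

```python
import numpy as np

def mass_spring_step(pos, vel, springs, rest, k, mass, dt, damping=0.9):
    """One explicit-Euler step of a toy mass-spring soft-tissue model.

    pos, vel: (n, 3) node positions/velocities; springs: (m, 2) index pairs;
    rest: (m,) rest lengths; k: stiffness; mass: per-node mass."""
    force = np.zeros_like(pos)
    d = pos[springs[:, 1]] - pos[springs[:, 0]]            # spring vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    f = k * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
    np.add.at(force, springs[:, 0], f)                     # equal and opposite forces
    np.add.at(force, springs[:, 1], -f)
    vel = damping * (vel + dt * force / mass)
    return pos + dt * vel, vel

# Two nodes joined by one spring, stretched past its rest length of 1.0:
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
springs = np.array([[0, 1]])
rest = np.array([1.0])
pos, vel = mass_spring_step(pos, vel, springs, rest, k=10.0, mass=1.0, dt=0.01)
```

After one step the overstretched spring pulls the two nodes toward each other; real systems add many springs, gravity, and collision response on top of this core update.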

    Planning Framework for Robotic Pizza Dough Stretching with a Rolling Pin

    Stretching pizza dough with a rolling pin is a nonprehensile manipulation: because the object is deformable, force closure cannot be established, so the manipulation must be carried out without grasping the dough itself. The framework for this pizza dough stretching application, as explained in this chapter, consists of four sub-procedures: (i) recognition of the pizza dough on a plate, (ii) planning the steps necessary to shape the dough into the desired form, (iii) path generation for a rolling pin to execute the output of the dough planner, and (iv) inverse kinematics for the bi-manual robot to grasp and control the rolling pin properly. Using the deformable object model described in Chap. 3, each sub-procedure of the proposed framework is explained in turn.
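The four sub-procedures chain naturally into a perception-to-motion pipeline. The sketch below is purely illustrative: every function name and return value is an invented stand-in, not the chapter's actual interface:

```python
# Hypothetical pipeline mirroring sub-procedures (i)-(iv); all names are invented.
def recognize_dough(image):
    """(i) Perception: locate the dough on the plate (stubbed result)."""
    return {"center": (0.0, 0.0), "radius": 0.10}

def plan_shape(dough, target_radius):
    """(ii) Dough planner: decide the rolling actions to reach the target shape."""
    return [("roll", angle) for angle in (0, 90)]   # two perpendicular passes

def generate_paths(actions):
    """(iii) Turn planned actions into rolling-pin paths."""
    return [f"path@{angle}deg" for _, angle in actions]

def bimanual_ik(paths):
    """(iv) Inverse kinematics: joint trajectories for both arms holding the pin."""
    return [f"joint-trajectory for {p}" for p in paths]

dough = recognize_dough(None)
trajectories = bimanual_ik(generate_paths(plan_shape(dough, target_radius=0.15)))
```

Each stage consumes the previous stage's output, so the sub-procedures can be developed and tested independently, as the chapter's sequential presentation suggests.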

    Image-Guided Simulation for Augmented Reality in Hepatic Surgery

    The main objective of this thesis is to provide surgeons with tools for pre- and intra-operative decision support during minimally invasive hepatic surgery. These interventions are usually based on laparoscopic techniques or, more recently, flexible endoscopy. During such operations, the surgeon aims to remove an often large number of liver tumours while preserving the functional role of the liver. This involves defining an optimal hepatectomy, i.e. ensuring that the post-operative liver volume is at least 55% of the original liver and that the hepatic vasculature is preserved as far as possible. Although intervention planning can now be performed on the basis of patient-specific preoperative data, significant movements and deformations of the liver during surgery make this planning very difficult to use in practice. The work proposed in this thesis aims to provide augmented reality tools, usable under intra-operative conditions, to visualise the position of tumours and hepatic vascular networks at any time.

    Real-Time Collision Detection for Deformable Characters with Radial Fields

    Many techniques facilitate real-time collision detection against complex models. These typically work by pre-computing information about the spatial distribution of geometry into a form that can be queried quickly. When models deform, however, expensive pre-computations are impractical. We present radial fields: a variant of distance fields parameterised in cylindrical space rather than Cartesian space. This 2D parameterisation significantly reduces the memory and computation requirements of the field while introducing minimal overhead in collision detection tests. The interior of the mesh is defined implicitly for the entire domain. Importantly, the representation maps well to the hardware rasteriser of the GPU. Radial fields are much more application-specific than traditional distance fields, but for the applications they do suit, such as collision detection with articulated characters, the benefits are substantial.
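The core idea, a field parameterised over (theta, height) around a central axis instead of over (x, y, z), can be sketched on the CPU. This is a simplified illustration under the assumption of a star-shaped mesh around the axis; the paper's GPU rasteriser implementation is not reproduced here, and all names are invented:

```python
import numpy as np

def build_radial_field(surface_fn, n_theta=64, n_h=32, h_range=(0.0, 1.0)):
    """Precompute surface radius r(theta, h) on a 2D cylindrical grid.
    surface_fn(theta, h) -> radius of the mesh surface in that direction."""
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    hs = np.linspace(*h_range, n_h)
    return thetas, hs, np.array([[surface_fn(t, h) for h in hs] for t in thetas])

def point_inside(field, p, axis_origin=np.zeros(3)):
    """Collision query: is point p inside the surface stored in the radial field?"""
    thetas, hs, r = field
    q = p - axis_origin
    theta = np.arctan2(q[1], q[0]) % (2 * np.pi)
    ti = int(theta / (2 * np.pi) * len(thetas)) % len(thetas)   # nearest theta bin
    hi = int(np.clip((q[2] - hs[0]) / (hs[-1] - hs[0]) * (len(hs) - 1),
                     0, len(hs) - 1))                            # nearest height bin
    return np.hypot(q[0], q[1]) < r[ti, hi]                      # inside iff closer than surface

# A cylinder of radius 0.5 about the z-axis:
field = build_radial_field(lambda t, h: 0.5)
```

A query is just a coordinate conversion plus a 2D table lookup, which is why the field stays cheap to rebuild each frame as the character deforms.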

    Segmentation and Deformable Modelling Techniques for a Virtual Reality Surgical Simulator in Hepatic Oncology

    Liver surgical resection is one of the most frequently used curative therapies. However, resectability is problematic, and there is a need for a computer-assisted surgical planning and simulation system that can accurately and efficiently simulate the liver, vessels, and tumours in actual patients. The present project describes the development of the core segmentation and deformable modelling techniques for such a system. For precise detection of irregularly shaped areas with indistinct boundaries, the segmentation incorporated active contours: gradient vector flow (GVF) snakes and level sets. To improve efficiency, a chessboard distance transform was used to replace part of the GVF effort. To automatically initialize the liver volume detection process, a rotating template was introduced to locate the starting slice. To maintain shape during the segmentation process, a simplified object shape learning step was introduced to avoid occasional significant errors. Skeletonization with fuzzy connectedness was used for vessel segmentation. To achieve real-time interactivity, the deformation regime of the system was based on a single-organ mass-spring system (MSS), which introduced on-the-fly local mesh refinement to improve deformation accuracy and mesh control quality. This method was then extended to a multiple soft-tissue constraint system by supplementing it with adaptive constraint mesh generation. A mesh quality measure was tailored based on a wide comparison of classic measures. Adjustable feature and parameter settings were thus provided to make tissues of interest distinct from adjacent structures, keeping the mesh suitable for on-line topological transformation and deformation. More than 20 actual patient CT and 2 magnetic resonance imaging (MRI) liver datasets were tested to evaluate the performance of the segmentation method.
    Instrument manipulations of probing, grasping, and simple cutting were successfully simulated on deformable constraint liver tissue models. The project was implemented in conjunction with the Division of Surgery, Hammersmith Hospital, London; the preliminary realism was judged satisfactory by the consultant hepatic surgeon.
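The chessboard (Chebyshev) distance transform mentioned above can be computed with the classic two-pass chamfer scheme: a forward sweep over top-left 8-neighbours followed by a backward sweep over bottom-right ones, both with unit costs. A minimal sketch (illustrative, not the thesis code):

```python
import numpy as np

def chessboard_distance(mask):
    """Chebyshev (chessboard) distance of every pixel to the nearest
    foreground pixel, via the two-pass chamfer algorithm with unit costs."""
    h, w = mask.shape
    INF = h + w
    d = np.where(mask, 0, INF).astype(int)
    for y in range(h):                      # forward pass: top-left neighbours
        for x in range(w):
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y, x] = min(d[y, x], d[ny, nx] + 1)
    for y in range(h - 1, -1, -1):          # backward pass: bottom-right neighbours
        for x in range(w - 1, -1, -1):
            for dy, dx in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y, x] = min(d[y, x], d[ny, nx] + 1)
    return d

mask = np.zeros((5, 5), bool)
mask[2, 2] = True                            # single seed pixel in the centre
dist = chessboard_distance(mask)
```

Because the chessboard metric needs only two image sweeps, it is far cheaper than iterating a GVF field, which is why it can substitute for part of that effort.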

    New geometric algorithms and data structures for collision detection of dynamically deforming objects

    Any virtual environment that supports interactions between virtual objects, and/or between a user and objects, needs a collision detection system to handle all interactions in a physically correct or plausible way. A collision detection system determines whether objects are in contact or interpenetrate; interpenetrations are then resolved by a collision handling system. Because objects can interact with each other in nearly all simulations, collision detection is a fundamental technology needed across physically based simulation, robotic path and motion planning, virtual prototyping, and many other fields. Most virtual environments aim to represent the real world as realistically as possible and are therefore becoming more and more complex. Furthermore, all models in a virtual environment should react like real objects do when forces are applied to them. Nearly all real-world objects deform or break into their individual parts when forces act upon them. Deformable objects are thus becoming increasingly common in virtual environments that aim for realism, which presents new challenges for the collision detection system. The necessary collision detection computations can be very complex, with the effect that collision detection is the performance bottleneck in most simulations. Most rigid body collision detection approaches use a bounding volume hierarchy (BVH) as an acceleration data structure. This technique is perfectly suitable as long as the object does not change its shape. For a soft body, an update step is necessary to ensure that the underlying acceleration data structure is still valid after each simulation step. This update step can be very time consuming, is often hard to implement, and in most cases produces a degenerate BVH after some simulation steps if the objects deform substantially.
    The collision detection approach presented here therefore works entirely without an acceleration data structure and supports both rigid and soft bodies. Furthermore, we can compute inter-object and intra-object collisions of rigid and deformable objects consisting of many tens of thousands of triangles in a few milliseconds. To realize this, the scene is subdivided into parts using a fuzzy clustering approach. All further steps can then be performed in parallel for each cluster and, if desired, distributed to different GPUs. Tests were performed to judge the performance of our approach against other state-of-the-art collision detection algorithms. Additionally, we integrated our approach into Bullet, a commonly used physics engine, to evaluate our algorithm. In order to compare different rigid body collision detection algorithms fairly, we propose a new collision detection Benchmarking Suite, which evaluates both performance and the quality of the collision response; it is accordingly subdivided into a Performance Benchmark and a Quality Benchmark. In future work, this approach will be extended to support soft body collision detection algorithms.
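The scene-partitioning step relies on fuzzy clustering, which soft-assigns geometry (e.g. triangle centroids) to overlapping clusters that can then be processed in parallel. A bare-bones fuzzy c-means loop illustrates the idea; this is an invented stand-in, not the thesis's implementation:

```python
import numpy as np

def fuzzy_cmeans(points, k=2, m=2.0, iters=20):
    """Minimal fuzzy c-means: soft-assigns each point to k cluster centres.
    Returns centres (k, d) and membership matrix u (n, k), rows summing to 1."""
    # Deterministic init: pick k points spread across the input.
    centres = points[np.linspace(0, len(points) - 1, k).astype(int)]
    u = None
    for _ in range(iters):
        d = np.linalg.norm(points[:, None] - centres[None], axis=2) + 1e-9
        u = 1.0 / (d ** (2.0 / (m - 1.0)))          # standard FCM membership
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        centres = (w.T @ points) / w.sum(axis=0)[:, None]
    return centres, u

# Two well-separated blobs of "triangle centroids":
pts = np.vstack([np.random.default_rng(1).normal(0.0, 0.1, (20, 3)),
                 np.random.default_rng(2).normal(5.0, 0.1, (20, 3))])
centres, u = fuzzy_cmeans(pts, k=2)
```

Soft memberships matter here because triangles near a cluster boundary belong partly to both clusters, so neither parallel worker can miss a collision straddling the boundary.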

    Non-Rigid Body Mechanical Property Recovery from Images and Videos

    Material properties are of great importance in surgical simulation and virtual reality. The mechanical properties of human soft tissue are critical for characterizing the tissue deformation of each patient, and studies have shown that the tissue stiffness described by these properties may indicate an abnormal pathological process. Recovered elasticity parameters can assist surgeons in better pre-operative surgical planning and enable medical robots to carry out personalized surgical procedures. Traditional elasticity parameter estimation methods rely largely on known external forces measured by special devices and on strain fields estimated from landmarks on the deformable bodies, or they are limited to quasi-static deformation. For virtual reality applications such as virtual try-on, capturing the garment material is as significant as reconstructing the geometry. In this thesis, I present novel approaches for automatically estimating the material properties of soft bodies from images or from a video capturing the motion of the deformable body. I use a coupled simulation-optimization-identification framework to deform one soft body from its original, non-deformed state to match the deformed geometry of the same object in its deformed state. The optimal set of material parameters is determined by minimizing an error metric function. This method can simultaneously recover the elasticity parameters of multiple regions of soft bodies, using Finite Element Method-based simulation (of either linear or nonlinear materials undergoing large deformation) and particle-swarm optimization. I demonstrate the effectiveness of this approach in real-time interaction with virtual organs in patient-specific surgical simulation, using parameters acquired from low-resolution medical images. With the recovered elasticity parameters and the age of prostate cancer patients as features, I build a cancer grading and staging classifier.
    The classifier achieves up to 91% accuracy in predicting cancer T-stage and 88% in predicting Gleason score. To recover the mechanical properties of soft bodies from a video, I propose a method that couples a statistical graphical model with FEM simulation. Using this method, I can recover the material properties of a soft ball from high-speed camera footage of the ball in motion. Furthermore, I extend the material recovery framework to fabric material identification. I propose a novel method for garment material extraction from a single-view image and a learning-based cloth material recovery method from a video recording the motion of the cloth. Most recent garment capturing techniques rely on acquiring multiple views of the clothing, which may not always be available, especially for pre-existing photographs from the web. As an alternative, I propose a method that can compute a 3D model of a human body and its outfit from a single photograph with little human interaction. My learning-based cloth material type recovery method exploits a simulated dataset and a deep neural network. I demonstrate the effectiveness of my algorithms by re-purposing the reconstructed garments for virtual try-on, garment transfer, and cloth animation on digital characters. With the recovered mechanical properties, one can construct a virtual world with soft objects exhibiting real-world behaviors.
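The coupled simulation-optimization-identification loop can be illustrated with a bare-bones particle-swarm optimizer. Here the FEM simulation and error metric are replaced by an analytic placeholder (true parameters (3, 7)); every coefficient and name is illustrative, not the thesis's actual setup:

```python
import numpy as np

def pso(objective, dim, n=20, iters=60, lo=0.0, hi=10.0, seed=0):
    """Minimal particle-swarm optimizer over a box-bounded parameter space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))                  # particle positions
    v = np.zeros((n, dim))                             # particle velocities
    pbest = x.copy()                                   # per-particle best positions
    pval = np.array([objective(p) for p in x])
    g = pbest[pval.argmin()].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Analytic stand-in for the simulation error metric ||sim(p) - observed||^2;
# in the thesis each evaluation would run an FEM deformation instead.
true = np.array([3.0, 7.0])
err = lambda p: float(np.sum((p - true) ** 2))
best, best_err = pso(err, dim=2)
```

Each objective evaluation in the real system is a full forward simulation, so the swarm's modest population size is what keeps the identification loop tractable.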

    Using High-Level Processing of Low-Level Signals to Actively Assist Surgeons with Intelligent Surgical Robots

    Robotic surgical systems are increasingly used for minimally invasive surgeries. As such, there is an opportunity for these systems to fundamentally change the way surgeries are performed, by becoming intelligent assistants rather than simply acting as extensions of surgeons' arms. As a step towards intelligent assistance, this thesis looks at ways to represent different aspects of robot-assisted surgery (RAS). We identify three main components: the robot, the surgeon's actions, and the patient scene dynamics. Traditional learning algorithms in these domains are predominantly supervised methods, which has several drawbacks. First, many of these domains are non-categorical, such as how soft tissue deforms, which makes labeling difficult. Second, surgeries vary greatly. Estimation of the robot state may be affected by how the robot is docked and by cable tensions in the instruments. Estimation of the patient anatomy and its dynamics is often inaccurate and, in any case, may change throughout a surgery. To obtain the most accurate information, these aspects must be learned during the procedure, which limits the amount of labeling that can be done. On the surgeon side, different surgeons may perform the same procedure differently, so the algorithm should provide personalized estimations. All of these considerations motivated the use of self-supervised learning throughout this thesis. We first build a representation of the robot system; in particular, we learn the dynamics model of the robot and evaluate it by using it to estimate forces. Once we can estimate forces in free space, we extend the algorithm to take into account patient-specific interactions, namely with the trocar and the cannula seal. Accounting for surgery-specific interactions is possible because our method requires no additional sensors and can be trained in less than five minutes, including the time for data collection.
    Next, we use cross-modal training to understand surgeon actions by examining the bottleneck layer of a network that maps video to kinematics. This layer should contain information about the latent space of surgeon actions while discarding medium-specific information about either the video or the kinematics. Lastly, to understand the patient scene, we start by modeling interactions between a robot instrument and a soft-tissue phantom. Such models are often inaccurate due to imprecise material parameters and boundary conditions, particularly in clinical scenarios, so we add a depth camera to observe deformations and correct the results of the simulations. We also introduce a network that learns to simulate soft-tissue deformation from physics simulators, in order to speed up the estimation. We demonstrate that self-supervised learning can be used for understanding each part of RAS; the representations it learns contain information about signals that are not directly measurable. The self-supervised nature of the methods presented in this thesis lends itself well to learning throughout a surgery. With such frameworks, we can overcome some of the main barriers to adopting learning methods in the operating room: the variety in surgery and the difficulty of labeling enough training data for each case.
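The free-space dynamics-learning idea can be shown in miniature: fit a torque model from contact-free motion (self-supervised, since commanded motion provides the labels), then read external force as the residual between measured and predicted torque once contact occurs. The 1-DoF linear "robot" below is an invented stand-in for illustration, not the thesis's model:

```python
import numpy as np

# Collect contact-free joint positions/velocities and the torques measured there.
rng = np.random.default_rng(0)
q, qd = rng.uniform(-1.0, 1.0, (500, 2)).T

# Linear-in-features dynamics: gravity-like, friction-like, and bias terms.
X = np.column_stack([np.sin(q), qd, np.ones_like(q)])
true_w = np.array([4.0, 0.5, 0.1])          # hidden "true" robot parameters
tau = X @ true_w                            # torques measured in free space

# "Training" = least-squares fit on free-space data; no manual labels needed.
w, *_ = np.linalg.lstsq(X, tau, rcond=None)

# At test time a contact adds 2.0 Nm of external torque; the residual recovers it.
x_new = np.array([np.sin(0.3), 0.2, 1.0])
tau_meas = x_new @ true_w + 2.0             # measurement during contact
f_ext = tau_meas - x_new @ w                # residual = estimated external effect
```

Because the training signal is just the robot's own free-space motion, the model can be refit quickly per surgery, matching the under-five-minute training claim in spirit.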

    Functional surface microstructures inspired by nature – From adhesion and wetting principles to sustainable new devices

    In the course of evolution, nature has arrived at startling materials solutions to ensure survival. Investigations into biological surfaces, ranging from plants, insects and geckos to aquatic animals, have inspired the design of intricate surface patterns that create useful functionalities. This paper reviews the fundamental interaction mechanisms of such micropatterns with liquids, solids, and soft matter such as skin, for the control of wetting, self-cleaning, anti-fouling, adhesion, skin adherence, and sensing. Compared to conventional chemical strategies, the paradigm of micropatterning enables solutions with superior resource efficiency and sustainability. Associated applications range from water management and robotics to future health-monitoring devices. We finally provide an overview of the relevant patterning methods as an appendix.