3,877 research outputs found

    Accurate geometry reconstruction of vascular structures using implicit splines

    3-D visualization of blood vessels from standard medical datasets (e.g. CT or MRI) plays an important role in many clinical situations, including the diagnosis of vessel stenosis, virtual angioscopy, vascular surgery planning and computer-aided vascular surgery. However, unlike other human organs, the vasculature is a very complex network of vessels, which makes its 3-D visualization a challenging task. Conventional techniques for medical volume data visualization are in general not well suited to the above tasks. This problem can be addressed by reconstructing the vascular geometry. Although various methods have been proposed for reconstructing vascular structures, most of them are model-based and usually too idealized to correctly represent the actual variation present in the cross-sections of a vascular structure. In addition, the underlying shape is usually expressed as polygonal meshes or in parametric form, which makes it awkward to handle the ramification of branches. As a result, the reconstructed geometries are not suitable for computer-aided diagnosis and computer-guided minimally invasive vascular surgery. In this research, we develop a set of techniques for the geometric reconstruction of vasculatures, covering the segmentation, modelling, reconstruction, exploration and rendering of vascular structures. The reconstructed geometry not only greatly enhances the visual quality of 3-D vascular structures, but also provides an analytical geometric representation of the vasculature. The key findings of this research are as follows: 1. A localized hybrid level-set segmentation method has been developed to extract vascular structures from 3-D medical datasets. 2. A skeleton-based implicit modelling technique has been proposed and applied to the reconstruction of vasculatures, which achieves an accurate geometric reconstruction of the vascular structures as implicit surfaces in an analytical form. 3. An acceleration technique using modern GPUs (Graphics Processing Units) has been devised and applied to rendering the implicitly represented vasculatures. 4. The implicitly modelled vasculature has been investigated for the application of virtual angioscopy.
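
    The skeleton-based implicit modelling idea mentioned above can be illustrated with a minimal sketch: a vessel tree is described by an analytical field that is zero on the wall, negative inside the lumen and positive outside, built from a polyline skeleton with per-segment radii and blended smoothly at junctions. The field function, the smooth-union blend and all names below are illustrative assumptions, not the thesis's exact implicit-spline formulation.

    import numpy as np

    def segment_field(p, a, b, r):
        # Signed field of one skeleton edge (a, b) with radius r:
        # distance from p to the segment, minus the radius.
        ab, ap = b - a, p - a
        t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab)) - r

    def smooth_union(d1, d2, k=0.5):
        # Smooth minimum so that branch junctions blend without creases.
        h = np.clip(0.5 + 0.5 * (d2 - d1) / k, 0.0, 1.0)
        return d2 + (d1 - d2) * h - k * h * (1.0 - h)

    def vessel_field(p, edges):
        # Implicit field of the whole tree: 0 on the wall, < 0 in the lumen.
        d = segment_field(p, *edges[0])
        for a, b, r in edges[1:]:
            d = smooth_union(d, segment_field(p, a, b, r))
        return d

    # A bifurcating vessel: one parent edge and two daughter edges.
    edges = [
        (np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 2.0]), 0.4),
        (np.array([0.0, 0.0, 2.0]), np.array([1.0, 0.0, 3.0]), 0.3),
        (np.array([0.0, 0.0, 2.0]), np.array([-1.0, 0.0, 3.0]), 0.3),
    ]
    print(vessel_field(np.array([0.0, 0.0, 1.0]), edges))  # negative: inside the lumen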

    High-performance geometric vascular modelling

    Image-based high-performance geometric vascular modelling and reconstruction is an essential component of computer-assisted diagnosis, analysis and treatment of cardiovascular diseases. However, it is an extremely challenging task to efficiently reconstruct accurate geometric structures of blood vessels from medical images. For one thing, the shape of an individual section of a blood vessel is highly irregular because of compression by surrounding tissues and deformation caused by vascular diseases. For another, a vascular system is a very complicated network of blood vessels with different types of branching structures. Although some existing vascular modelling techniques can reconstruct the geometric structure of a vascular system, they are either time-consuming or lack sufficient accuracy. What is more, these techniques rarely consider the interior tissue of the vascular wall, which consists of complicated layered structures. As a result, it is necessary to develop a better vascular geometric modelling technique that not only reconstructs vascular surfaces with high performance and high accuracy, but can also model the interior tissue structures of the vascular walls. This research aims to develop a state-of-the-art, patient-specific, medical image-based geometric vascular modelling technique to solve the above problems. The main contributions of this research are: (1) the Skeleton Marching technique, developed to reconstruct the geometric structures of blood vessels with high performance and high accuracy. With the proposed technique, the highly complicated vascular reconstruction task is reduced to a set of simple, localised geometric reconstruction tasks that can be carried out in parallel. These locally reconstructed vascular geometric segments are then combined using shape-preserving blending operations to faithfully represent the geometric shape of the whole vascular system. (2) The Thin Implicit Patch method, developed to realistically model the interior geometric structures of the vascular tissues. This method allows multi-layer interior tissue structures to be embedded inside the vascular wall to illustrate the geometric details of real-world blood vessels.
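
    As a rough illustration of the divide-and-blend pattern described above (localised reconstruction of vascular segments in parallel, followed by a shape-preserving union), here is a hedged Python sketch. The local reconstruction step, the soft-minimum blend and every name in it (SphereField, reconstruct_local_segment, blend_union) are hypothetical stand-ins, not the actual Skeleton Marching algorithm.

    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    class SphereField:
        # Toy local implicit field: a sphere fitted to one skeletal piece.
        def __init__(self, centre, radius):
            self.centre, self.radius = centre, radius
        def __call__(self, p):
            return np.linalg.norm(p - self.centre) - self.radius

    def reconstruct_local_segment(piece):
        # Hypothetical local step: in the real method this would fit an
        # implicit patch to one branch and its nearby image data.
        centre, radius = piece
        return SphereField(centre, radius)

    def blend_union(fields, p, k=0.3):
        # Soft minimum of the local fields: keeps each local shape while
        # smoothing the joints (a stand-in for shape-preserving blending).
        values = np.array([f(p) for f in fields])
        return -k * np.log(np.sum(np.exp(-values / k)))

    if __name__ == "__main__":
        pieces = [(np.array([0.0, 0.0, z]), 0.4) for z in np.linspace(0.0, 3.0, 8)]
        with ProcessPoolExecutor() as pool:              # parallel local fits
            local_fields = list(pool.map(reconstruct_local_segment, pieces))
        print(blend_union(local_fields, np.array([0.0, 0.0, 1.5])))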

    Shape Animation with Combined Captured and Simulated Dynamics

    We present a novel volumetric animation generation framework to create new types of animations from raw 3D surface or point cloud sequences of captured real performances. The framework takes as input time-incoherent 3D observations of a moving shape, and is thus particularly suitable for the output of performance-capture platforms. In our system, a suitable virtual representation of the actor is built from real captures, allowing seamless combination and simulation with virtual external forces and objects, in which the original captured actor can be reshaped, disassembled or reassembled under user-specified virtual physics. Instead of using the dominant surface-based geometric representation of the capture, which is less suitable for volumetric effects, our pipeline exploits Centroidal Voronoi tessellation decompositions as a unified volumetric representation of the real captured actor, which we show can be used seamlessly as a building block for all processing stages, from capture and tracking to virtual physics simulation. The representation makes no human-specific assumptions and can be used to capture and re-simulate the actor with props or other moving scenery elements. We demonstrate the potential of this pipeline for the virtual reanimation of a real captured event with various unprecedented volumetric visual effects, such as volumetric distortion, erosion, morphing, gravity pull, or collisions.
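
    The Centroidal Voronoi tessellation used as the volumetric representation above can be computed, in its most basic discrete form, with Lloyd's algorithm: alternate between assigning volume samples to their nearest site and moving each site to the centroid of its cell. The sketch below is a generic illustration of that idea on random samples, under assumed cell counts and iteration budgets, not the paper's capture pipeline.

    import numpy as np

    def lloyd_cvt(samples, n_cells=64, n_iters=20, seed=0):
        # Lloyd's algorithm: alternate nearest-site assignment and
        # centroid updates until the sites settle into a CVT.
        rng = np.random.default_rng(seed)
        sites = samples[rng.choice(len(samples), n_cells, replace=False)]
        labels = np.zeros(len(samples), dtype=int)
        for _ in range(n_iters):
            # Discrete Voronoi partition: label of the nearest site.
            d = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            for c in range(n_cells):
                members = samples[labels == c]
                if len(members):
                    sites[c] = members.mean(axis=0)   # move site to centroid
        return sites, labels

    # Samples filling a unit ball stand in for the interior of the actor.
    rng = np.random.default_rng(1)
    pts = rng.uniform(-1.0, 1.0, size=(5000, 3))
    pts = pts[np.linalg.norm(pts, axis=1) < 1.0]
    sites, labels = lloyd_cvt(pts)
    print(sites.shape, labels.shape)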

    Real-time human performance capture and synthesis

    Most of the images one finds in the media, such as on the Internet or in textbooks and magazines, contain humans as the main point of attention. Thus, there is an inherent necessity for industry, society, and private persons to be able to thoroughly analyze and synthesize the human-related content in these images. One aspect of this analysis, and the subject of this thesis, is to infer the 3D pose and surface deformation using only visual information, which is also known as human performance capture. Human performance capture enables the tracking of virtual characters from real-world observations, and this is key for visual effects, games, VR, and AR, to name just a few application areas. However, traditional capture methods usually rely on multi-view (marker-based) systems that are prohibitively expensive for the vast majority of people, or they use depth sensors, which are still not as common as single color cameras. Recently, some approaches have attempted to solve the task by assuming that only a single RGB image is given. Nonetheless, they either cannot track the dense deforming geometry of the human, such as the clothing layers, or they are far from real time, which is indispensable for many applications. To overcome these shortcomings, this thesis proposes two monocular human performance capture methods, which for the first time allow the real-time capture of the dense deforming geometry as well as unprecedented 3D accuracy for pose and surface deformations. At the technical core, this work introduces novel GPU-based and data-parallel optimization strategies in conjunction with other algorithmic design choices that are all geared towards real-time performance at high accuracy. Moreover, this thesis presents a new weakly supervised multi-view training strategy combined with a fully differentiable character representation that shows superior 3D accuracy. However, there is more to human-related Computer Vision than only the analysis of people in images. It is equally important to synthesize new images of humans in unseen poses and from camera viewpoints that have not been observed in the real world. Such tools are essential for the movie industry because they, for example, allow the synthesis of photo-realistic virtual worlds with real-looking humans or of content that is too dangerous for actors to perform on set. Video conferencing and telepresence applications can also benefit from photo-real 3D characters, as they can enhance the immersive experience of these applications. The traditional Computer Graphics pipeline for rendering photo-realistic images involves many tedious and time-consuming steps that require expert knowledge and are far from real time: character rigging and skinning, the modeling of surface appearance properties, and physically based ray tracing. Recent learning-based methods attempt to simplify the traditional rendering pipeline and instead learn the rendering function from data, resulting in methods that are more accessible to non-experts. However, most of them model the synthesis task entirely in image space such that 3D consistency cannot be achieved, and/or they fail to model motion- and view-dependent appearance effects. To this end, this thesis presents a method and ongoing work on character synthesis, which allow the synthesis of controllable photo-real characters that achieve motion- and view-dependent appearance effects as well as 3D consistency and which run in real time.
This is technically achieved by a novel coarse-to-fine geometric character representation for efficient synthesis, which can be supervised solely on multi-view imagery. Furthermore, this work shows how such a geometric representation can be combined with an implicit surface representation to boost synthesis and geometric quality. ERC Consolidator Grant 4DRepL
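
    To give a flavour of the data-parallel, coarse-to-fine optimization style referred to above, the toy sketch below fits a template point set to a target in two stages: a single global translation first, then per-vertex offsets updated simultaneously with a vectorised gradient step, the way a GPU solver would treat vertices as independent work items. Everything here (the energy, the two-stage schedule, the synthetic data) is an illustrative assumption, not the thesis's actual solver.

    import numpy as np

    def coarse_to_fine_fit(template, target, iters=50, lr=0.5):
        # Coarse stage: one global translation explains the bulk motion.
        translation = (target - template).mean(axis=0)
        current = template + translation
        # Fine stage: per-vertex offsets, all updated at once (data-parallel).
        offsets = np.zeros_like(template)
        for _ in range(iters):
            residual = current + offsets - target   # (N, 3) residuals
            offsets -= lr * 2.0 * residual          # gradient of sum ||r||^2
        return current + offsets

    rng = np.random.default_rng(0)
    template = rng.normal(size=(1000, 3))
    target = template + np.array([0.1, 0.0, 0.3]) + 0.01 * rng.normal(size=(1000, 3))
    fitted = coarse_to_fine_fit(template, target)
    print(np.abs(fitted - target).max())            # close to zero after fitting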

    Virtual prototyping with surface reconstruction and freeform geometric modeling using level-set method

    More and more products with complex geometries are being designed and manufactured using computer-aided design (CAD) and rapid prototyping (RP) technologies. The freeform surface is a geometric feature widely used in modern products such as car bodies, airfoils and turbine blades, as well as in aesthetic artifacts. How to efficiently design and generate digital prototypes with freeform surfaces is therefore an important issue in CAD. This work presents the development of a Virtual Sculpting system and addresses the issues of surface reconstruction from dexel data structures and of freeform geometric modeling from a distance field structure using the level-set method. Our virtual sculpting method is based on the metaphor of carving a solid block into a 3D freeform object with a 3D haptic input device integrated with computer visualization. The dissertation presents the results of this study and consists primarily of four papers.
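
    The carving metaphor can be made concrete with a small distance-field sketch: the stock block is stored as a signed-distance grid (negative inside the material), and each stroke of a spherical tool is a CSG subtraction, max(phi, -tool), on that grid. This is a generic level-set/distance-field illustration under assumed grid sizes and tool shapes, not the Virtual Sculpting system's dexel-based implementation.

    import numpy as np

    # Signed-distance grid of the stock block: negative inside the material.
    n = 64
    xs = np.linspace(-1.0, 1.0, n)
    X, Y, Z = np.meshgrid(xs, xs, xs, indexing="ij")
    block = np.maximum.reduce([np.abs(X), np.abs(Y), np.abs(Z)]) - 0.8  # a cube

    def carve(phi, centre, radius):
        # CSG subtraction on signed distances: material minus tool
        # is max(phi, -tool).
        tool = np.sqrt((X - centre[0]) ** 2 + (Y - centre[1]) ** 2
                       + (Z - centre[2]) ** 2) - radius
        return np.maximum(phi, -tool)

    # Drag a spherical "haptic" tool across the top face of the block.
    for cx in np.linspace(-0.5, 0.5, 5):
        block = carve(block, (cx, 0.0, 0.8), 0.3)
    print((block < 0).sum(), "voxels of material remain")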

    Data-driven modelling of biological multi-scale processes

    Biological processes involve a variety of spatial and temporal scales. A holistic understanding of many biological processes therefore requires multi-scale models which capture the relevant properties on all of these scales. In this manuscript we review mathematical modelling approaches used to describe the individual spatial scales and how they are integrated into holistic models. We discuss the relation between spatial and temporal scales and its implications for multi-scale modelling. Based on this overview of state-of-the-art modelling approaches, we formulate key challenges in the mathematical and computational modelling of biological multi-scale and multi-physics processes. In particular, we consider the availability of analysis tools for multi-scale models and model-based multi-scale data integration. We provide a compact review of methods for model-based data integration and model-based hypothesis testing. Furthermore, novel approaches and recent trends are discussed, including computation time reduction using reduced-order and surrogate models, which contribute to the solution of inference problems. We conclude the manuscript by providing a few ideas for the development of tailored multi-scale inference methods. Comment: This manuscript will appear in the Journal of Coupled Systems and Multiscale Dynamics (American Scientific Publishers).
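
    One of the computation-time-reduction ideas mentioned above, surrogate modelling for inference, can be sketched as follows: run the expensive model a handful of times, fit a cheap approximation to its input-output behaviour, and use the approximation inside the parameter-estimation loop. The model, the polynomial surrogate and the observed value below are purely illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def expensive_model(k):
        # Stand-in for a costly multi-scale simulation: the integral of a
        # decaying process exp(-k*t) over t in [0, 1], done numerically.
        t = np.linspace(0.0, 1.0, 10000)
        return np.sum(np.exp(-k * t)) * (t[1] - t[0])

    # A handful of full model runs are enough to fit a cheap surrogate.
    k_train = np.linspace(0.1, 5.0, 8)
    y_train = np.array([expensive_model(k) for k in k_train])
    surrogate = np.polynomial.Polynomial.fit(k_train, y_train, deg=4)

    # The inference loop queries only the surrogate, not the full model.
    y_observed = 0.45
    fit = minimize_scalar(lambda k: (surrogate(k) - y_observed) ** 2,
                          bounds=(0.1, 5.0), method="bounded")
    print("estimated k:", fit.x, "full-model check:", expensive_model(fit.x))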

    Analysis of Venous Blood Flow and Deformation in the Calf under External Compression

    Deep vein thrombosis (DVT) is a common post-operative complication and a serious threat to the patient's general recovery. In recent years, there has been increasing awareness of the risk of DVT in healthy individuals after prolonged immobility, such as during long-haul flights or long periods of sitting at a computer. Mechanical methods of DVT prophylaxis, such as compression stockings, have gained widespread acceptance, but the haemodynamic mechanism of their action is still not well understood. In this study, computational modelling approaches based on magnetic resonance (MR) images are used to (i) predict the deformation of the calf and its deep veins under external compression, (ii) determine blood flow and wall shear stress in the deep veins of the calf, and (iii) quantify the effect of external compression on flow and wall shear stress in the deep veins. As a first step, MR images of the calf obtained with and without external compression were analysed, which indicated different levels of compressibility for different calf muscle compartments. A 2D finite element model (FEM) with specifically tailored boundary conditions for the different muscle components was developed to simulate the deformation of the calf under compression. The calf tissues were described by a linear elastic model. The simulation results showed good qualitative agreement with the measurements in terms of deep vein deformation, but the area reduction predicted by the FEM was much larger than that obtained from the MR images. In an attempt to improve the 2D FEM, a hyperelastic material model was employed and a finite-element-based non-rigid registration algorithm was developed to calculate the bulk modulus of the calf tissues. Using the subject-specific bulk modulus derived with this method together with the hyperelastic material model, the numerical results showed better quantitative agreement with the MR-measured deformations of the deep veins and calf tissues. In order to understand the effect of external compression on flow in the deep veins, MR imaging and real-time flow mapping were performed on 10 healthy volunteers before and after compression. Computational fluid dynamics was then employed to calculate the haemodynamic wall shear stress (WSS), based on the measured changes in vessel geometry and flow waveforms. The overall results indicated that application of the compression stocking led to a reduction in both blood flow rate and cross-sectional area of the peroneal veins in the calf, which resulted in an increase in WSS, but the individual effects were highly variable. Finally, a 3D fluid-structure interaction (FSI) model was developed for a segment of the calf with realistic geometry for the calf muscle and bones but idealised geometry for the deep vein. The hyperelastic material properties evaluated previously were employed to describe the behaviour of the solid domain. Some predictive ability of the FSI model was demonstrated, but further improvement and validation are still needed.
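
    A back-of-the-envelope version of the wall shear stress calculation referred to above uses the Poiseuille relation WSS = 4*mu*Q/(pi*R^3) for fully developed flow in a circular vessel, which makes explicit why a reduction in lumen area can raise WSS even when the flow rate also falls. The viscosity, flow rates and areas below are illustrative assumptions, not the study's measurements, and the formula is a strong simplification of the CFD analysis described in the abstract.

    import numpy as np

    def poiseuille_wss(flow_rate_ml_s, area_mm2, mu=3.5e-3):
        # WSS = 4 * mu * Q / (pi * R^3) for Poiseuille flow in a circular
        # vessel; mu is an assumed blood viscosity in Pa.s, result in Pa.
        Q = flow_rate_ml_s * 1e-6                 # ml/s  -> m^3/s
        R = np.sqrt(area_mm2 * 1e-6 / np.pi)      # mm^2  -> radius in m
        return 4.0 * mu * Q / (np.pi * R ** 3)

    # Illustrative numbers only: compression reduces both flow rate and
    # lumen area, but WSS scales with 1/R^3, so the net effect is a rise.
    print("no stocking: %.2f Pa" % poiseuille_wss(1.0, 30.0))
    print("stocking:    %.2f Pa" % poiseuille_wss(0.8, 20.0))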