
    A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TRIANGULATED SURFACES

    This paper presents an efficient, fine-grained parallel algorithm for solving the Eikonal equation on triangular meshes. The Eikonal equation, and the broader class of Hamilton-Jacobi equations to which it belongs, have a wide range of applications from geometric optics and seismology to biological modeling and analysis of geometry and images. The ability to solve such equations accurately and efficiently provides new capabilities for exploring and visualizing parameter spaces and for solving inverse problems that rely on such equations in the forward model. Efficient solvers on state-of-the-art, parallel architectures require new algorithms that are not, in many cases, optimal, but are better suited to synchronous updates of the solution. In previous work [W. K. Jeong and R. T. Whitaker, SIAM J. Sci. Comput., 30 (2008), pp. 2512-2534], the authors proposed the fast iterative method (FIM) to efficiently solve the Eikonal equation on regular grids. In this paper we extend the fast iterative method to solve Eikonal equations efficiently on triangulated domains on the CPU and on parallel architectures, including graphics processors. We propose a new local update scheme that provides solutions of first-order accuracy for both architectures. We also propose a novel triangle-based update scheme and its corresponding data structure for efficient irregular data mapping to parallel single-instruction multiple-data (SIMD) processors. We provide detailed descriptions of the implementations on a single CPU, a multicore CPU with shared memory, and SIMD architectures with comparative results against state-of-the-art Eikonal solvers.
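
    The heart of FIM is an unordered active list of nodes that are updated repeatedly, and possibly concurrently, until their values converge; this is what makes the method amenable to SIMD hardware, at the cost of revisiting some nodes. Below is a minimal single-threaded sketch on a regular 2D grid; the triangulated-surface variant of the paper replaces the axis-aligned local solver with a per-triangle update. The function names and the exact requeueing policy here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_solve(u, i, j, h, f):
    """First-order upwind update at interior node (i, j): solve
    (u-a)^2 + (u-b)^2 = (h/f)^2, with a, b the smaller neighbor
    value along each axis (Rouy-Tourin discretization)."""
    a = min(u[i - 1, j], u[i + 1, j])
    b = min(u[i, j - 1], u[i, j + 1])
    if abs(a - b) >= h / f:                 # only the smaller axis is causal
        return min(a, b) + h / f
    return 0.5 * (a + b + np.sqrt(2 * (h / f) ** 2 - (a - b) ** 2))

def fim(u, F, h=1.0, tol=1e-8):
    """Fast iterative method on a regular 2D grid. u holds 0 at source
    nodes and np.inf elsewhere, including a one-cell boundary ring;
    F is the (positive) speed map."""
    inside = lambda i, j: 0 < i < u.shape[0] - 1 and 0 < j < u.shape[1] - 1
    nbrs = lambda i, j: [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    sources = zip(*np.nonzero(u == 0))
    active = {n for s in sources for n in nbrs(*s) if inside(*n)}
    while active:
        requeue = set()
        for i, j in active:
            new = local_solve(u, i, j, h, F[i, j])
            if u[i, j] - new > tol:         # value dropped: propagate further
                u[i, j] = new
                requeue.update(n for n in nbrs(i, j) if inside(*n))
        active = requeue
    return u
```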

    Doctor of Philosophy

    Partial differential equations (PDEs) are widely used in science and engineering to model phenomena such as sound, heat, and electrostatics. In many practical science and engineering applications, the solutions of PDEs require the tessellation of computational domains into unstructured meshes and entail computationally expensive, time-consuming processes. Efficient and fast PDE solvers on unstructured meshes are therefore important in these applications. Relative to CPUs, the faster growth in speed and the greater power efficiency of SIMD streaming processors, such as GPUs, have given them an increasingly important role in high-performance computing. By combining suitable parallel algorithms with these streaming processors, we can develop very efficient numerical PDE solvers. The contributions of this dissertation are twofold: the proposal of two general strategies for designing efficient PDE solvers on GPUs, and the application of these strategies to solve different types of PDEs. Specifically, this dissertation consists of four parts. First, we describe the general strategies: the domain decomposition strategy and the hybrid gathering strategy. Next, we introduce a parallel algorithm for efficiently solving the eikonal equation on fully unstructured meshes. Third, we present the algorithms and data structures necessary to move the entire FEM pipeline to the GPU. Fourth, we propose a parallel algorithm for solving the level-set equation on fully unstructured 2D or 3D meshes or manifolds. This algorithm combines a narrow-band scheme with domain decomposition for efficient level-set equation solving.
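
    As a small illustration of the narrow-band scheme mentioned for the level-set solver, the sketch below updates only the cells within a thin band around the zero level set on a structured grid; the dissertation's solvers operate on fully unstructured meshes, so the grid, the Godunov upwind scheme, and the parameter names here are illustrative assumptions.

```python
import numpy as np

def narrowband_step(phi, speed, dt, h, bandwidth):
    """One explicit time step of phi_t + speed * |grad phi| = 0,
    restricted to the narrow band |phi| < bandwidth (speed >= 0)."""
    band = np.argwhere(np.abs(phi) < bandwidth)
    new = phi.copy()
    for i, j in band:
        if 0 < i < phi.shape[0] - 1 and 0 < j < phi.shape[1] - 1:
            # Godunov upwind approximation of |grad phi| for speed >= 0
            dxm = (phi[i, j] - phi[i - 1, j]) / h
            dxp = (phi[i + 1, j] - phi[i, j]) / h
            dym = (phi[i, j] - phi[i, j - 1]) / h
            dyp = (phi[i, j + 1] - phi[i, j]) / h
            grad = np.sqrt(max(dxm, 0.0) ** 2 + min(dxp, 0.0) ** 2
                           + max(dym, 0.0) ** 2 + min(dyp, 0.0) ** 2)
            new[i, j] = phi[i, j] - dt * speed[i, j] * grad
    return new
```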

    Laplacian regularized eikonal equation with Soner boundary condition on polyhedral meshes

    In this paper, we propose a numerical algorithm based on a cell-centered finite volume method to compute the distance from given objects on a three-dimensional computational domain discretized by polyhedral cells. Inspired by the vanishing viscosity method, a Laplacian regularized eikonal equation is solved, and the Soner boundary condition is applied to the boundary of the domain to avoid a non-viscosity solution. As the regularization parameter, which depends on a characteristic length of the discretized domain, is reduced, a corresponding numerical solution is computed. Convergence to the viscosity solution is verified numerically as the characteristic length, and with it the regularization parameter, becomes smaller. From the numerical experiments, a second experimental order of convergence in the L^1 norm error is confirmed for smooth solutions. Compared to solving a time-dependent form of the eikonal equation, the Laplacian regularized eikonal equation has the advantage of dramatically reducing computational cost when a larger number of cells is used or when the region of interest is far from the given objects. Moreover, parallel computing using domain decomposition with a 1-ring face neighborhood structure can be implemented straightforwardly in a standard cell-centered finite volume code.
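
    For reference, the regularized problem described above can be written out explicitly. The following formulation is a sketch based on the abstract and the standard vanishing-viscosity literature; the exact scaling of the regularization parameter with the cell size is an assumption here.

```latex
% Laplacian regularized eikonal equation for the distance u_eps to objects Gamma;
% the parameter eps > 0 is tied to a characteristic cell length of the mesh.
\[
  |\nabla u_\varepsilon| - \varepsilon \, \Delta u_\varepsilon = 1
  \quad \text{in } \Omega, \qquad
  u_\varepsilon = 0 \quad \text{on } \Gamma,
\]
% Soner (state-constraint) boundary condition on the outer boundary,
% which rules out non-viscosity solutions flowing in through the boundary:
\[
  \nabla u_\varepsilon \cdot \mathbf{n} \; \ge \; 0
  \quad \text{on } \partial \Omega .
\]
```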

    Fast and accurate front propagation for simulation of geological folds

    Front propagations described by static Hamilton-Jacobi equations can be used to simulate folded geological structures. Simulations of geological folds are a key ingredient in the Compound Earth Simulator (CES), an industrial software tool used in the exploration of oil and gas. In this thesis, local approximation techniques are investigated with respect to accuracy and efficiency. Several novel algorithms are also introduced, some of which are accelerated by parallel implementations on both multicore CPUs and Graphics Processing Units. These algorithms are able to simulate folds in a fraction of the time needed by the CES industry code, while retaining the same level of accuracy. Complicated tasks that previously took several minutes to compute can now be performed in a matter of seconds, significantly improving the CES user experience.
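
    The local approximation techniques studied in this setting reduce to a per-triangle update: given arrival times at two vertices of a triangle, the time at the third vertex is the earliest arrival of a front entering through the opposite edge. The sketch below shows a minimal isotropic version that minimizes over sampled entry points on the edge; the function name, the sampling-based minimization, and the speed parameter are illustrative assumptions rather than the thesis' algorithms.

```python
import numpy as np

def triangle_update(A, B, C, ua, ub, f=1.0, n=64):
    """First-order local solver: the arrival time at vertex C, given
    times ua, ub at vertices A, B, is the minimum over entry points
    P(t) on edge AB of the interpolated time plus the travel time
    from P(t) to C at speed f."""
    t = np.linspace(0.0, 1.0, n)                       # sample edge AB
    P = (1 - t)[:, None] * A[None, :] + t[:, None] * B[None, :]
    u_entry = (1 - t) * ua + t * ub                    # interpolated arrivals
    return np.min(u_entry + np.linalg.norm(C - P, axis=1) / f)
```

    A fast marching or fast iterative outer loop then applies this update to every triangle incident on the vertex being relaxed and keeps the minimum.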

    3-D Traveltime Modeling With Application To Seismic Imaging And Tomography

    Fast algorithms exist for performing traveltime modeling, even in three dimensions. These algorithms have the desirable property that their computational time and memory requirements scale linearly with the number of grid points used to represent subsurface velocities in discrete form. While traveltime modeling is typically used to predict first-arrival times, later arrivals can also be simulated through the incorporation of a priori reflector information. For two-dimensional seismic imaging and tomography applications, the traveltime modeling algorithms presented here greatly expedite the solution and can be readily deployed on distributed-memory parallel computers. Three-dimensional applications present a greater challenge, but by coupling an understanding of algorithm complexity with the promise of faster computers having greater quantities of physical memory, one can begin to predict future capabilities.
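
    One standard way to obtain the later (reflected) arrivals from a priori reflector information is to chain two first-arrival computations: solve from the source, then re-seed a second solve at the reflector with the source-to-reflector times as initial values. The sketch below assumes a hypothetical first-arrival solver `first_arrivals(vel, seeds)` returning a traveltime grid; both the name and the signature are assumptions for illustration.

```python
def reflected_traveltimes(first_arrivals, vel, src, reflector_pts):
    """Reflected-arrival field via reflector re-seeding:
    t_refl(x) = min over reflector points r of [t(src->r) + t(r->x)].
    The minimum is taken implicitly by the causal update of the
    second first-arrival solve."""
    t_src = first_arrivals(vel, {tuple(src): 0.0})
    seeds = {tuple(r): t_src[tuple(r)] for r in reflector_pts}
    return first_arrivals(vel, seeds)
```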

    Doctor of Philosophy in Computing

    Statistical shape analysis has emerged as an important tool for the quantitative analysis of anatomy in many medical imaging applications. The correspondence-based approach to evaluating shape variability is a popular method, based on comparing configurations of carefully placed landmarks on each shape. In recent years, methods for automatic placement of landmarks have enhanced the ability of this approach to capture statistical properties of shape populations. However, biomedical shapes continue to present considerable difficulties in automatic correspondence optimization due to their inherent geometric complexity and the need to correlate shape change with underlying biological parameters. This dissertation addresses these technical difficulties and presents improved shape correspondence models. In particular, it builds on the particle-based modeling (PBM) framework described in Joshua Cates' 2010 Ph.D. dissertation. In the PBM framework, correspondences are modeled as a set of dynamic points, or a particle system, positioned automatically on shape surfaces by optimizing the entropy contained in the model, with the idea of balancing model simplicity against the accuracy of the particle system's representation of shapes. This dissertation is a collection of four papers that extend the PBM framework to include shape regression and longitudinal analysis and that add new methods to improve the modeling of complex shapes. It also includes a summary of two applications from the field of orthopaedics. Technical details of the PBM framework are provided in Chapter 2, after which the first topic, the study of shape change over time, is addressed (Chapters 3 and 4). In analyses of normative growth or disease progression, shape regression models allow characterization of the underlying biological process while also facilitating comparison of a sample against a normative model. The first paper introduces a shape regression model into the PBM framework to characterize shape variability due to an underlying biological parameter, and it confirms the statistical significance of this relationship via systematic permutation testing. Simple regression models are, however, not sufficient to leverage the information provided by longitudinal studies. Longitudinal studies collect data at multiple time points for each participant and have the potential to provide a rich picture of the anatomical changes occurring during development, disease progression, or recovery. The second paper presents a linear mixed-effects (LME) shape model in order to fully leverage the high-dimensional, complex features provided by longitudinal data. The parameters of the LME shape model are estimated in a hierarchical manner within the PBM framework. The topic of geometric complexity present in certain biological shapes is addressed next (Chapters 5 and 6). Certain biological shapes are inherently complex and highly variable, inhibiting correspondence-based methods from producing a faithful representation of the average shape. In the PBM framework, the use of Euclidean distances leads to incorrect particle system interactions, while a position-only representation leads to incorrect correspondences around sharp features across shapes. The third paper extends the PBM framework to use efficiently computed geodesic distances and also adds an entropy term based on the surface normal. The fourth paper further replaces the position-only representation with a more robust distance-from-landmark feature in the PBM framework to obtain isometry-invariant correspondences. Finally, the above methods are applied to two applications from the field of orthopaedics. The first application uses correspondences across an ensemble of human femurs to characterize morphological shape differences due to femoroacetabular impingement. The second application investigates the short-bone phenotype apparent in mouse models of multiple osteochondromas. Metaphyseal volume deviations are correlated with deviations in length to quantify the effect of the cancer on the apparent shortening of long bones (femur, tibia-fibula) in mouse models.
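
    For readers unfamiliar with the PBM framework, the per-shape half of the optimization is easy to state in code: particles repel one another under a Gaussian kernel, which raises the sampling entropy and spreads them evenly over the surface, and each particle is projected back onto the surface after every step. The sketch below shows only this sampling term; the full method couples it with an ensemble-entropy term across shapes, and `project_to_surface` is an assumed helper.

```python
import numpy as np

def sampling_entropy_step(X, sigma, lr, project_to_surface):
    """One ascent step on the surface-sampling entropy term of PBM.
    X: (n, d) particle positions on one shape. Each particle moves
    away from its neighbors under Gaussian-weighted repulsion."""
    Xnew = X.copy()
    for i in range(len(X)):
        d = X[i] - X                          # vectors from neighbors to i
        r2 = np.sum(d * d, axis=1)
        w = np.exp(-r2 / (2 * sigma ** 2))    # Gaussian kernel weights
        w[i] = 0.0                            # exclude self-interaction
        # approximate entropy gradient: kernel-weighted mean repulsion
        g = (w[:, None] * d).sum(axis=0) / (sigma ** 2 * max(w.sum(), 1e-12))
        Xnew[i] = project_to_surface(X[i] + lr * g)
    return Xnew
```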

    Novel Methods to Incorporate Physiological Prior Knowledge into the Inverse Problem of Electrocardiography - Application to Localization of Ventricular Excitation Origins

    17 million deaths per year are attributed to cardiovascular disease. Sudden cardiac death occurs in roughly 25% of patients with cardiovascular disease and can be linked to ventricular tachycardia. An important step in the treatment of ventricular tachycardia is the detection of so-called exit points, i.e., the spatial origin of the excitation. Because this process is very time-consuming and can only be performed by skilled cardiologists, there is a need for assistive localization tools, ideally automatic and noninvasive. Electrocardiographic imaging attempts to meet these clinical requirements by reconstructing the electrical activity of the heart from measurements of the potentials on the body surface. The resulting information can be used to detect the origin of excitation. However, current methods for solving the inverse problem suffer from either low accuracy or low robustness, which limits their clinical utility. This work first analyzes the forward problem for two source models: transmembrane voltages and extracellular potentials. The mathematical properties of the relation between the cardiac sources and the body-surface potentials are analyzed systematically, and their influence on the inverse problem is made explicit. This knowledge is then used to solve the inverse problem. Three new methods are introduced for this purpose: a delay-based regularization, a method based on regression of body-surface potentials, and a deep-learning-based localization method. These three methods are evaluated against four established methods in one simulated and two clinical setups. On the simulated dataset and on one of the two clinical datasets, one of the new methods outperformed the conventional approaches, while Tikhonov regularization achieved the best results on the remaining clinical dataset. Potential causes of these results are discussed and related to properties of the forward problem.
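
    Since Tikhonov regularization is the baseline against which the new methods are compared, a minimal sketch of the zeroth-order variant may help. Here A is the forward (lead-field) matrix mapping cardiac sources x to body-surface potentials b; the SVD-based filter form is the standard textbook construction, not something specific to this thesis.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov solution of the linear inverse problem:
    argmin_x ||A x - b||^2 + lam^2 ||x||^2, computed via the SVD
    with filter factors s / (s^2 + lam^2) applied to U^T b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s ** 2 + lam ** 2)       # filtered inverse singular values
    return Vt.T @ (f * (U.T @ b))
```

    In practice lam is chosen by an L-curve or cross-validation criterion; higher-order variants replace the identity penalty with a gradient or Laplacian operator.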

    Vascular Tree Structure: Fast Curvature Regularization and Validation

    This work addresses the challenging problem of accurate vessel structure analysis in high-resolution 3D biomedical images. Typical segmentation methods fail on recent micro-CT data sets resolving near-capillary vessels due to limitations of standard first-order regularization models. While regularization is needed to address noise and partial volume issues in the data, we argue that extraction of thin tubular structures requires higher-order, curvature-based regularization. There are no standard segmentation methods regularizing surface curvature in 3D that could be applied to large 3D volumes. However, we observe that standard measures of vessel structure are more concerned with topology, bifurcation angles, and other parameters that can be addressed directly without segmentation. We propose a novel methodology that reconstructs the tree structure of the vessels using a new centerline curvature regularization technique. Our higher-order regularization model is based on a recent curvature estimation method. We develop a Levenberg-Marquardt optimization scheme and an efficient GPU-based implementation of our algorithm. We also propose a validation mechanism based on synthetic vessel images. Our preliminary results on real ultra-resolution micro-CT volumes are promising.
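
    The optimization backbone described above is a standard Levenberg-Marquardt loop over a stacked residual vector combining data and centerline-curvature terms. The generic sketch below shows the damped normal-equation step and the usual damping schedule; the actual residuals and Jacobian of the paper's curvature model are not reproduced here.

```python
import numpy as np

def levenberg_marquardt(residuals, jacobian, x, lam=1e-3, iters=50, tol=1e-10):
    """Generic LM iteration: solve (J^T J + lam I) dx = -J^T r, then
    shrink lam on success (Gauss-Newton-like) or grow it on failure
    (gradient-descent-like)."""
    cost = np.sum(residuals(x) ** 2)
    for _ in range(iters):
        r, J = residuals(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J + lam * np.eye(len(x)), -J.T @ r)
        new_cost = np.sum(residuals(x + dx) ** 2)
        if new_cost < cost:          # accept step, trust the quadratic model more
            x, cost, lam = x + dx, new_cost, lam * 0.5
        else:                        # reject step, damp more heavily
            lam *= 10.0
        if np.linalg.norm(dx) < tol:
            break
    return x
```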