
    Extension of the Finite Integration Technique including dynamic mesh refinement and its application to self-consistent beam dynamics simulations

    An extension of the framework of the Finite Integration Technique (FIT) including dynamic and adaptive mesh refinement is presented. After recalling the standard formulation of the FIT, the proposed mesh adaptation procedure is described. Besides the linear interpolation approach, a novel interpolation technique based on specialized spline functions for approximating the discrete electromagnetic field solution during mesh adaptation is introduced. The standard FIT on a fixed mesh and the new adaptive approach are applied to a simulation test case with a known analytical solution. The numerical accuracy of the two methods is shown to be comparable; the dynamic mesh approach is, however, much more efficient. This is also demonstrated for the full-scale modeling of the complete RF gun at the Photo Injector Test Facility DESY Zeuthen (PITZ) on a single computer. Results of a detailed design study addressing the effects of individual components of the gun on the beam emittance using a fully self-consistent approach are presented. Comment: 33 pages, 14 figures, 4 tables
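The linear interpolation step used during mesh adaptation can be illustrated with a minimal 1D sketch. This is an illustrative stand-in, not the paper's FIT implementation: the grids, the sine field and the `np.interp` transfer are all assumptions.

```python
import numpy as np

# Coarse 1D grid carrying a discrete field (a hypothetical stand-in for a FIT state vector)
coarse_x = np.linspace(0.0, 1.0, 11)
field_coarse = np.sin(2.0 * np.pi * coarse_x)

# Adapted (refined) grid: here a uniform refinement for simplicity;
# a real dynamic-mesh step would refine only flagged cells
fine_x = np.linspace(0.0, 1.0, 21)

# Linear interpolation transfers the discrete solution onto the new mesh;
# the transfer is exact at the retained coarse nodes
field_fine = np.interp(fine_x, coarse_x, field_coarse)
```

A spline-based transfer, as proposed in the paper, would replace the piecewise-linear `np.interp` with a higher-order interpolant.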

    Error Estimation and Adaptive Refinement of Finite Element Thin Plate Spline

    The thin plate spline smoother is a data fitting and smoothing technique that captures important patterns of potentially noisy data. However, it is computationally expensive for large data sets. The finite element thin plate spline smoother (TPSFEM) combines the thin plate spline smoother and finite element surface fitting to efficiently interpolate large data sets. When the TPSFEM uses uniform finite element grids, it may require a fine grid to achieve the desired accuracy. Adaptive refinement uses error indicators to identify sensitive regions and adapts the precision of the solution dynamically, which reduces the computational cost of achieving the required accuracy. Traditional error indicators were developed for the finite element method to approximate partial differential equations and may not be applicable to the TPSFEM. We examined techniques that may indicate errors for the TPSFEM and adapted four traditional error indicators that use different information to produce efficient adaptive grids. The iterative adaptive refinement process has also been adjusted to handle additional complexities caused by the TPSFEM. The four error indicators presented in this thesis are the auxiliary problem error indicator, the recovery-based error indicator, the norm-based error indicator and the residual-based error indicator. The auxiliary problem error indicator approximates the error by solving auxiliary problems to evaluate approximation quality. The recovery-based error indicator calculates the error by post-processing discontinuous gradients of the TPSFEM. The norm-based error indicator uses an error bound on the interpolation error to indicate large errors. The residual-based error indicator computes interior element residuals and jumps of gradients across elements to estimate the energy norm of the error.
Numerical experiments were conducted to evaluate the error indicators' performance in producing efficient adaptive grids, which is measured by the error versus the number of nodes in the grid. A set of one- and two-dimensional model problems with various features was chosen to examine the effectiveness of the error indicators. Unlike in the finite element method, error indicators for the TPSFEM may also be affected by noise, data distribution patterns, data sizes and boundary conditions, which are assessed in the experiments. It is found that adaptive grids are significantly more efficient than uniform grids for two-dimensional model problems with difficulties like peaks and singularities. While the TPSFEM may not recover the original solution in the presence of noise or scarce data, error indicators still produce more efficient grids. We also learned that the difference is less obvious when the data has mostly smooth or oscillatory surfaces. Some error indicators that use data may be affected by data distribution patterns and boundary conditions, but the others are robust and produce stable results. Our error indicators also successfully identify sensitive regions for one-dimensional data sets. Lastly, when errors of the TPSFEM cannot be further reduced due to factors like noise, new stopping criteria terminate the iterative process appropriately.
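As a rough illustration of how a gradient-based indicator can drive adaptive refinement in 1D, consider the following sketch. The jump-of-gradients scoring and the bisection rule here are simplified assumptions, not the thesis's TPSFEM indicators; a peak in the data should attract refinement.

```python
import numpy as np

def gradient_jump_indicator(x_nodes, u):
    """Recovery-style indicator for a 1D piecewise-linear fit:
    per-element score from jumps of the discrete gradient across interior nodes."""
    h = np.diff(x_nodes)
    grad = np.diff(u) / h                 # constant gradient on each element
    jumps = np.abs(np.diff(grad))         # gradient jump at each interior node
    eta = np.zeros_like(h)
    eta[:-1] += 0.5 * jumps               # share each jump between both neighbours
    eta[1:] += 0.5 * jumps
    return eta

def refine(x_nodes, eta, frac=0.3):
    """Bisect the fraction of elements with the largest indicators."""
    k = max(1, int(frac * len(eta)))
    worst = np.argsort(eta)[-k:]
    mids = 0.5 * (x_nodes[worst] + x_nodes[worst + 1])
    return np.sort(np.concatenate([x_nodes, mids]))

# Toy example: a sharp peak near x = 0.5 should attract the new nodes
x = np.linspace(0.0, 1.0, 9)
u = np.exp(-200.0 * (x - 0.5) ** 2)
x_new = refine(x, gradient_jump_indicator(x, u))
```

Iterating fit-indicate-refine until a stopping criterion fires gives the adaptive loop described in the abstract.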

    Smooth representation of thin shells and volume structures for isogeometric analysis

    The purpose of this study is to develop self-contained methods for obtaining smooth meshes which are compatible with isogeometric analysis (IGA). The study contains three main parts. We start by developing a better understanding of shapes and splines through the study of an image-related problem. Then we proceed towards obtaining smooth volumetric meshes of the given voxel-based images. Finally, we treat the smoothness issue on multi-patch domains with C1 coupling. The following are the highlights of each part. First, we present a B-spline convolution method for the boundary representation of voxel-based images. We adopt a filtering technique to compute the B-spline coefficients and gradients of the images effectively. We then use the B-spline convolution to develop a non-rigid image registration method. The proposed method is in some sense “isoparametric”, in that all the computation is done within the B-spline framework. In particular, updating the images by B-spline composition promotes a smooth transformation map between the images. We show possible medical applications of our method by applying it to the registration of brain images. Secondly, we develop a self-contained volumetric parametrization method based on the B-spline boundary representation. We aim to convert given voxel-based data to a matching C1 representation with hierarchical cubic splines. The concept of the osculating circle is employed to enhance the geometric approximation; this is done with a single template and linear transformations (scaling, translations, and rotations), without the need to solve an optimization problem. Moreover, we use Laplacian smoothing and refinement techniques to avoid irregular meshes and to improve mesh quality. We show with several examples that the method is capable of handling complex 2D and 3D configurations. In particular, we parametrize the 3D Stanford bunny, which contains irregular shapes and voids.
Finally, we propose a Bézier ordinates approach and a splines approach for C1 coupling. In the first approach, the new basis functions are defined in terms of the Bézier Bernstein polynomials. In the second approach, the new basis is defined as a linear combination of C0 basis functions. The methods are not limited to planar or bilinear mappings. They allow the modeling of solutions to fourth-order partial differential equations (PDEs) on complex geometric domains, provided that the given patches are G1 continuous. Both methods have their advantages: the Bézier approach offers more degrees of freedom, while the spline approach is more computationally efficient. In addition, we propose partial degree elevation to overcome the C1-locking issue caused by over-constraining the solution space. We demonstrate the potential of the resulting C1 basis functions for applications in IGA that involve fourth-order PDEs, such as those appearing in Kirchhoff-Love shell models, Cahn-Hilliard phase-field applications, and biharmonic problems.
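For readers unfamiliar with the Bernstein basis mentioned above, a minimal sketch of a Bézier curve evaluated as a Bernstein combination of control points may help. This illustrates only the basis itself, not the thesis's C1 coupling construction; the control points are made up.

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein polynomial B_{i,n}(t) on [0, 1]."""
    return comb(n, i) * t**i * (1.0 - t)**(n - i)

def bezier_point(ctrl, t):
    """Evaluate a degree-n Bézier curve as a Bernstein combination of control points."""
    n = len(ctrl) - 1
    return sum(bernstein(n, i, t) * np.asarray(p) for i, p in enumerate(ctrl))

# Cubic example: endpoints are interpolated, interior points shape the curve
ctrl = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]
mid = bezier_point(ctrl, 0.5)   # array([0.5, 0.75])
```

A C1 coupling across patches imposes linear constraints on such coefficients so that first derivatives match along shared interfaces.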

    Regularized Surface and Point Landmarks Based Efficient Non-Rigid Medical Image Registration

    Medical image registration is one of the fundamental tasks in medical image processing. It has various applications in the fields of image-guided surgery (IGS) and computer-assisted diagnosis (CAD). A set of non-linear methods has already been developed for inter-subject and intra-subject 3D medical image registration. However, registration that is efficient in terms of both accuracy and speed is one of the most demanded features of today's surgical navigation (SN) systems. This paper is the result of a series of experiments which utilize the Fast Radial Basis Function (RBF) technique to register one or more medical images non-rigidly. Initially, a set of curves is extracted using a combined watershed and active contours algorithm, then tiled and converted to a regular surface using a global parameterization algorithm. It is shown that the registration accuracy improves when a higher number of salient features (i.e., anatomical point landmarks and surfaces) is used, with no impact on the speed of the algorithm. The results show that the target registration error is less than 2 mm, with sub-second performance on intra-subject registration of real MR image datasets. It is observed that the Fast RBF algorithm is relatively insensitive to an increasing number of point landmarks, as compared with competing feature-based algorithms.
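A plain (dense) thin-plate-spline RBF warp, of the kind that Fast RBF techniques accelerate, can be sketched as follows. The landmark coordinates and the direct linear solve are illustrative assumptions and do not reflect the paper's fast evaluation scheme.

```python
import numpy as np

def tps_kernel(r):
    # Thin plate spline radial basis phi(r) = r^2 log(r), with phi(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0, r**2 * np.log(r), 0.0)

def fit_rbf_warp(src, dst):
    """Solve for TPS weights plus an affine part mapping source landmarks to targets."""
    n = len(src)
    K = tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])          # affine terms [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    rhs = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(A, rhs)

def apply_warp(params, src, pts):
    K = tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return K @ params[: len(src)] + P @ params[len(src):]

# Toy landmarks: four fixed corners plus one displaced interior point
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([[0., 0.], [0., 0.], [0., 0.], [0., 0.], [0.1, 0.1]])
params = fit_rbf_warp(src, dst)
warped = apply_warp(params, src, src)   # landmarks map (almost) exactly to targets
```

The dense solve scales cubically in the number of landmarks; the paper's observation that more features do not slow the method down is what the fast variant provides.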

    DCTM: Discrete-Continuous Transformation Matching for Semantic Flow

    Techniques for dense semantic correspondence have provided limited ability to deal with the geometric variations that commonly exist between semantically similar images. While variations due to scale and rotation have been examined, practical solutions are lacking for more complex deformations, such as affine transformations, because of the tremendous size of the associated solution space. To address this problem, we present a discrete-continuous transformation matching (DCTM) framework in which dense affine transformation fields are inferred through a discrete label optimization whose labels are iteratively updated via continuous regularization. In this way, our approach draws solutions from the continuous space of affine transformations in a manner that can be computed efficiently through constant-time edge-aware filtering and a proposed affine-varying CNN-based descriptor. Experimental results show that this model outperforms state-of-the-art methods for dense semantic correspondence on various benchmarks.

    Physics-Based Learning Models for Ship Hydrodynamics

    We present the concept of physics-based learning models (PBLM) and their relevance and application to the field of ship hydrodynamics. The utility of physics-based learning is motivated by contrasting generic learning models for regression predictions, which do not presume any knowledge of the system other than the provided training data, with methods such as semi-empirical models, which incorporate physical insights along with data fitting. PBLM provides a framework wherein intermediate models, which capture (some) physical aspects of the problem, are incorporated into modern generic learning tools to substantially improve the predictions of the latter, minimizing the reliance on costly experimental measurements or high-resolution, high-fidelity numerical solutions. To illustrate the versatility and efficacy of PBLM, we present three wave-ship interaction problems: 1) at-speed waterline profiles; 2) ship motions in head seas; and 3) three-dimensional breaking bow waves. PBLM is shown to be robust and to produce error rates at or below the uncertainty in the generated data, at a small fraction of the expense of high-resolution numerical predictions. United States Office of Naval Research.
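The idea of layering a data-driven correction on top of a cheap physics prior can be sketched in a few lines. The quadratic-drag prior, the synthetic measurements and the linear residual model below are all invented for illustration and are not the paper's ship-hydrodynamics models.

```python
# Hypothetical PBLM-style sketch: a crude physics prior plus a learned
# correction fitted to the residual between measurements and the prior.

def physics_drag(v, c=0.5):
    # Low-fidelity prior: drag proportional to speed squared
    return c * v * v

# Synthetic "measurements": the true response has an extra linear (viscous) term
speeds = [1.0, 2.0, 3.0, 4.0, 5.0]
measured = [0.5 * v * v + 0.2 * v for v in speeds]

# Fit the residual (measured minus prior) with a one-parameter least-squares model
res = [m - physics_drag(v) for v, m in zip(speeds, measured)]
slope = sum(v * r for v, r in zip(speeds, res)) / sum(v * v for v in speeds)

def pblm_predict(v):
    # Prediction = physics prior + learned correction
    return physics_drag(v) + slope * v

print(round(pblm_predict(6.0), 3))  # close to the true value 0.5*36 + 0.2*6 = 19.2
```

Because the learner only has to capture the residual, far less data is needed than for fitting the full response from scratch, which is the central economy PBLM exploits.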

    Interactive Medical Image Registration With Multigrid Methods and Bounded Biharmonic Functions

    Interactive image registration is important in some medical applications, since automatic image registration is often slow and sometimes error-prone. We consider interactive registration methods that incorporate user-specified local transforms around control handles. The deformation between handles is interpolated by smooth functions that minimize variational energies. Besides smoothness, we expect the impact of a control handle to be local. We therefore choose bounded biharmonic weight functions to blend the local transforms, a cutting-edge technique in computer graphics. However, medical images are usually huge, and this technique is too slow to be practical for interactive image registration. To expedite the process, we use a multigrid active set method to solve for the bounded biharmonic functions (BBF). The multigrid approach serves two purposes: refining the active set from coarse to fine resolutions, and solving the linear systems constrained by the working active sets. We've implemented both the weighted Jacobi method and successive over-relaxation (SOR) in the multigrid solver. Since the problem has box constraints, we cannot directly use the regular updates of the Jacobi and SOR methods. Instead, we choose a descent step size and clamp the update to satisfy the box constraints. We explore ways to choose step sizes and discuss their relation to the spectral radii of the iteration matrices. The relaxation factors, which are closely related to the step sizes, are estimated by analyzing the eigenvalues of the bilaplacian matrices. We give a proof of the termination of our algorithm and provide some theoretical error bounds. Another, minor problem we address is registering large images on a GPU with limited memory. We've implemented an image registration algorithm with virtual image slices on the GPU: an image slice is treated similarly to a page in virtual memory, and we execute a wavefront of subtasks together to reduce the number of data transfers.
Our main contribution is a fast multigrid method for interactive medical image registration that uses bounded biharmonic functions to blend local transforms. We present a novel multigrid approach to refine the active set quickly and use clamped updates based on weighted Jacobi and SOR. This multigrid method can also be used to efficiently solve other quadratic programs that have active sets distributed over continuous regions.
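The clamped-update idea described above (take a damped Jacobi step, then project back into the box constraints) can be sketched on a toy box-constrained quadratic program. The matrix, bounds and relaxation factor are illustrative assumptions, and the multigrid hierarchy and SOR variant are omitted.

```python
import numpy as np

def projected_jacobi(A, b, lo, hi, omega=0.8, iters=500):
    """Weighted Jacobi for min 0.5 x^T A x - b^T x subject to lo <= x <= hi.
    Each sweep takes a damped Jacobi step and clamps the result into the box,
    in the spirit of the clamped updates above, but without the multigrid part."""
    x = np.clip(np.zeros_like(b), lo, hi)
    d = np.diag(A)
    for _ in range(iters):
        step = (b - A @ x) / d                 # Jacobi direction scaled by D^{-1}
        x = np.clip(x + omega * step, lo, hi)  # clamp: enforce the box constraints
    return x

# Toy SPD system: the unconstrained minimizer violates the box, so the first
# component ends up clamped at its upper bound (it joins the active set)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([10.0, 10.0])
lo, hi = np.zeros(2), np.array([1.5, 5.0])
x = projected_jacobi(A, b, lo, hi)
```

With x[0] fixed at its bound 1.5, the remaining optimality condition x[0] + 3*x[1] = 10 gives x[1] = 8.5/3, which the iteration recovers; the relaxation factor must be small enough relative to the eigenvalues of D^{-1}A for convergence, mirroring the spectral-radius analysis mentioned above.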