
    Perfectly Matched Layers in a Divergence Preserving ADI Scheme for Electromagnetics

    For numerical simulations of highly relativistic and transversely accelerated charged particles, including their radiation, fast algorithms are needed. While the radiation in particle accelerators has wavelengths on the order of 100 µm, the computational domain has dimensions roughly 5 orders of magnitude larger, resulting in very large mesh sizes. The particles are confined to only a small area of this domain. To resolve the smallest scales close to the particles, subgrids are envisioned. For reasons of stability, the alternating direction implicit (ADI) scheme by D. N. Smithe et al. (J. Comput. Phys. 228 (2009), pp. 7289-7299) for the Maxwell equations has been adopted. At the boundary of the domain, absorbing boundary conditions have to be employed to prevent reflection of the radiation. In this paper we show how the divergence preserving ADI scheme has to be formulated in perfectly matched layers (PML) and compare the performance in several scenarios. (Comment: 8 pages, 6 figures)
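    As a rough illustration of the absorbing-layer idea referred to above (not the divergence-preserving ADI scheme of the paper), the sketch below runs a standard explicit 1D FDTD update with an impedance-matched, graded-conductivity layer, which is what a PML reduces to in one dimension. Grid sizes, the source, and the conductivity profile are arbitrary choices for the example.

```python
import numpy as np

# 1D FDTD (Yee) update with a graded, impedance-matched lossy layer at the right
# edge. In normalized units (c = eps0 = mu0 = 1) applying the same loss to E and H
# satisfies the matched condition sigma_m/mu0 = sigma_e/eps0, so waves entering the
# layer decay without reflection at the interface. This is only a sketch of the
# PML idea, not the divergence-preserving ADI scheme discussed in the abstract.

nx, npml, nt = 400, 40, 1200
dx, dt = 1.0, 0.5                                   # dt <= dx satisfies the 1D CFL limit

sigma = np.zeros(nx)
sigma[-npml:] = 4.0 * (np.arange(npml) / npml) ** 3  # cubic grading into the layer

ca = (1 - 0.5 * dt * sigma) / (1 + 0.5 * dt * sigma)
cb = dt / dx / (1 + 0.5 * dt * sigma)

E = np.zeros(nx)          # E at integer nodes
H = np.zeros(nx - 1)      # H at half-integer nodes (loss coefficients reused approximately)

for n in range(nt):
    H = ca[:-1] * H - cb[:-1] * (E[1:] - E[:-1])
    E[1:-1] = ca[1:-1] * E[1:-1] - cb[1:-1] * (H[1:] - H[:-1])
    E[50] += np.exp(-((n - 60) / 20.0) ** 2)        # soft Gaussian source

# Field remaining outside the absorbing layer after the pulse should be small.
print("max |E| outside the layer:", np.abs(E[:-npml]).max())
```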

    A Parallel General Purpose Multi-Objective Optimization Framework, with Application to Beam Dynamics

    Particle accelerators are invaluable tools for research in the basic and applied sciences, in fields such as materials science, chemistry, the biosciences, particle physics, nuclear physics and medicine. The design, commissioning, and operation of accelerator facilities is a non-trivial task, due to the large number of control parameters and the complex interplay of several conflicting design goals. We propose to tackle this problem by means of multi-objective optimization algorithms which also facilitate a parallel deployment. In order to compute solutions in a meaningful time frame, a fast and scalable software framework is required. In this paper, we present the implementation of such a general-purpose framework for simulation-based multi-objective optimization methods that allows the automatic investigation of optimal sets of machine parameters. The implementation is based on a master/slave paradigm, employing several masters that govern a set of slaves executing simulations and performing optimization tasks. Using evolutionary algorithms as the optimizer and OPAL as the forward solver, validation experiments and results of multi-objective optimization problems in the domain of beam dynamics are presented. The high-charge beam line at the Argonne Wakefield Accelerator Facility was used as the beam dynamics model. The 3D beam size, transverse momentum, and energy spread were optimized.
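    A minimal sketch of the master/slave evaluation pattern described above, using Python's multiprocessing as a stand-in for the MPI-based framework and a toy two-objective function in place of an OPAL beam-dynamics run. All names and parameters here are illustrative assumptions, not the framework's API.

```python
import numpy as np
from multiprocessing import Pool

# The "master" proposes candidate machine settings, the "slaves" evaluate them with
# a forward solver, and the master keeps the non-dominated (Pareto) set. toy_solver
# is a cheap placeholder for an expensive simulation such as an OPAL run.

def toy_solver(x):
    # Two conflicting objectives (stand-ins for e.g. beam size vs. energy spread).
    f1 = x[0] ** 2 + x[1] ** 2
    f2 = (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2
    return (tuple(x), (f1, f2))

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(evals):
    return [(x, f) for x, f in evals
            if not any(dominates(g, f) for _, g in evals if g != f)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    population = rng.uniform(-1, 2, size=(64, 2))   # random initial designs
    with Pool(processes=4) as pool:                 # the "slaves"
        evals = pool.map(toy_solver, population)    # parallel forward solves
    front = pareto_front(evals)
    print(f"{len(front)} non-dominated designs out of {len(evals)}")
```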

    A Scalable Approach to Network Enabled Servers


    An Alternative Parameterization of R-matrix Theory

    An alternative parameterization of R-matrix theory is presented which is mathematically equivalent to the standard approach, but possesses features which simplify the fitting of experimental data. In particular, there are no level shifts and no boundary-condition constants, which allows the positions and partial widths of an arbitrary number of levels to be easily fixed in an analysis. These alternative parameters can be converted to standard R-matrix parameters by a straightforward matrix diagonalization procedure. In addition, it is possible to express the collision matrix directly in terms of the alternative parameters. (Comment: 8 pages; accepted for publication in Phys. Rev. C; expanded Sec. IV, added Sec. VI, added Appendix, corrected typo)
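    The conversion step mentioned above can be illustrated only schematically: a symmetric level matrix built from hypothetical "alternative" energies and width amplitudes is diagonalized, and the width amplitudes are rotated into the eigenbasis. The matrix, the coupling term, and all numbers below are placeholders for the illustration, not the paper's actual formulas.

```python
import numpy as np

# Schematic stand-in for a conversion by matrix diagonalization: eigenvalues of a
# symmetric level matrix play the role of the standard level energies, and the
# eigenvector rotation is applied to the width amplitudes. Placeholder values only.

E_alt = np.array([1.2, 3.5, 7.1])          # hypothetical alternative level energies (MeV)
g_alt = np.array([[0.3, 0.1],              # hypothetical width amplitudes:
                  [0.2, 0.4],              # one row per level, one column per channel
                  [0.1, 0.2]])

# Symmetric "level matrix": diagonal energies plus an assumed channel-coupling term.
A = np.diag(E_alt) + 0.05 * (g_alt @ g_alt.T)

eigvals, eigvecs = np.linalg.eigh(A)       # orthogonal diagonalization A = V diag(e) V^T
E_std = eigvals                            # stand-in for standard level energies
g_std = eigvecs.T @ g_alt                  # width amplitudes rotated to the new basis

print("level energies after diagonalization:", E_std)
print("rotated width amplitudes:\n", g_std)
```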

    A Fast Parallel Poisson Solver on Irregular Domains Applied to Beam Dynamic Simulations

    We discuss the scalable parallel solution of the Poisson equation within a Particle-In-Cell (PIC) code for the simulation of electron beams in particle accelerators of irregular shape. The problem is discretized by finite differences. Depending on the treatment of the Dirichlet boundary, the resulting system of equations is symmetric or 'mildly' nonsymmetric positive definite. In all cases, the system is solved by the preconditioned conjugate gradient algorithm with smoothed aggregation (SA) based algebraic multigrid (AMG) preconditioning. We investigate variants of the implementation of SA-AMG that lead to considerable improvements in the execution times. We demonstrate good scalability of the solver on a distributed-memory parallel computer with up to 2048 processors. We also compare our SA-AMG-PCG solver with an FFT-based solver that is more commonly used for applications in beam dynamics.
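    A serial sketch of the same solver combination (conjugate gradients preconditioned by smoothed-aggregation AMG) on a small square Poisson problem, using PyAMG and SciPy; the paper's distributed-memory implementation on irregular domains is not reproduced here.

```python
import numpy as np
import pyamg
from scipy.sparse.linalg import cg

# Build a 5-point finite-difference Laplacian (SPD), set up a smoothed-aggregation
# AMG hierarchy, and use one V-cycle per CG iteration as the preconditioner.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.ones(A.shape[0])

ml = pyamg.smoothed_aggregation_solver(A)       # SA-AMG hierarchy
M = ml.aspreconditioner(cycle='V')              # V-cycle exposed as a linear operator

x, info = cg(A, b, M=M)
print("CG info:", info, "| residual norm:", np.linalg.norm(b - A @ x))
```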

    Exchanging knowledge to improve organic arable farming: an evaluation of knowledge exchange tools with farmer groups across Europe

    Organic farming is knowledge intensive. To support farmers in improving yields and organic agriculture systems, there is a need to improve how knowledge is shared. There is an established culture of sharing ideas, successes and failures in farming. The internet and information technologies open up new opportunities for knowledge exchange involving farmers, researchers, advisors and other practitioners. OK-Net Arable brought together practitioners from regional Farmer Innovation Groups across Europe in a multi-actor project to explore how online knowledge exchange could be improved. Feedback from the groups was obtained for 35 ‘tools’, defined as end-user materials, such as technical guides, videos and websites informing about practices in organic agriculture. The groups also selected one practice to test on farms, sharing their experiences with others through workshops, exchange visits and videos.

    Computationally-Optimized Bone Mechanical Modeling from High-Resolution Structural Images

    Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which depend on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core 3.16 GHz Xeon CPUs and 40 GB of RAM. Models of the distal tibia derived from 3D in-vivo MR images in a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of the human proximal femur, a voxel array of 90 million elements, in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
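    A back-of-the-envelope memory model consistent with the figures quoted above: 200,000 elements fit in about 40 MB, i.e. roughly 200 bytes per element, and memory grows roughly linearly with element count. The per-element constant below is inferred from those two numbers, not taken from the paper's derivation.

```python
# Rough linear memory model inferred from the abstract's own figures
# (200,000 elements -> ~40 MB), not from the paper's resource derivation.
BYTES_PER_ELEMENT = 40e6 / 200_000     # ~200 bytes per element (inferred)

def estimated_memory_gb(n_elements, bytes_per_element=BYTES_PER_ELEMENT):
    """Estimate solver memory in GB, assuming linear scaling with element count."""
    return n_elements * bytes_per_element / 1e9

for n in (200_000, 90_000_000):        # tibia model and proximal-femur model sizes
    print(f"{n:>12,} elements -> ~{estimated_memory_gb(n):5.2f} GB")
```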

    On computational approaches for size-and-shape distributions from sedimentation velocity analytical ultracentrifugation

    Sedimentation velocity analytical ultracentrifugation has become a very popular technique to study size distributions and interactions of macromolecules. Recently, a method termed two-dimensional spectrum analysis (2DSA) for the determination of size-and-shape distributions was described by Demeler and colleagues (Eur Biophys J 2009). It is based on novel ideas conceived for fitting the integral equations of the size-and-shape distribution to experimental data, illustrated with an example but provided without a proof of the principle of the algorithm. In the present work, we examine the 2DSA algorithm by comparison with the mathematical reference frame and with simple, well-known numerical concepts for solving Fredholm integral equations, and we test the key assumptions underlying the 2DSA method in an example application. While 2DSA appears computationally excessively wasteful, key elements also appear to be in conflict with mathematical results. This raises doubts about the correctness of the results from 2DSA analysis.
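    For contrast with 2DSA, the "well-known numerical concepts" invoked above can be sketched as follows: discretize a Fredholm integral equation of the first kind into a matrix equation and recover a non-negative distribution by non-negative least squares. The Gaussian kernel and grids below are toy stand-ins, not the Lamm-equation solutions used in real sedimentation-velocity analysis.

```python
import numpy as np
from scipy.optimize import nnls

# Discretize data(t) = \int K(t, s) c(s) ds into data = K @ c and solve for a
# non-negative distribution c by NNLS. Toy kernel and synthetic data only.

t = np.linspace(0, 1, 120)                   # "experimental" coordinate (e.g. radius/time)
s = np.linspace(1, 10, 60)                   # distribution coordinate (e.g. s-value grid)
K = np.exp(-((t[:, None] * 10 - s[None, :]) ** 2) / 2.0)   # toy kernel K(t, s)

# Synthetic two-peak distribution and noisy "measurement".
c_true = np.exp(-((s - 4.0) ** 2)) + 0.5 * np.exp(-((s - 7.0) ** 2) / 0.5)
rng = np.random.default_rng(1)
data = K @ c_true + 0.01 * rng.standard_normal(t.size)

c_fit, resid = nnls(K, data)                 # non-negative least-squares distribution
print("residual norm:", resid, "| recovered peak near s =", s[np.argmax(c_fit)])
```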