97 research outputs found

    An efficient mixed-precision, hybrid CPU-GPU implementation of a fully implicit particle-in-cell algorithm

    Recently, a fully implicit, energy- and charge-conserving particle-in-cell method has been proposed for multi-scale, full-f kinetic simulations [G. Chen et al., J. Comput. Phys. 230, 7018 (2011)]. The method employs a Jacobian-free Newton-Krylov (JFNK) solver, capable of using very large timesteps without loss of numerical stability or accuracy. A fundamental feature of the method is the segregation of particle-orbit computations from the field solver, while remaining fully self-consistent. This paper describes a very efficient, mixed-precision, hybrid CPU-GPU implementation of the implicit PIC algorithm that exploits this feature. The JFNK solver is kept on the CPU in double precision (DP), while the implicit, charge-conserving, adaptive particle mover is implemented on a GPU (graphics processing unit) using CUDA in single precision (SP). Performance-oriented optimizations are introduced with the aid of the roofline model. The implicit particle mover algorithm is shown to achieve up to 400 GOp/s on an Nvidia GeForce GTX 580. This corresponds to 25% absolute GPU efficiency against the peak theoretical performance, and is about 300 times faster than an equivalent serial CPU (Intel Xeon X5460) execution. For the test case chosen, the mixed-precision hybrid CPU-GPU solver is shown to outperform the DP CPU-only serial version by a factor of ~100, without apparent loss of robustness or accuracy in a challenging long-timescale ion acoustic wave simulation. Comment: 25 pages, 6 figures, submitted to J. Comput. Phys.
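
    The division of labour described above can be illustrated in miniature with a hypothetical Python/NumPy stand-in for the CUDA implementation: the expensive particle push runs in single precision, while the moment fed back to the JFNK field residual is accumulated in double precision. The function names and the drastically simplified push are illustrative assumptions, not the paper's code.

```python
import numpy as np

def push_particles_sp(x, v, E, dt):
    """Hypothetical single-precision particle push (toy stand-in for the
    CUDA mover): downcast inputs once, then advance velocity and position
    entirely in float32."""
    x32, v32, E32 = (a.astype(np.float32) for a in (x, v, E))
    dt32 = np.float32(dt)
    v_new = v32 + dt32 * E32          # accelerate in SP
    x_new = x32 + dt32 * v_new        # advance position in SP
    return x_new, v_new

def accumulate_moment_dp(v_new):
    # Moments fed back to the double-precision field solver are
    # accumulated in float64, so per-particle SP round-off does not
    # pollute the nonlinear residual.
    return float(np.sum(v_new.astype(np.float64)))

# Tiny illustrative ensemble: eight particles in a uniform field.
x = np.linspace(0.0, 1.0, 8)
v = np.zeros(8)
E = np.ones(8)
x_new, v_new = push_particles_sp(x, v, E, 0.1)
j = accumulate_moment_dp(v_new)
```

    The key design point mirrored here is that only the reduction crossing back to the CPU solver is promoted to double precision.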

    An Energy- and Charge-conserving, Implicit, Electrostatic Particle-in-Cell Algorithm

    This paper discusses a novel, fully implicit formulation for a 1D electrostatic particle-in-cell (PIC) plasma simulation approach. Unlike earlier implicit electrostatic PIC approaches (which are based on a linearized Vlasov-Poisson formulation), ours is based on a nonlinearly converged Vlasov-Ampère (VA) model. By iterating particles and fields to a tight nonlinear convergence tolerance, the approach features superior stability and accuracy properties, avoiding most of the accuracy pitfalls of earlier implicit PIC implementations. In particular, the formulation is stable against temporal (CFL) and spatial (aliasing) instabilities. It is charge- and energy-conserving to numerical roundoff for arbitrary implicit time steps. While momentum is not exactly conserved, errors are kept small by an adaptive particle sub-stepping orbit integrator, which is instrumental in preventing particle tunneling. The VA model is orbit-averaged along particle orbits to enforce an energy conservation theorem with particle sub-stepping. As a result, very large time steps, constrained only by the dynamical time scale of interest, are possible without accuracy loss. Algorithmically, the approach features a Jacobian-free Newton-Krylov solver. A main development in this study is the nonlinear elimination of the new-time particle variables (positions and velocities). Such nonlinear elimination, which we term particle enslavement, results in a nonlinear formulation with memory requirements comparable to those of a fluid computation, and affords us substantial freedom with regard to the particle orbit integrator. Numerical examples are presented that demonstrate the advertised properties of the scheme. In particular, long-time ion acoustic wave simulations show that numerical accuracy does not degrade even with very large implicit time steps, and that significant CPU gains are possible. Comment: 29 pages, 8 figures, submitted to Journal of Computational Physics.
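
    The energy-conservation claim can be illustrated with a toy sketch (emphatically not the paper's scheme): a time-centred implicit push for a single particle in an assumed linear restoring field E(x) = -x, iterated to a tight nonlinear tolerance as the abstract describes. For this linear field the time-centred update conserves the quadratic energy exactly, independent of the time step.

```python
def implicit_push(x, v, dt, tol=1e-12, max_iter=200):
    """Picard-iterated, time-centred (Crank-Nicolson) push for the toy
    field E(x) = -x. Iterating to a tight nonlinear tolerance is the
    ingredient that delivers energy conservation to roundoff."""
    xn, vn = x, v
    for _ in range(max_iter):
        xh = 0.5 * (x + xn)                 # time-centred position
        vn_new = v - dt * xh                # implicit velocity update
        xn_new = x + 0.5 * dt * (v + vn_new)
        converged = abs(xn_new - xn) < tol and abs(vn_new - vn) < tol
        xn, vn = xn_new, vn_new
        if converged:
            break
    return xn, vn

x, v = 1.0, 0.0
E0 = 0.5 * (v * v + x * x)                  # oscillator energy
for _ in range(1000):
    x, v = implicit_push(x, v, 0.5)         # large implicit time step
E1 = 0.5 * (v * v + x * x)                  # unchanged to near roundoff
```

    The full algorithm couples many particles to Ampère's law and sub-steps orbits adaptively; this sketch only isolates the "converge the implicit midpoint nonlinearly" idea.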

    Phosphorus recovery as struvite from farm, municipal and industrial waste: feedstock suitability, methods and pre-treatments

    Global population growth requires intensification of agriculture, for which a sustainable supply of phosphorus (P) is essential. Since natural P reserves are diminishing, recovering P from wastes and residues is an increasingly attractive prospect, particularly as technical and economic potential in the area is growing. In addition to providing phosphorus for agricultural use, precipitation of P from waste residues and effluents lessens their nutrient loading prior to disposal. This paper critically reviews published methods for P recovery from waste streams (municipal, farm and industrial) with emphasis on struvite (MgNH4PO4·6H2O) crystallisation, including pre-treatments to maximise recovery. Based on compositional parameters of a range of wastes, a Feedstock Suitability Index (FSI) was developed as a guide to inform researchers and operators of the relative potential for struvite production from each waste.
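
    The paper's actual FSI formula is not reproduced here; the following is a hypothetical sketch of the kind of molar-ratio screening that underlies struvite suitability, since struvite (MgNH4PO4·6H2O) precipitation requires Mg, ammonium-N and P near a 1:1:1 molar ratio. The function names, thresholds and example concentrations are all invented for illustration.

```python
# Atomic masses (g/mol) used to convert mg/L concentrations to molarities.
M_MG, M_N, M_P = 24.305, 14.007, 30.974

def molar_ratios(mg_mg_l, nh4_n_mg_l, p_mg_l):
    """Return (Mg:P, N:P) molar ratios for a waste stream given
    concentrations in mg/L."""
    mol_mg = mg_mg_l / M_MG
    mol_n = nh4_n_mg_l / M_N
    mol_p = p_mg_l / M_P
    return mol_mg / mol_p, mol_n / mol_p

def needs_mg_dosing(mg_mg_l, nh4_n_mg_l, p_mg_l):
    # Many wastewaters are Mg-limited relative to the 1:1:1 struvite
    # stoichiometry, so a pre-treatment (e.g. MgCl2 dosing) is flagged
    # when the Mg:P molar ratio falls below 1.
    mg_p, _ = molar_ratios(mg_mg_l, nh4_n_mg_l, p_mg_l)
    return mg_p < 1.0

# Illustrative stream: Mg-poor, ammonium-rich digester liquor.
r_mg, r_n = molar_ratios(24.305, 14.007, 30.974)  # 1:1:1 reference case
```

    Any real suitability index would also weigh suspended solids, pH, and competing ions (notably calcium), as the review's pre-treatment discussion implies.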

    Optical Light Curves of Supernovae

    Photometry is the most easily acquired information about supernovae. The light curves constructed from regular imaging provide signatures not only for the energy input, the radiation escape, the local environment and the progenitor stars, but also for the intervening dust. They are the main tool for the use of supernovae as distance indicators through the determination of the luminosity. The light curve of SN 1987A is still the richest and longest observed example for a core-collapse supernova. Despite the peculiar nature of this object, as the explosion of a blue supergiant, it displayed all the characteristics of Type II supernovae. The light curves of Type Ib/c supernovae are more homogeneous, but still display the signatures of explosions in massive stars, among them early interaction with their circumstellar material. Wrinkles in the near-uniform appearance of thermonuclear (Type Ia) supernovae have emerged during the past decade. Subtle differences have been observed especially at near-infrared wavelengths. Interestingly, the light curve shapes appear to correlate with a variety of other characteristics of these supernovae. The construction of bolometric light curves provides the most direct link to theoretical predictions and can yield sorely needed constraints for the models. First steps in this direction have already been made. Comment: To be published in: "Supernovae and Gamma Ray Bursters", Lecture Notes in Physics (http://link.springer.de/series/lnpp)
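
    The core of a bolometric light-curve point can be sketched as follows: at each epoch, integrate the observed flux density over wavelength and scale by the distance. This is an illustrative simplification under assumed units; real constructions must handle finite band coverage, extinction, and infrared/ultraviolet corrections, none of which appear here.

```python
import numpy as np

PC_CM = 3.0857e18  # one parsec in centimetres

def bolometric_luminosity(wavelengths_ang, flux_lambda, distance_pc):
    """Trapezoidal integral of F_lambda (erg s^-1 cm^-2 A^-1) over
    wavelength (Angstrom), scaled by 4*pi*d^2 to an isotropic
    luminosity in erg/s."""
    dw = wavelengths_ang[1:] - wavelengths_ang[:-1]
    f_bol = float(np.sum(0.5 * (flux_lambda[1:] + flux_lambda[:-1]) * dw))
    d_cm = distance_pc * PC_CM
    return 4.0 * np.pi * d_cm * d_cm * f_bol

# Flat illustrative spectrum over 4000-9000 A at a 10 pc reference distance.
w = np.linspace(4000.0, 9000.0, 501)
f = np.full_like(w, 1.0e-15)
L = bolometric_luminosity(w, f, 10.0)
```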

    New insights into the genetic etiology of Alzheimer's disease and related dementias

    Characterization of the genetic landscape of Alzheimer's disease (AD) and related dementias (ADD) provides a unique opportunity for a better understanding of the associated pathophysiological processes. We performed a two-stage genome-wide association study totaling 111,326 clinically diagnosed/'proxy' AD cases and 677,663 controls. We found 75 risk loci, of which 42 were new at the time of analysis. Pathway enrichment analyses confirmed the involvement of amyloid/tau pathways and highlighted the involvement of microglia. Gene prioritization in the new loci identified 31 genes that were suggestive of new genetically associated processes, including the tumor necrosis factor alpha pathway through the linear ubiquitin chain assembly complex. We also built a new genetic risk score associated with the risk of future AD/dementia or progression from mild cognitive impairment to AD/dementia. The improvement in prediction led to a 1.6- to 1.9-fold increase in AD risk from the lowest to the highest decile, in addition to effects of age and the APOE ε4 allele.
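
    A genetic risk score of the kind described is, at its core, a weighted sum of risk-allele dosages, with weights taken from GWAS effect sizes. The sketch below uses invented variants and weights purely for illustration; it is not the study's score.

```python
import numpy as np

def genetic_risk_score(dosages, betas):
    """Weighted sum of per-variant risk-allele dosages (each in {0,1,2})
    with log-odds effect-size weights (betas). Hypothetical sketch."""
    return float(np.dot(dosages, betas))

# Three invented variants with illustrative effect sizes.
betas = np.array([0.12, 0.08, 0.20])
low = genetic_risk_score(np.array([0, 0, 0]), betas)    # no risk alleles
high = genetic_risk_score(np.array([2, 2, 2]), betas)   # homozygous at all
```

    In practice such scores are computed over dozens of loci, standardized in the cohort, and evaluated by comparing outcome rates across score deciles, as the abstract's 1.6- to 1.9-fold figure reflects.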

    Risk profiles and one-year outcomes of patients with newly diagnosed atrial fibrillation in India: Insights from the GARFIELD-AF Registry.

    BACKGROUND: The Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF) is an ongoing prospective noninterventional registry, which is providing important information on the baseline characteristics, treatment patterns, and 1-year outcomes in patients with newly diagnosed non-valvular atrial fibrillation (NVAF). This report describes data from Indian patients recruited in this registry. METHODS AND RESULTS: A total of 52,014 patients with newly diagnosed AF were enrolled globally; of these, 1388 patients were recruited from 26 sites within India (2012-2016). In India, the mean age was 65.8 years at diagnosis of NVAF. Hypertension was the most prevalent risk factor for AF, present in 68.5% of patients from India and in 76.3% of patients globally (P < 0.001). Diabetes and coronary artery disease (CAD) were prevalent in 36.2% and 28.1% of patients as compared with global prevalence of 22.2% and 21.6%, respectively (P < 0.001 for both). Antiplatelet therapy was the most common antithrombotic treatment in India. With increasing stroke risk, however, patients were more likely to receive oral anticoagulant therapy [mainly vitamin K antagonist (VKA)], but average international normalized ratio (INR) was lower among Indian patients [median INR value 1.6 (interquartile range {IQR}: 1.3-2.3) versus 2.3 (IQR 1.8-2.8) (P < 0.001)]. Compared with other countries, patients from India had markedly higher rates of all-cause mortality [7.68 per 100 person-years (95% confidence interval 6.32-9.35) vs 4.34 (4.16-4.53), P < 0.0001], while rates of stroke/systemic embolism and major bleeding were lower after 1 year of follow-up. CONCLUSION: Compared to previously published registries from India, the GARFIELD-AF registry describes clinical profiles and outcomes in Indian patients with AF of a different etiology. The registry data show that compared to the rest of the world, Indian AF patients are younger in age and have more diabetes and CAD. Patients with a higher stroke risk are more likely to receive anticoagulation therapy with VKA but are underdosed compared with the global average in the GARFIELD-AF. CLINICAL TRIAL REGISTRATION-URL: http://www.clinicaltrials.gov. Unique identifier: NCT01090362.

    Outcomes of appropriate empiric combination versus monotherapy for Pseudomonas aeruginosa bacteremia

    10.1128/AAC.02235-12. Antimicrobial Agents and Chemotherapy, 57(3), 1270-1274.

    Characteristics of Successful and Unsuccessful Organization Development

    A comparison between 11 organizations with successful OD efforts and 14 organizations with unsuccessful efforts reveals characteristics identifiable with each category. Eight major clusters of characteristics served as the foci for the comparison: 1) the organization's environment, 2) the organization itself, 3) initial contact for the OD projects, 4) formal entry procedures and commitment, 5) data-gathering activities, 6) internal and 7) external change-agent characteristics, and 8) exit procedures. Results indicated an absence of single dimensions that are either essential or sufficient to distinguish between the successful and unsuccessful organizations. Three general areas, however, did serve to differentiate organizations in the two categories: 1. Organizations that are more open to and involved in adjusting to change are more likely to be successful in their OD effort than are those that are more stable and status-quo oriented. 2. Internal change agents who are more carefully selected, did not receive training prior to the current OD efforts, and who possess assessment-prescriptive skills are most evident in the successful organizations. 3. More specific interest in and greater commitment to the OD projects are associated with successful change. Implications for managers and consultants interested in applying these findings to increase the likelihood of success in OD projects are explored, with consideration given to the importance and alterability of each characteristic. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/67951/2/10.1177_002188637601200402.pd