
    An update on the BQCD Hybrid Monte Carlo program

    We present an update of BQCD, our Hybrid Monte Carlo program for simulating lattice QCD. BQCD is one of the main production codes of the QCDSF collaboration and is used by CSSM and in some Japanese finite temperature and finite density projects. Since the first publication of the code at Lattice 2010 the program has been extended in various ways. New features of the code include: dynamical QED, action modification in order to compute matrix elements by using the Feynman-Hellmann theorem, more trace measurements, a more flexible integration scheme, polynomial filtering, term-splitting for RHMC, and a portable implementation of performance-critical parts employing SIMD. Comment: Poster presented at the 35th International Symposium on Lattice Field Theory, Granada, Spain, 18-24 June 2017
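
    At its core, any HMC code such as BQCD alternates momentum refreshment, a reversible molecular-dynamics trajectory, and a Metropolis accept/reject step. As a language-neutral illustration only (BQCD itself is Fortran; the toy action, names and parameters below are ours, not the program's internals), a minimal leapfrog-based HMC update might look like this:

```python
import numpy as np

def leapfrog(phi, pi, grad_S, n_steps, dt):
    """One molecular-dynamics trajectory via leapfrog integration.

    Leapfrog is reversible and area-preserving, which keeps the
    Metropolis accept/reject step exact.
    """
    pi = pi - 0.5 * dt * grad_S(phi)      # initial half-step in momentum
    for _ in range(n_steps - 1):
        phi = phi + dt * pi               # full position step
        pi = pi - dt * grad_S(phi)        # full momentum step
    phi = phi + dt * pi
    pi = pi - 0.5 * dt * grad_S(phi)      # final half-step in momentum
    return phi, pi

def hmc_update(phi, S, grad_S, n_steps=20, dt=0.05, rng=np.random.default_rng()):
    """One HMC update for a toy action S with gradient grad_S."""
    pi = rng.standard_normal(phi.shape)               # refresh momenta
    H_old = S(phi) + 0.5 * np.sum(pi**2)
    phi_new, pi_new = leapfrog(phi, pi, grad_S, n_steps, dt)
    H_new = S(phi_new) + 0.5 * np.sum(pi_new**2)
    if rng.random() < np.exp(H_old - H_new):          # Metropolis test
        return phi_new
    return phi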

    Single flavour filtering for RHMC in BQCD

    Filtering algorithms for two degenerate quark flavours have advanced to the point that, in 2+1 flavour simulations, the cost of the strange quark is significant compared with the light quarks. This makes efficient filtering algorithms for single flavour actions highly desirable, in particular when considering 1+1+1 flavour simulations for QED+QCD. Here we discuss methods for filtering the RHMC algorithm that are implemented within BQCD, an open-source Fortran program for Hybrid Monte Carlo simulations. Comment: 8 pages, 3 figures, proceedings of the 35th International Symposium on Lattice Field Theory, 18-24 June 2017, Granada, Spain
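
    For one flavour, RHMC replaces the inverse square root of the fermion matrix by a rational approximation in partial-fraction form, and filtering amounts to splitting that sum into a cheap term and a correction term integrated on different time scales. A schematic sketch only; the shifts and residues below are placeholders, not tuned Remez coefficients, and this is not BQCD's actual implementation:

```python
import numpy as np

# Illustrative partial-fraction form r(K) ~ K^{-1/2}:
#   r(K) = a0 + sum_i a_i * (K + b_i)^{-1}
# Real RHMC coefficients come from the Remez algorithm; these are placeholders.
a0 = 0.3
a = np.array([0.5, 0.2, 0.05])
b = np.array([0.01, 0.1, 1.0])

def apply_rational(K, phi, terms):
    """Apply a0 + sum over `terms` of a_i (K + b_i I)^{-1} to phi."""
    out = a0 * phi
    n = K.shape[0]
    for i in terms:
        out += a[i] * np.linalg.solve(K + b[i] * np.eye(n), phi)
    return out

# Filtering idea: poles with large shifts are well conditioned and cheap
# to invert; the small-shift remainder carries the expensive infrared
# physics and can be integrated on a coarser time scale.
K = np.diag([0.5, 1.0, 2.0])                     # stand-in fermion matrix
phi = np.ones(3)
cheap = apply_rational(K, phi, terms=[2])        # truncated "filter" term
full  = apply_rational(K, phi, terms=[0, 1, 2])  # full rational force
correction = full - cheap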

    Optimisations to Hybrid Monte Carlo for Lattice QCD

    In Lattice Quantum Chromodynamics, we calculate physical quantities on a discrete 4D Euclidean lattice via expectation values, which take the form of path integrals. Due to the high dimensionality of these integrals, the standard technique for evaluating lattice expectation values is Monte Carlo: we generate configurations of gauge fields U and fermion fields distributed according to the lattice action S, then take a weighted average of the observable across the configurations. The most common method used to generate configurations is a Markov chain technique called Hybrid Monte Carlo. While this technique works, it requires substantial computational resources to generate configurations that are desirably close to the continuum theory. The object of this work is to investigate a variety of improvements over the basic Hybrid Monte Carlo method and determine which combinations produce independent configurations at the lowest cost. We start by performing a systematic study of filtering for double-flavour simulations, comparing polynomial filtering to the common technique of mass filtering. We show that combining these two methods produces optimal speedup with minimal tuning of parameters, which can be a serious concern when multiple filters are involved. During this investigation, we used the novel technique of overlaid integrators for implementing multiple integration time scales, which expands the possible step-size choices. Next, we investigate improvements to single-flavour simulations, comparing polynomial filtering with a different method that we denote truncated ordered product RHMC. We obtain the best speedup when using truncation filters, but it is highly dependent on the truncation order chosen. To alleviate this problem, we apply a novel integration step-size tuning method called characteristic scale tuning, which allows step-sizes to be better tuned to the energy modes of the system. This improves the performance of our algorithms for a wide range of filter parameters, thus reducing the need to tune filter parameters. Finally, we extend our single-flavour techniques to Lattice QCD+QED simulations, which include electromagnetic effects via a photon field. Thesis (Ph.D.) -- University of Adelaide, School of Physical Sciences, 201
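
    Multiple integration time scales are conventionally implemented as a nested (Sexton-Weingarten style) leapfrog; the thesis's overlaid integrators generalize this. A minimal sketch of the conventional nested scheme under an assumed two-term force split; the function names are illustrative:

```python
import numpy as np

def nested_leapfrog(phi, pi, grad_fast, grad_slow, n_outer, m_inner, dt):
    """Two-scale integrator: the expensive 'slow' force (e.g. a filtered
    fermion term) is evaluated every dt, while the cheap 'fast' force
    (e.g. the gauge or filter term) is evaluated every dt / m_inner.
    """
    def inner(phi, pi, tau):
        h = tau / m_inner
        pi = pi - 0.5 * h * grad_fast(phi)
        for _ in range(m_inner - 1):
            phi = phi + h * pi
            pi = pi - h * grad_fast(phi)
        phi = phi + h * pi
        pi = pi - 0.5 * h * grad_fast(phi)
        return phi, pi

    for _ in range(n_outer):
        pi = pi - 0.5 * dt * grad_slow(phi)   # slow force: half-step
        phi, pi = inner(phi, pi, dt)          # fast force: m_inner sub-steps
        pi = pi - 0.5 * dt * grad_slow(phi)   # slow force: half-step
    return phi, pi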

    Protoneutron stars within the Brueckner-Bethe-Goldstone theory

    We study the structure of newly born neutron stars (protoneutron stars) within the finite temperature Brueckner-Bethe-Goldstone theoretical approach, including hyperons. We find that for purely nucleonic stars both finite temperature and neutrino trapping reduce the value of the maximum mass. For hyperonic stars the effect is reversed, because neutrino trapping shifts the appearance of hyperons to larger baryon density and considerably stiffens the equation of state. Comment: 11 pages, 7 figures, submitted to Astronomy & Astrophysics
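
    The maximum masses compared here come from integrating the general-relativistic stellar-structure (TOV) equations with the computed equation of state. A minimal sketch, with a toy polytrope standing in for the Brueckner-Bethe-Goldstone equation of state and illustrative parameter values, in geometrized units (G = c = 1):

```python
import numpy as np

def tov_mass(P_c, K=100.0, Gamma=2.0, dr=1e-3):
    """Integrate the TOV equations outward from central pressure P_c.

    Toy polytrope P = K * eps**Gamma stands in for a tabulated EOS.
    Returns (radius, mass) in geometrized units.
    """
    eps = lambda P: (P / K) ** (1.0 / Gamma)   # invert the toy EOS
    r, m, P = dr, 0.0, P_c
    while P > 1e-12 * P_c:                     # stop at the stellar surface
        e = eps(P)
        dm = 4.0 * np.pi * r**2 * e
        dP = -(e + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        m += dm * dr
        P += dP * dr
        r += dr
    return r, m

# The maximum mass is the peak of M(P_c) over central pressures:
masses = [tov_mass(P_c)[1] for P_c in np.logspace(-4, -1, 20)]
M_max = max(masses)
```

    A stiffer equation of state raises this peak; the abstract's point is that trapping and temperature shift it in opposite directions depending on whether hyperons appear.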

    MR guided high intensity focused ultrasound (MRgHIFU) for treating recurrent gynaecological tumours: a pilot feasibility study.

    Objective: To assess the feasibility of targeting recurrent gynaecological tumours with MR-guided high intensity focused ultrasound (MRgHIFU).
    Methods: 20 patients with recurrent gynaecological tumours were prospectively scanned on a Philips/Profound 3T Achieva MR / Sonalleve HIFU system. Gross tumour volume (GTV) and planning target volume (PTV) were delineated on T2W and diffusion-weighted imaging (DWI). Achievable treatment volumes that (i) assumed bowel and/or urogenital tract preparation could be used to reduce the risk of damage to organs-at-risk (TVoptimal), or (ii) assumed no preparations were possible (TVno-prep), were compared with the PTV on virtual treatment plans. Patients were considered treatable if TVoptimal ≥ 50% of the PTV.
    Results: 11/20 patients (55%) were treatable if preparation strategies were used: nine had central pelvic recurrences, two had tumours in metastatic locations. Treatable volume ranged from 3.4 to 90.3 ml, representing 70 ± 17% of PTVs. Without preparation, 6/20 (30%) patients were treatable (four central recurrences, two metastatic lesions). Limiting factors were disease beyond reach of the HIFU transducer, and bone obstructing tumour access. DWI assisted tumour outlining, but differences from T2W imaging in GTV size (16.9 ± 23.0%) and PTV location (3.8 ± 2.8 mm in the phase-encode direction) limited its use for treatment planning.
    Conclusions: Despite variation in size and location within the pelvis, ≥ 50% of tumour volumes were considered targetable in 55% of patients while avoiding adjacent critical structures. A prospective treatment study will assess safety and symptom relief in a second patient cohort.
    Advances in knowledge: Target size, location and access make MRgHIFU a viable treatment modality for treating symptomatic recurrent gynaecological tumours within the pelvis.
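
    The treatability rule above is a simple volume-fraction threshold. As a trivial illustration (the variable names are ours, not the study's):

```python
def treatable(tv_optimal_ml: float, ptv_ml: float, threshold: float = 0.5) -> bool:
    """Study criterion: a patient is treatable if the achievable
    treatment volume covers at least `threshold` of the PTV."""
    return tv_optimal_ml >= threshold * ptv_ml

print(treatable(tv_optimal_ml=3.4, ptv_ml=6.0))   # True: ~57% coverage
```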

    On Virtual Displacement and Virtual Work in Lagrangian Dynamics

    The confusion and ambiguity encountered by students in understanding virtual displacement and virtual work is discussed in this article. A definition of virtual displacement is presented that allows one to express it explicitly for holonomic (velocity independent), non-holonomic (velocity dependent), scleronomous (time independent) and rheonomous (time dependent) constraints. It is observed that for holonomic, scleronomous constraints, the virtual displacements are the displacements allowed by the constraints. However, this is not so for a general class of constraints. For simple physical systems, it is shown that the work done by the constraint forces on virtual displacements is zero. This motivates Lagrange's extension of d'Alembert's principle to systems of particles in constrained motion. However, a similar zero-work principle does not hold for the allowed displacements. It is also demonstrated that d'Alembert's principle of zero virtual work is necessary for the solvability of a constrained mechanical problem. We identify this special class of constraints, physically realized and solvable, as the ideal constraints. The concept of virtual displacement and the principle of zero virtual work by constraint forces are central to both Lagrange's method of undetermined multipliers and Lagrange's equations in generalized coordinates. Comment: 12 pages, 10 figures. This article is based on an earlier article, physics/0410123. It includes new figures, equations and logical content
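
    The distinction the article draws can be stated compactly. For a single holonomic, possibly rheonomous constraint f(r, t) = 0, allowed and virtual displacements satisfy different conditions, and d'Alembert's principle refers to the latter; a standard summary consistent with the abstract:

```latex
% Allowed displacement d\mathbf{r}: consistent with the constraint as time advances
\nabla f \cdot d\mathbf{r} + \frac{\partial f}{\partial t}\,dt = 0
% Virtual displacement \delta\mathbf{r}: the constraint is frozen at time t (dt = 0)
\nabla f \cdot \delta\mathbf{r} = 0
% For scleronomous constraints (\partial f / \partial t = 0) the two coincide,
% matching the article's observation; for rheonomous ones they differ.
% d'Alembert's principle (zero virtual work of constraint forces):
\sum_i \bigl(\mathbf{F}_i - m_i\,\ddot{\mathbf{r}}_i\bigr) \cdot \delta\mathbf{r}_i = 0
% where \mathbf{F}_i are the applied (non-constraint) forces on particle i.
```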

    Nonlinear regularization techniques for seismic tomography

    The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional, linear, ℓ2 penalties are compared to so-called sparsity-promoting ℓ1 and ℓ0 penalties, and a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an ℓ2 norm works almost as well as minimizing the total variation but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer ℓ1 damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple ℓ2 minimization ('Tikhonov regularization') which should be avoided. In some of our examples, the ℓ0 method produced notable artifacts. In addition we show how nonlinear ℓ1 methods for finding sparse models can be competitive in speed with the widely used ℓ2 methods, certainly under noisy conditions, so that there is no need to shun ℓ1 penalizations. Comment: 23 pages, 7 figures. Typographical error corrected in accelerated algorithms (14) and (20)
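
    ℓ1-penalized inversion of this kind is commonly solved by iterative soft thresholding, whose per-iteration cost is the same two operator applications as a plain ℓ2 (Landweber) step, which is why such methods can compete in speed. A generic sketch of basic ISTA (not the paper's accelerated algorithms (14) and (20)); the operator and data are random stand-ins for a tomography problem d = Am + noise:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t * ||x||_1: shrink each coefficient toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, d, lam, n_iter=500):
    """Minimise ||A m - d||_2^2 + 2 * lam * ||m||_1 by iterative soft
    thresholding. Each iteration costs one A and one A^T product,
    the same as a plain Landweber (l2) step.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / ||A||^2 ensures convergence
    m = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ m - d)             # gradient of the data misfit
        m = soft_threshold(m - step * grad, step * lam)
    return m

# Toy problem: sparse model, noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
m_true = np.zeros(200)
m_true[rng.choice(200, 5, replace=False)] = 1.0
d = A @ m_true + 0.05 * rng.standard_normal(80)
m_rec = ista(A, d, lam=0.1)
```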