
    The Fast Multipole Method and Point Dipole Moment Polarizable Force Fields

    We present an implementation of the fast multipole method (FMM) for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of this approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show long-time energy conservation in molecular dynamics on the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent, using both a standard integrator and a multiple-time-step integrator. Our tests show the applicability of FMM combined with state-of-the-art chemical models in molecular dynamics simulations. Comment: 11 pages, 8 figures, accepted by J. Chem. Phys.
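    The induced point dipole model referred to here is a self-consistent calculation: each site's induced dipole responds to the permanent field plus the field of all other induced dipoles. A minimal Python sketch of that iteration using direct O(N^2) sums is shown below (variable names, polarizabilities, and the convergence threshold are illustrative, not taken from the paper's implementation); in the paper, the FMM is what turns this kind of pairwise field evaluation into an O(N) operation.

    ```python
    import numpy as np

    def dipole_field(pos, mu, i):
        """Field at site i from all other induced point dipoles (consistent units assumed)."""
        field = np.zeros(3)
        for j in range(len(pos)):
            if j == i:
                continue
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            # Field of a point dipole: (3 (mu . r_hat) r_hat - mu) / d^3
            field += (3.0 * np.dot(mu[j], r) * r / d**2 - mu[j]) / d**3
        return field

    def induce_dipoles(pos, alpha, e_perm, tol=1e-8, max_iter=100):
        """Self-consistent induced dipoles: mu_i = alpha_i * (E_perm_i + E_dipoles_i)."""
        mu = alpha[:, None] * e_perm  # zeroth-order guess: respond to the permanent field only
        for _ in range(max_iter):
            mu_new = np.array([alpha[i] * (e_perm[i] + dipole_field(pos, mu, i))
                               for i in range(len(pos))])
            if np.max(np.abs(mu_new - mu)) < tol:
                return mu_new
            mu = mu_new
        return mu
    ```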

    A New Estimate of the Hubble Time with Improved Modeling of Gravitational Lenses

    This paper examines free-form modeling of gravitational lenses using Bayesian ensembles of pixelated mass maps. The priors and algorithms from previous work are clarified and significant technical improvements are made. Lens reconstruction and Hubble Time recovery are tested using mock data from simple analytic models and recent galaxy-formation simulations. Finally, using published data, the Hubble Time is inferred through the simultaneous reconstruction of eleven time-delay lenses. The result is H_0^{-1} = 13.7^{+1.8}_{-1.0} Gyr. Comment: 24 pages, 9 figures. Accepted to ApJ.
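    For orientation, the quantity quoted here enters through the standard time-delay relation; the sketch below gives that scaling in standard lensing notation (background formulae only, not the paper's pixelated-map machinery).

    ```latex
    % Standard time-delay relation (background for the H_0^{-1} estimate above):
    \Delta t = \frac{1+z_L}{c}\,\frac{D_L D_S}{D_{LS}}
               \left[\phi(\theta_A,\beta) - \phi(\theta_B,\beta)\right],
    \qquad
    \phi(\theta,\beta) = \tfrac{1}{2}\,\lvert\theta-\beta\rvert^{2} - \psi(\theta).
    % The distance ratio D_L D_S / D_{LS} scales as c/H_0, so measured delays plus a
    % reconstructed lens potential \psi constrain the Hubble time H_0^{-1} directly.
    ```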

    An Optimizing Symbolic Algebra Approach for Generating Fast Multipole Method Operators

    We have developed a symbolic algebra approach to automatically produce, verify, and optimize computer code for the Fast Multipole Method (FMM) operators. This approach allows for flexibility in choosing a basis set and kernel, and can generate computer code for any expansion order in multiple languages. The procedure is implemented in the publicly available Python program Mosaic. Optimizations performed at the symbolic level through algebraic manipulations significantly reduce the number of mathematical operations compared with a straightforward implementation of the equations. We find that the optimizer is able to eliminate 20-80% of the floating-point operations and, for expansion orders p ≤ 10, it changes the observed scaling properties. We present our approach using three variants of the operators with the Cartesian basis set for the harmonic potential kernel 1/r, including the use of totally symmetric and traceless multipole tensors. Comment: Updated to final version submitted to Computer Physics Communications. Accepted on 20 November 201
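    As an illustration of the kind of symbolic-level optimization described, common-subexpression elimination across a set of Cartesian derivative tensors of 1/r already removes many repeated products. The Python sketch below uses SymPy for this; it is a rough analogue of the idea, not the Mosaic code itself, and the chosen order p and operator set are illustrative.

    ```python
    import itertools
    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    r = sp.sqrt(x**2 + y**2 + z**2)
    p = 4  # expansion order (illustrative; the paper studies p <= 10 and beyond)

    # Cartesian derivative tensors of the 1/r kernel up to total order p; tensors of
    # this kind are the building blocks of Cartesian FMM translation operators.
    exprs = []
    for order in range(p + 1):
        for nx, ny, nz in itertools.product(range(order + 1), repeat=3):
            if nx + ny + nz != order:
                continue
            args = []
            for sym, cnt in ((x, nx), (y, ny), (z, nz)):
                if cnt:
                    args += [sym, cnt]
            exprs.append(sp.diff(1 / r, *args) if args else 1 / r)

    naive_ops = sum(sp.count_ops(e) for e in exprs)

    # Symbolic-level optimization: share common subexpressions across all operators.
    replacements, reduced = sp.cse(exprs)
    cse_ops = (sum(sp.count_ops(rhs) for _, rhs in replacements)
               + sum(sp.count_ops(e) for e in reduced))

    print(f"p = {p}: {naive_ops} ops naive, {cse_ops} ops after CSE")
    ```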

    Gravitational lens recovery with glass: measuring the mass profile and shape of a lens

    We use a new non-parametric gravitational modelling tool - glass - to determine what quality of data (strong lensing, stellar kinematics, and/or stellar masses) is required to measure the circularly averaged mass profile of a lens and its shape. glass uses an underconstrained adaptive grid of mass pixels to model the lens, searching through thousands of models to marginalize over model uncertainties. Our key findings are as follows: (i) for pure lens data, multiple sources with wide redshift separation give the strongest constraints, as this breaks the well-known mass-sheet or steepness degeneracy; (ii) a single quad with time delays also performs well, giving a good recovery of both the mass profile and its shape; (iii) stellar masses - for lenses where the stars dominate the central potential - can also break the steepness degeneracy, giving a recovery for doubles almost as good as having a quad with time-delay data, or multiple source redshifts; (iv) stellar kinematics provide a robust measure of the mass at the half-light radius of the stars, r_1/2, that can also break the steepness degeneracy if the Einstein radius r_E ≠ r_1/2; and (v) if r_E ∼ r_1/2, then stellar kinematic data can be used to probe the stellar velocity anisotropy β, an interesting quantity in its own right. Where information on the mass distribution from lensing and/or other probes becomes redundant, this opens up the possibility of using strong lensing to constrain cosmological models.
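    The steepness (mass-sheet) degeneracy that recurs in these findings has a simple form: rescaling the convergence as κ → λκ + (1 - λ) leaves image positions and magnification ratios unchanged while rescaling time delays and the enclosed mass by λ. A minimal Python sketch of the transform follows (array names and the toy profile are illustrative, not part of glass):

    ```python
    import numpy as np

    def mass_sheet_transform(kappa, lam):
        """Apply the mass-sheet (steepness) degeneracy: kappa -> lam*kappa + (1 - lam).

        Image positions and magnification ratios are invariant under this map (with the
        unobservable source position rescaled by lam), but time delays and the enclosed
        mass scale by lam; extra data such as time delays, stellar masses, or kinematics
        are needed to break the degeneracy.
        """
        return lam * kappa + (1.0 - lam)

    # Example: a toy circular convergence map, transformed with lam = 0.8.
    n = 128
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
    r = np.hypot(x, y)
    kappa = 1.0 / (1.0 + (r / 0.3) ** 2)          # arbitrary illustrative profile
    kappa_mst = mass_sheet_transform(kappa, 0.8)
    ```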

    Lessons from a blind study of simulated lenses: image reconstructions do not always reproduce true convergence

    In the coming years, strong gravitational lens discoveries are expected to increase in frequency by two orders of magnitude. Lens-modelling techniques are being developed to prepare for the coming massive influx of new lens data, and blind tests of lens reconstruction with simulated data are needed for validation. In this paper we present a systematic blind study of a sample of 15 simulated strong gravitational lenses from the EAGLE suite of hydrodynamic simulations. We model these lenses with a free-form technique and evaluate the reconstructed mass distributions using criteria based on shape, orientation, and lensed image reconstruction. Especially useful is a lensing analogue of the Roche potential in binary star systems, which we call the lensing Roche potential; we introduce it in order to factor out the well-known problem of steepness or mass-sheet degeneracy. Einstein radii are on average well recovered, with a relative error of ∼5% for quads and ∼25% for doubles; the position angle of ellipticity is on average also reproduced well, to within ±10°, but the reconstructed mass maps tend to be too round and too shallow. It is also easy to reproduce the lensed images, but optimising on this criterion does not guarantee better reconstruction of the mass distribution. Comment: 20 pages, 12 figures. Published in MNRAS. Agrees with published version.
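    One of the recovery criteria, the Einstein radius, has a standard free-form definition: the radius inside which the mean convergence equals one. The Python sketch below measures it from a pixelated convergence map; it is a generic estimator written for illustration, not the paper's exact pipeline.

    ```python
    import numpy as np

    def einstein_radius(kappa, pixel_scale, center=None):
        """Einstein radius of a convergence map: the radius r_E at which the mean
        convergence enclosed within r_E equals 1. Returns r_E in the units of
        pixel_scale (e.g. arcsec), or None if that level is never reached."""
        ny, nx = kappa.shape
        if center is None:
            center = ((nx - 1) / 2.0, (ny - 1) / 2.0)
        y, x = np.indices(kappa.shape)
        r = np.hypot(x - center[0], y - center[1]) * pixel_scale

        # Sort pixels by radius and form the running mean enclosed convergence.
        order = np.argsort(r.ravel())
        r_sorted = r.ravel()[order]
        kappa_sorted = kappa.ravel()[order]
        mean_enclosed = np.cumsum(kappa_sorted) / np.arange(1, kappa_sorted.size + 1)

        inside = mean_enclosed >= 1.0
        if not inside.any():
            return None
        # Largest radius at which the enclosed mean convergence is still >= 1.
        return r_sorted[inside].max()
    ```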

    Glycaemic control targets after traumatic brain injury: a systematic review and meta-analysis.

    BACKGROUND: Optimal glycaemic targets in traumatic brain injury (TBI) remain unclear. We performed a systematic review and meta-analysis of randomised controlled trials (RCTs) comparing intensive with conventional glycaemic control in TBI requiring admission to an intensive care unit (ICU). METHODS: We systematically searched MEDLINE, EMBASE and the Cochrane Central Register of Controlled Trials to November 2016. Outcomes of interest included ICU and in-hospital mortality, poor neurological outcome, and the incidence of hypoglycaemia and infective complications. Data were analysed by pairwise random-effects models, with secondary analysis of differing levels of conventional glycaemic control. RESULTS: Ten RCTs involving 1066 TBI patients were included. Three studies were conducted exclusively in a TBI population, whereas in seven trials the TBI population was a sub-cohort of a mixed neurocritical or general ICU population. Glycaemic targets with intensive control ranged from 4.4 to 6.7 mmol/L, while conventional targets aimed to keep glucose levels below thresholds of 8.4-12 mmol/L. Conventional versus intensive control showed no association with ICU or hospital mortality (relative risk (RR) (95% CI) 0.93 (0.68-1.27), P = 0.64 and 1.07 (0.84-1.36), P = 0.62, respectively). The risk of a poor neurological outcome was higher with conventional control (RR (95% CI) = 1.10 (1.001-1.24), P = 0.047). However, severe hypoglycaemia occurred less frequently with conventional control (RR (95% CI) = 0.22 (0.09-0.52), P = 0.001). CONCLUSIONS: This meta-analysis of intensive glycaemic control shows no association with reduced mortality in TBI. Intensive glucose control showed a borderline significant reduction in the risk of poor neurological outcome, but markedly increased the risk of hypoglycaemia. These contradictory findings should motivate further research.
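    The pairwise random-effects pooling reported here is a standard procedure. The Python sketch below pools log relative risks with the DerSimonian-Laird estimator; the choice of estimator and the 2x2 count inputs are assumptions for illustration, not details taken from this review.

    ```python
    import numpy as np

    def pooled_rr_random_effects(events_t, n_t, events_c, n_c):
        """DerSimonian-Laird random-effects pooling of relative risks.

        events_t, n_t: events and sample sizes in the treatment arms (per trial);
        events_c, n_c: the same for the control arms. Returns the pooled RR and 95% CI.
        """
        e_t, n_t = np.asarray(events_t, float), np.asarray(n_t, float)
        e_c, n_c = np.asarray(events_c, float), np.asarray(n_c, float)

        log_rr = np.log((e_t / n_t) / (e_c / n_c))
        var = 1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c   # variance of log RR per trial
        w = 1.0 / var                                  # fixed-effect weights

        # Between-trial heterogeneity (tau^2) via the DerSimonian-Laird moment estimator.
        fixed = np.sum(w * log_rr) / np.sum(w)
        q = np.sum(w * (log_rr - fixed) ** 2)
        df = len(log_rr) - 1
        tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

        w_re = 1.0 / (var + tau2)                      # random-effects weights
        pooled = np.sum(w_re * log_rr) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
        return np.exp(pooled), ci
    ```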