134 research outputs found
The cost of simplifying air travel when modeling disease spread
Background: Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. Methodology/Principal Findings: Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. Conclusions/Significance: If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in the specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and the sensitivity to errors in parameter estimation should be analyzed.
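The "pipe" model described above can be sketched in a few lines. This is an illustrative toy, not the authors' code: the airport names and trip counts are invented, and the routing rule (each departing traveler assigned a destination in proportion to that destination's share of system-wide arrivals, excluding the origin) is one reasonable reading of the model.

```python
# Toy sketch of the "pipe" model (not the authors' code): a traveler leaving
# an airport is assigned a destination in proportion to that destination's
# share of system-wide arrivals. All names and counts below are invented.

# Hypothetical daily route-level trip counts: trips[origin][destination]
trips = {
    "JFK": {"LAX": 900, "SML": 10},   # "SML" stands in for a small airport
    "LAX": {"JFK": 880, "SML": 20},
    "SML": {"JFK": 15, "LAX": 15},
}

airports = list(trips)
departures = {a: sum(d.values()) for a, d in trips.items()}
arrivals = {b: sum(trips[a].get(b, 0) for a in airports) for b in airports}
total_arrivals = sum(arrivals.values())

def pipe_flow(origin, dest):
    """Expected daily origin->dest trips under the pipe model: departures
    from the origin, split by each destination's share of all arrivals
    (renormalised to exclude the origin itself)."""
    share = arrivals[dest] / (total_arrivals - arrivals[origin])
    return departures[origin] * share

for o in airports:
    for d in airports:
        if d != o:
            print(f"{o}->{d}: pipe={pipe_flow(o, d):.1f}, "
                  f"actual={trips[o].get(d, 0)}")
```

Even on these made-up numbers, the pipe model overestimates trips into the small airport and underestimates the heavily traveled JFK-LAX route, mirroring the bias the abstract describes.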
Palliative care needs in patients hospitalized with heart failure (PCHF) study: rationale and design
Abstract Aims The primary aim of this study is to provide data to inform the design of a randomized controlled clinical trial (RCT) of a palliative care (PC) intervention in heart failure (HF). We will identify an appropriate study population with a high prevalence of PC needs defined using quantifiable measures. We will also identify which components a specific and targeted PC intervention in HF should include and attempt to define the most relevant trial outcomes. Methods An unselected, prospective, near-consecutive cohort of patients admitted to hospital with acute decompensated HF will be enrolled over a 2-year period. All potential participants will be screened using B-type natriuretic peptide and echocardiography, and all those enrolled will be extensively characterized in terms of their HF status, comorbidity, and PC needs. Quantitative assessment of PC needs will include evaluation of general and disease-specific quality of life, mood, symptom burden, caregiver burden, and end-of-life care. Inpatient assessments will be performed and, after discharge, outpatient assessments will be carried out every 4 months for up to 2.5 years. Participants will be followed up for a minimum of 1 year for hospital admissions, and for place and cause of death. Methods for identifying patients with HF who have PC needs will be evaluated, and estimates of healthcare utilisation performed. Conclusion By assessing the prevalence of these needs, describing how these needs change over time, and evaluating how PC needs can best be identified, we will provide the foundation for designing an RCT of a PC intervention in HF.
Randomized trial comparing proactive, high-dose versus reactive, low-dose intravenous iron supplementation in hemodialysis (PIVOTAL): Study design and baseline data
Background: Intravenous (IV) iron supplementation is a standard maintenance treatment for hemodialysis (HD) patients, but the optimum dosing regimen is unknown. Methods: PIVOTAL (Proactive IV irOn Therapy in hemodiALysis patients) is a multicenter, open-label, blinded-endpoint, randomized controlled (PROBE) trial. Incident HD adults with a serum ferritin <400 μg/L and transferrin saturation (TSAT) <30% were randomized to a proactive, high-dose IV iron arm (iron sucrose 400 mg/month unless ferritin >700 μg/L and/or TSAT ≥40%) or a reactive, low-dose IV iron arm (iron sucrose administered if ferritin <200 μg/L or TSAT <20%). We hypothesized that proactive, high-dose IV iron would be noninferior to reactive, low-dose IV iron for the primary outcome of first occurrence of nonfatal myocardial infarction (MI), nonfatal stroke, hospitalization for heart failure, or death from any cause. If noninferiority is confirmed with a noninferiority limit of 1.25 for the hazard ratio of the proactive strategy relative to the reactive strategy, a test for superiority will be carried out. Secondary outcomes include infection-related endpoints, erythropoiesis-stimulating agent (ESA) dose requirements, and quality-of-life measures. As an event-driven trial, the study will continue until at least 631 primary outcome events have accrued, but the expected duration of follow-up is 2-4 years. Results: Of the 2,589 patients screened across 50 UK sites, 2,141 (83%) were randomized. At baseline, 65.3% were male, the median age was 65 years, and 79% were white. According to the eligibility criteria, all patients were on an ESA at screening. Prior stroke and MI were present in 8% and 9% of the cohort, respectively, and 44% of patients had diabetes at baseline. Baseline data for the randomized cohort were generally concordant with recent data from the UK Renal Registry. Conclusions: PIVOTAL will provide important information about the optimum dosing of IV iron in HD patients representative of usual clinical practice. Trial Registration: EudraCT number: 2013-002267-25.
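The sequential noninferiority-then-superiority test described above can be sketched as follows. The margin of 1.25 comes from the abstract; the hazard ratio and log-scale standard error passed in below are hypothetical illustrations, not trial results.

```python
import math

# Sketch of the trial's sequential testing logic: noninferiority of the
# proactive arm is declared if the upper 95% confidence limit of the hazard
# ratio (proactive vs. reactive) lies below the prespecified margin of 1.25;
# only then is superiority tested (upper limit below 1.0). Inputs below are
# hypothetical, not PIVOTAL results.

def assess(hr, se_log_hr, margin=1.25, z=1.96):
    """Return (lower, upper, noninferior, superior) for a hazard ratio
    estimate with the given standard error on the log scale."""
    lower = math.exp(math.log(hr) - z * se_log_hr)
    upper = math.exp(math.log(hr) + z * se_log_hr)
    noninferior = upper < margin
    superior = noninferior and upper < 1.0
    return lower, upper, noninferior, superior
```

For example, `assess(0.85, 0.08)` gives an upper limit just under 1.0, so both noninferiority and superiority would be declared, while `assess(1.10, 0.10)` fails the 1.25 margin.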
Explosions of water clusters in intense laser fields
Energetic, highly charged oxygen ions, O^q+, are copiously produced upon laser field-induced disassembly of highly charged water clusters, (H2O)_n and (D2O)_n, n ≤ 60, that are formed by seeding high-pressure helium or argon with water vapor. Ar_n clusters (n ~ 40000) formed under similar experimental conditions are found to undergo disassembly in the Coulomb explosion regime, with the energies of ions showing a q^2 dependence. Water clusters, which are argued to be considerably smaller in size, should also disassemble in the same regime, but the energies of fragment O ions are found to depend linearly on q, which, according to prevailing wisdom, ought to be a signature of the hydrodynamic expansion expected of much larger clusters. The implication of these observations for our understanding of the two cluster explosion regimes, Coulomb explosion and hydrodynamic expansion, is discussed. Our results indicate that the charge-state dependence of ion energy does not constitute an unambiguous experimental signature of the cluster explosion regime. Comment: Submitted to Phys. Rev.
Intravenous iron and SGLT2 inhibitors in iron-deficient patients with heart failure and reduced ejection fraction
Aims:
To explore the potential interaction between use of SGLT2 inhibitors and the increase in haemoglobin in patients randomized to intravenous iron or the control group in the IRONMAN (Effectiveness of Intravenous Iron Treatment versus Standard Care in Patients with Heart Failure and Iron Deficiency) trial.
Methods and results:
This was a post hoc exploratory analysis of the IRONMAN trial which randomized patients with heart failure, a left ventricular ejection fraction (LVEF) ≤ 45% and iron deficiency (transferrin saturation <20% or ferritin <100 μg/L) to open label intravenous ferric derisomaltose or usual care. Of the 1137 randomized patients, 29 (2.6%) were taking an SGLT2 inhibitor at baseline. The mean (SD) change in haemoglobin from baseline at 4 weeks in those taking an SGLT2 inhibitor at baseline was 1.3 (1.2) g/dL in patients randomized to ferric derisomaltose and 0.1 (0.7) g/dL in the usual care group; between-group difference = 1.0 g/dL (95% CI 0.1, 1.8). The equivalent numbers in the no SGLT2 inhibitor group were 0.6 (0.9) g/dL in those randomized to ferric derisomaltose and 0.1 (0.8) g/dL in the usual care group; between-group difference = 0.4 g/dL (95% CI 0.3, 1.6); interaction P value = 0.10. No patient receiving an SGLT2 inhibitor at baseline developed polycythaemia during follow-up (defined as haemoglobin >16.5 g/dL [men] or >16 g/dL [women]).
Conclusions:
In the IRONMAN trial, there was a trend toward a greater increase in haemoglobin with ferric derisomaltose in iron-deficient patients taking an SGLT2 inhibitor at baseline, compared with those not taking one.
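A minimal, unadjusted sketch of the between-group comparison reported above, assuming (hypothetically) a 15 vs. 14 split of the 29 SGLT2-inhibitor patients across the two arms; the abstract's reported difference of 1.0 g/dL is presumably covariate-adjusted, so this raw calculation will not reproduce it exactly.

```python
import math

# Unadjusted sketch of the subgroup comparison above: difference in mean
# haemoglobin change (ferric derisomaltose minus usual care) with a
# normal-approximation 95% CI. Means and SDs come from the abstract; the
# per-arm sample sizes (15 vs. 14) are hypothetical, since the abstract
# only says 29 patients were on an SGLT2 inhibitor at baseline.

def diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Difference in means with a large-sample (z-based) 95% CI."""
    diff = m1 - m2
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff, diff - z * se, diff + z * se

# SGLT2-inhibitor subgroup: 1.3 (1.2) g/dL vs. 0.1 (0.7) g/dL
diff, lo, hi = diff_ci(1.3, 1.2, 15, 0.1, 0.7, 14)
print(f"difference = {diff:.1f} g/dL (95% CI {lo:.1f}, {hi:.1f})")
```

The point of the sketch is the CI construction, not reproducing the trial analysis: the unadjusted difference (1.2 g/dL) differs from the abstract's adjusted 1.0 g/dL.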
Gravitational Coupling and Dynamical Reduction of The Cosmological Constant
We introduce a dynamical model to reduce a large cosmological constant to a
sufficiently small value. The basic ingredient in this model is a distinction
which has been made between the two unit systems used in cosmology and particle
physics. We have used a conformal invariant gravitational model to define a
particular conformal frame in terms of large scale properties of the universe.
It is then argued that the contributions of mass scales in particle physics to
the vacuum energy density should be considered in a different conformal frame.
In this manner, a decaying mechanism is presented in which the conformal factor
appears as a dynamical field and plays a key role in relaxing a large effective
cosmological constant. Moreover, we argue that this model also provides a
possible explanation for the coincidence problem. Comment: To appear in GR
Dark Energy and Gravity
I review the problem of dark energy focusing on the cosmological constant as
the candidate and discuss its implications for the nature of gravity. Part 1
briefly overviews the currently popular 'concordance cosmology' and summarises
the evidence for dark energy. It also provides the observational and
theoretical arguments in favour of the cosmological constant as the candidate
and emphasises why no other approach really solves the conceptual problems
usually attributed to the cosmological constant. Part 2 describes some of the
approaches to understand the nature of the cosmological constant and attempts
to extract the key ingredients which must be present in any viable solution. I
argue that (i) the cosmological constant problem cannot be satisfactorily solved
until the gravitational action is made invariant under the shift of the matter
lagrangian by a constant and (ii) this cannot happen if the metric is the
dynamical variable. Hence the cosmological constant problem essentially has to
do with our (mis)understanding of the nature of gravity. Part 3 discusses an
alternative perspective on gravity in which the action is explicitly invariant
under the above transformation. Extremizing this action leads to an equation
determining the background geometry which gives Einstein's theory at the lowest
order with Lanczos-Lovelock type corrections. (Condensed abstract.) Comment: Invited Review for a special Gen.Rel.Grav. issue on Dark Energy, edited by G.F.R.Ellis, R.Maartens and H.Nicolai; revtex; 22 pages; 2 figures