
    Experimental conditions affect the outcome of Plasmodium falciparum platelet-mediated clumping assays

Background: Platelet-mediated clumping of Plasmodium falciparum-infected erythrocytes (IE) is a parasite adhesion phenotype that has been associated with severe malaria in some, but not all, field isolate studies. A variety of experimental conditions have been used to study clumping in vitro, with substantial differences in parasitaemia (Pt), haematocrit (Ht), and time of reaction between studies. It is unknown whether these experimental variables affect the outcome of parasite clumping assays. Methods: The effects of Pt (1, 4 and 12%), Ht (2, 5 and 10%) and time (15 min, 30 min, 1 h, 2 h) on the clumping of P. falciparum clone HB3 were examined. The effects of platelet freshness and parasite maturity were also studied. Results: At low Ht (2%), the Pt of the culture has a large effect on clumping, with significantly higher clumping occurring at 12% Pt (mean 47% of IE in clumps) compared to 4% Pt (mean 26% IE in clumps) or 1% Pt (mean 7% IE in clumps) (ANOVA, p = 0.0004). Similarly, at low Pt (1%), the Ht of the culture has a large effect on clumping, with significantly higher clumping occurring at 10% Ht (mean 62% IE in clumps) compared to 5% Ht (mean 25% IE in clumps) or 2% Ht (mean 10% IE in clumps) (ANOVA, p = 0.0004). Combinations of high Ht and high Pt were impractical because of the difficulty of assessing clumping in densely packed IE and the rapid formation of enormous clumps that could not be counted accurately. There was no significant difference in clumping when fresh platelets were used compared to platelets stored at 4°C for 10 days. Clumping was a property of mature pigmented trophozoites and schizonts but not ring-stage parasites. Conclusion: The Pt and Ht at which in vitro clumping assays are set up have a profound effect on the outcome. All previous field isolate studies on clumping and malaria severity suffer from potential problems in experimental design and methodology. Future studies of clumping should use standardized conditions and control for Pt, and should take into account the limitations and variability inherent in the assay.
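To make the group comparison concrete, the sketch below runs the kind of one-way ANOVA quoted in the abstract across three parasitaemia levels using scipy.stats.f_oneway. The replicate clumping percentages are invented placeholders, not the study's data; only the reported group means (roughly 47%, 26% and 7% of IE in clumps) guided their rough magnitude.

```python
# Hypothetical illustration of a one-way ANOVA comparing clumping across
# parasitaemia (Pt) levels. The replicate values below are invented
# placeholders; they are NOT the data from the study.
from scipy.stats import f_oneway

# Percent of infected erythrocytes (IE) found in clumps, per replicate assay
clumping_pt12 = [44.0, 49.5, 47.2]   # 12% parasitaemia
clumping_pt4 = [24.1, 27.8, 26.3]    # 4% parasitaemia
clumping_pt1 = [6.2, 7.9, 6.8]       # 1% parasitaemia

f_stat, p_value = f_oneway(clumping_pt12, clumping_pt4, clumping_pt1)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```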

    Measurement of the inclusive and dijet cross-sections of b-jets in pp collisions at sqrt(s) = 7 TeV with the ATLAS detector

The inclusive and dijet production cross-sections have been measured for jets containing b-hadrons (b-jets) in proton-proton collisions at a centre-of-mass energy of sqrt(s) = 7 TeV, using the ATLAS detector at the LHC. The measurements use data corresponding to an integrated luminosity of 34 pb^-1. The b-jets are identified using either a lifetime-based method, where secondary decay vertices of b-hadrons in jets are reconstructed using information from the tracking detectors, or a muon-based method, where the presence of a muon is used to identify semileptonic decays of b-hadrons inside jets. The inclusive b-jet cross-section is measured as a function of transverse momentum in the range 20 < pT < 400 GeV and rapidity in the range |y| < 2.1. The bbbar-dijet cross-section is measured as a function of the dijet invariant mass in the range 110 < m_jj < 760 GeV, the azimuthal angle difference between the two jets, and the angular variable chi in two dijet mass regions. The results are compared with next-to-leading-order QCD predictions. Good agreement is observed between the measured cross-sections and the predictions obtained using POWHEG + Pythia. MC@NLO + Herwig shows good agreement with the measured bbbar-dijet cross-section. However, it does not reproduce the measured inclusive cross-section well, particularly for central b-jets with large transverse momenta.

    A simulation model approach to analysis of the business case for eliminating health care disparities

Background: Purchasers can play an important role in eliminating racial and ethnic disparities in health care. A need exists to develop a compelling "business case" from the employer perspective to put, and keep, the issue of racial/ethnic disparities in health care on the quality improvement agenda for health plans and providers. Methods: To illustrate a method for calculating an employer business case for disparity reduction and to compare the business case in two clinical areas, we conducted analyses of the direct (medical care costs paid by employers) and indirect (absenteeism, productivity) effects of eliminating known racial/ethnic disparities in mammography screening and appropriate medication use for patients with asthma. We used Markov simulation models to estimate the consequences, for defined populations of African-American employees or health plan members, of a 10% increase in HEDIS mammography rates or a 10% increase in appropriate medication use among either adults or children/adolescents with asthma. Results: The savings per employed African-American woman aged 50-65 associated with a 10% increase in the HEDIS mammography rate, from direct medical expenses and indirect costs (absenteeism, productivity) combined, was $50. The findings for asthma were more favorable from an employer point of view, at approximately $1,660 per person if medication adherence rates in African-American employees or dependents were raised by 10%. Conclusions: For the employer business case, both clinical scenarios modeled showed positive results. There is a greater potential financial gain related to eliminating a disparity in asthma medications than there is for eliminating a disparity in mammography rates.
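As a rough illustration of the Markov cohort approach described above, the sketch below propagates a hypothetical cohort through a small set of health states and compares accumulated costs under baseline versus improved-adherence transition probabilities. The states, probabilities and costs are invented placeholders for illustration only; they are not parameters from the study.

```python
# Minimal Markov cohort cost model (hypothetical parameters, illustration only).
import numpy as np

STATES = ["controlled", "uncontrolled", "hospitalised"]  # hypothetical health states
COST_PER_YEAR = np.array([200.0, 900.0, 5000.0])         # assumed direct + indirect cost per state ($)

def simulate(transition_matrix, years=10, cohort=1000):
    """Propagate a cohort through the Markov chain and return total expected cost."""
    distribution = np.array([cohort, 0.0, 0.0])           # everyone starts 'controlled'
    total_cost = 0.0
    for _ in range(years):
        distribution = distribution @ transition_matrix   # one annual cycle
        total_cost += distribution @ COST_PER_YEAR
    return total_cost

# Hypothetical annual transition probabilities (rows sum to 1).
baseline = np.array([[0.80, 0.15, 0.05],
                     [0.30, 0.55, 0.15],
                     [0.40, 0.40, 0.20]])

# Improved medication adherence shifts probability mass toward 'controlled'.
improved = np.array([[0.86, 0.11, 0.03],
                     [0.38, 0.50, 0.12],
                     [0.46, 0.38, 0.16]])

saving = simulate(baseline) - simulate(improved)
print(f"Estimated saving per 1000-person cohort over 10 years: ${saving:,.0f}")
```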

Jet energy measurement with the ATLAS detector in proton-proton collisions at √s = 7 TeV

The jet energy scale and its systematic uncertainty are determined for jets measured with the ATLAS detector at the LHC in proton-proton collision data at a centre-of-mass energy of √s = 7 TeV corresponding to an integrated luminosity of 38 pb^-1. Jets are reconstructed with the anti-kt algorithm with distance parameters R = 0.4 or R = 0.6. Jet energy and angle corrections are determined from Monte Carlo simulations to calibrate jets with transverse momenta pT ≥ 20 GeV and pseudorapidities |η| < 4.5. The jet energy systematic uncertainty is estimated using the single isolated hadron response measured in situ and in test-beams, exploiting the transverse momentum balance between central and forward jets in events with dijet topologies, and studying systematic variations in Monte Carlo simulations. The jet energy uncertainty is less than 2.5% in the central calorimeter region (|η| < 0.8) for jets with 60 ≤ pT < 800 GeV, and is maximally 14% for pT < 30 GeV in the most forward region 3.2 ≤ |η| < 4.5. The jet energy is validated for jet transverse momenta up to 1 TeV to the level of a few percent using several in situ techniques by comparing with a well-known reference such as the recoiling photon pT, the sum of the transverse momenta of tracks associated to the jet, or a system of low-pT jets recoiling against a high-pT jet. More sophisticated jet calibration schemes are presented based on calorimeter cell energy density weighting or hadronic properties of jets, aiming for an improved jet energy resolution and a reduced flavour dependence of the jet response. The systematic uncertainty of the jet energy determined from a combination of in situ techniques is consistent with the one derived from single hadron response measurements over a wide kinematic range. The nominal corrections and uncertainties are derived for isolated jets in an inclusive sample of high-pT jets. Special cases such as event topologies with close-by jets, or selections of samples with an enhanced content of jets originating from light quarks, heavy quarks or gluons are also discussed and the corresponding uncertainties are determined. © 2013 CERN for the benefit of the ATLAS collaboration
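One of the in situ inputs mentioned above is the transverse momentum balance between a central and a forward jet in dijet events. The sketch below computes a per-event asymmetry and an average relative response from it, using one common convention; both the convention and the event values are assumptions for illustration, not the ATLAS calibration procedure itself.

```python
# Hypothetical dijet pT-balance illustration (not the ATLAS calibration code).
# Assumed convention: A = (pT_probe - pT_ref) / pT_avg, relative response = (2 + A) / (2 - A),
# which algebraically equals pT_probe / pT_ref.
import statistics

# (pT of central reference jet, pT of forward probe jet) in GeV -- invented values
dijet_events = [(102.0, 96.5), (150.0, 141.0), (80.0, 77.5), (205.0, 196.0)]

def relative_response(pt_ref, pt_probe):
    avg = 0.5 * (pt_ref + pt_probe)
    asym = (pt_probe - pt_ref) / avg
    return (2.0 + asym) / (2.0 - asym)

responses = [relative_response(ref, probe) for ref, probe in dijet_events]
print(f"Mean relative response of forward jets: {statistics.mean(responses):.3f}")
```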

    A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems

Background: Implementing new practices requires changes in the behaviour of relevant actors, and this is facilitated by understanding of the determinants of current and desired behaviours. The Theoretical Domains Framework (TDF) was developed by a collaboration of behavioural scientists and implementation researchers who identified theories relevant to implementation and grouped constructs from these theories into domains. The collaboration aimed to provide a comprehensive, theory-informed approach to identify determinants of behaviour. The first version was published in 2005, and a subsequent version following a validation exercise was published in 2012. This guide offers practical guidance for those who wish to apply the TDF to assess implementation problems and support intervention design. It presents a brief rationale for using a theoretical approach to investigate and address implementation problems, summarises the TDF and its development, and describes how to apply the TDF to achieve implementation objectives. Examples from the implementation research literature are presented to illustrate relevant methods and practical considerations. Methods: Researchers from Canada, the UK and Australia attended a 3-day meeting in December 2012 to build an international collaboration among researchers and decision-makers interested in advancing the use of the TDF. The participants were experienced in using the TDF to assess implementation problems, design interventions, and/or understand change processes. This guide is an output of the meeting and also draws on the authors' collective experience. Examples from the implementation research literature judged by the authors to be representative of specific applications of the TDF are included in this guide. Results: We explain and illustrate methods, with a focus on qualitative approaches, for selecting and specifying target behaviours key to implementation, selecting the study design, deciding the sampling strategy, developing study materials, collecting and analysing data, and reporting findings of TDF-based studies. Areas for development include methods for triangulating data (e.g. from interviews, questionnaires and observation) and methods for designing interventions based on TDF-based problem analysis. Conclusions: We offer this guide to the implementation community to assist in the application of the TDF to achieve implementation objectives. Benefits of using the TDF include the provision of a theoretical basis for implementation studies, good coverage of potential reasons for slow diffusion of evidence into practice, and a method for progressing from theory-based investigation to intervention.

The Concise Guide to PHARMACOLOGY 2015/16: Ligand-gated ion channels

The Concise Guide to PHARMACOLOGY 2015/16 provides concise overviews of the key properties of over 1750 human drug targets with their pharmacology, plus links to an open access knowledgebase of drug targets and their ligands (www.guidetopharmacology.org), which provides more detailed views of target and ligand properties. The full contents can be found at http://onlinelibrary.wiley.com/doi/10.1111/bph.13349/full. Ligand-gated ion channels are one of the eight major pharmacological targets into which the Guide is divided, with the others being: G protein-coupled receptors, voltage-gated ion channels, other ion channels, nuclear hormone receptors, catalytic receptors, enzymes and transporters. These are presented with nomenclature guidance and summary information on the best available pharmacological tools, alongside key references and suggestions for further reading. The Concise Guide is published in landscape format in order to facilitate comparison of related targets. It is a condensed version of material contemporary to late 2015, which is presented in greater detail and constantly updated on the website www.guidetopharmacology.org, superseding data presented in the previous Guides to Receptors & Channels and the Concise Guide to PHARMACOLOGY 2013/14. It is produced in conjunction with NC-IUPHAR and provides the official IUPHAR classification and nomenclature for human drug targets, where appropriate. It consolidates information previously curated and displayed separately in IUPHAR-DB and GRAC and provides a permanent, citable, point-in-time record that will survive database updates.

    A supermatrix analysis of genomic, morphological, and paleontological data from crown Cetacea

Background: Cetacea (dolphins, porpoises, and whales) is a clade of aquatic species that includes the most massive, deepest diving, and largest brained mammals. Understanding the temporal pattern of diversification in the group as well as the evolution of cetacean anatomy and behavior requires a robust and well-resolved phylogenetic hypothesis. Although a large body of molecular data has accumulated over the past 20 years, DNA sequences of cetaceans have not been directly integrated with the rich cetacean fossil record to reconcile discrepancies among molecular and morphological characters. Results: We combined new nuclear DNA sequences, including segments of six genes (~2800 basepairs) from the functionally extinct Yangtze River dolphin, with an expanded morphological matrix and published genomic data. Diverse analyses of these data resolved the relationships of 74 taxa that represent all extant families and 11 extinct families of Cetacea. The resulting supermatrix (61,155 characters) and its sub-partitions were analyzed using parsimony methods. Bayesian and maximum likelihood (ML) searches were conducted on the molecular partition, and a molecular scaffold obtained from these searches was used to constrain a parsimony search of the morphological partition. Based on analysis of the supermatrix and model-based analyses of the molecular partition, we found overwhelming support for 15 extant clades. When extinct taxa are included, we recovered trees that are significantly correlated with the fossil record. These trees were used to reconstruct the timing of cetacean diversification and the evolution of characters shared by "river dolphins," a non-monophyletic set of species according to all of our phylogenetic analyses. Conclusions: The parsimony analysis of the supermatrix and the analysis of morphology constrained to fit the ML/Bayesian molecular tree yielded broadly congruent phylogenetic hypotheses. In trees from both analyses, all Oligocene taxa included in our study fell outside crown Mysticeti and crown Odontoceti, suggesting that these two clades radiated in the late Oligocene or later, contra some recent molecular clock studies. Our trees also imply that many character states shared by river dolphins evolved in their oceanic ancestors, contradicting the hypothesis that these characters are convergent adaptations to fluvial habitats.
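To show what assembling a supermatrix involves in practice, the sketch below concatenates per-gene alignments and a morphological character block into a single character matrix, padding taxa that lack data for a partition with missing-state symbols. The partition names and toy sequences are invented for illustration; this is not the pipeline or data used in the study.

```python
# Hypothetical sketch of supermatrix assembly: concatenate aligned partitions
# per taxon, filling missing partitions with '?' so every row has equal length.
# Partition contents below are invented toy data, not the study's alignments.

partitions = {
    "gene_A": {"Tursiops": "ATGCTA", "Physeter": "ATGCTG", "Balaena": "ATGTTA"},
    "gene_B": {"Tursiops": "GGCATT", "Balaena": "GGCATC"},  # Physeter missing here
    "morphology": {"Tursiops": "0101", "Physeter": "0111", "Balaena": "1101"},
}

taxa = sorted({taxon for block in partitions.values() for taxon in block})

supermatrix = {}
for taxon in taxa:
    row = []
    for name, block in partitions.items():
        width = len(next(iter(block.values())))     # aligned width of this partition
        row.append(block.get(taxon, "?" * width))   # pad taxa missing this partition
    supermatrix[taxon] = "".join(row)

for taxon, chars in supermatrix.items():
    print(f"{taxon:<10} {chars}  ({len(chars)} characters)")
```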
