
    Chandra Observation of the Cluster Environment of a WAT Radio Source in Abell 1446

    Wide-angle tail (WAT) radio sources are often found in the centers of galaxy clusters, where intracluster medium (ICM) ram pressure may bend the lobes into their characteristic C-shape. We examine the low-redshift (z=0.1035) cluster Abell 1446, host to the WAT radio source 1159+583. The cluster exhibits possible evidence for a small-scale cluster-subcluster merger as a cause of the WAT radio source morphology. This evidence includes temperature and pressure substructure along the line that bisects the WAT, as well as a possible wake of stripped interstellar material or a disrupted cool core to the southeast of the host galaxy. A filament to the north may represent cool, infalling gas that is contributing to the WAT bending, while spectroscopically determined redshifts of member galaxies may indicate some component of a merger occurring along the line of sight. The WAT model of high flow velocity and low lobe density is examined as another scenario for the bending of 1159+583. It has been argued that such a model would allow the ram pressure due to the galaxy's slow motion through the ICM to shape the WAT source. A temperature profile shows that the cluster is isothermal (kT = 4.0 keV) in a series of annuli reaching a radius of 400 kpc. There is no evidence of an ongoing cooling flow. Temperature, abundance, pressure, density, and mass profiles, as well as two-dimensional maps of temperature and pressure, are presented.
    Comment: 40 AASTeX pages including 15 postscript figures; accepted for publication in Ap
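    The bending argument above reduces to a ram-pressure estimate, P_ram = ρv². A minimal sketch, using assumed ICM density and galaxy velocities (illustrative values, not taken from the paper):

```python
# Illustrative ram-pressure estimate for a galaxy moving through the ICM.
# All input values are assumptions for demonstration, not the paper's data.

M_P = 1.6726e-24   # proton mass [g]
MU = 0.6           # assumed mean molecular weight of the ionized ICM

def ram_pressure(n_e, v_kms):
    """Ram pressure [dyn/cm^2] for electron density n_e [cm^-3]
    and galaxy velocity v relative to the ICM [km/s]."""
    rho = MU * M_P * n_e       # crude mass-density estimate
    v = v_kms * 1.0e5          # km/s -> cm/s
    return rho * v * v

# Slow motion (~100 km/s) through a typical cluster-core density:
p_slow = ram_pressure(1e-3, 100.0)
# Merger-driven bulk flow (~1000 km/s): pressure is 100x larger,
# since P_ram scales as v^2.
p_fast = ram_pressure(1e-3, 1000.0)
```

    The quadratic velocity dependence is why a low-density, high-flow-velocity lobe model matters: it lets even slow galaxy motion supply enough ram pressure to bend the tails.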

    Nationally Representative Estimates of Serum Testosterone Concentration in Never-Smoking, Lean Men Without Aging-Associated Comorbidities

    Context: Testosterone deficiency prevalence increases with age, comorbidities, and obesity. Objective: To inform clinical guidelines for testosterone deficiency management and the development of targets for nonpharmacologic intervention trials, we determined serum testosterone in never-smoking, lean men without select comorbidities in nationally representative surveys. Design, Setting, and Participants: We used cross-sectional data for never-smoking, lean men ≥20 years of age who did not have diabetes, myocardial infarction, congestive heart failure, stroke, or cancer, did not use hormone-influencing medications, and participated in morning sessions of National Health and Nutrition Examination Survey (NHANES) III (phase I, 1988-1991) or continuous NHANES (1999-2004). By age group, we determined median total testosterone (ng/mL), measured previously by a Food and Drug Administration-approved immunoassay, and median estimated free testosterone concentration. Results: In NHANES III, in never-smoking, lean men without comorbidities, median (25th, 75th percentile) testosterone was 4% to 9% higher than in all men: 20 to 39 years, 6.24 (5.16, 7.51); 40 to 59 years, 5.37 (3.83, 6.49); and ≥60 years, 4.61 (4.01, 5.18). In continuous NHANES, in never-smoking, lean men without comorbidities, levels were 13% to 24% higher than in all men: 20 to 39 years, 6.26 (5.32, 7.27); 40 to 59 years, 5.86 (4.91, 6.55); and ≥60 years, 4.22 (3.74, 5.73). In never-smoking, lean men without comorbidities, median estimated free testosterone was similar to (NHANES III) or slightly higher than (continuous NHANES) that in all men. Conclusions: These nationally representative data document testosterone levels (immunoassay) in never-smoking, lean men without select comorbidities 30 and 15 to 20 years ago. This information can be incorporated into guidelines for testosterone deficiency management and used to develop targets for nonpharmacologic intervention trials for testosterone deficiency.

    Quality of care for hypertension in the United States

    BACKGROUND: Despite heavy recent emphasis on blood pressure (BP) control, many patients fail to meet widely accepted goals. While access and adherence to therapy certainly play a role, another potential explanation is poor quality of essential care processes (QC). Yet little is known about the relationship between QC and BP control. METHODS: We assessed QC in 12 U.S. communities by reviewing the medical records of a randomly selected group of patients for the two years preceding our study. We included patients with either a diagnosis of hypertension or two visits with BPs of ≥140/90 mm Hg in their medical records. We used 28 process indicators based on explicit evidence to assess QC. The indicators covered a broad spectrum of care and were developed through a modified Delphi method. We considered patients who received all indicated care to have optimal QC. We defined control of hypertension as BP < 140/90 mm Hg in the most recent reading. RESULTS: Of 1,953 hypertensive patients, only 57% received optimal care and 42% had controlled hypertension. Patients who had received optimal care were more likely to have their BP under control at the end of the study (45% vs. 35%, p = .0006). Patients were more likely to receive optimal care if they were over age 50 (76% vs. 63%, p < .0001), had diabetes (77% vs. 71%, p = .0038), coronary artery disease (87% vs. 69%, p < .0001), or hyperlipidemia (80% vs. 68%, p < .0001), and did not smoke (73% vs. 66%, p = .0005). CONCLUSIONS: Higher QC for hypertensive patients is associated with better BP control. Younger patients without cardiac risk factors are at greatest risk for poor care. Quality measurement systems like the one presented in this study can guide future quality improvement efforts.
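    The study's two outcome definitions are simple predicates over a chart review; a minimal sketch (function and variable names are ours, not the study's):

```python
def bp_controlled(systolic, diastolic):
    """Study definition of control: most recent BP < 140/90 mm Hg."""
    return systolic < 140 and diastolic < 90

def optimal_care(indicated, received):
    """Optimal QC: the patient received every care process indicated for
    them (only a subset of the 28 indicators applies to each patient)."""
    return all(process in received for process in indicated)

# Hypothetical patient: BP 138/88, received 2 of 3 indicated processes.
controlled = bp_controlled(138, 88)
optimal = optimal_care({"bp_check", "lifestyle_advice", "med_review"},
                       {"bp_check", "lifestyle_advice"})
```

    Note that control is all-or-nothing on the most recent reading, and optimal care is all-or-nothing across indicated processes, which is why only 57% of patients cleared the optimal-care bar.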

    Incorporating statistical uncertainty in the use of physician cost profiles

    Background: Physician cost profiles (also called efficiency or economic profiles) compare the costs of care provided by a physician to those of his or her peers. These profiles are increasingly used as the basis for policy applications such as tiered physician networks. Tiers (low, average, high cost) are currently defined by health plans based on percentile cut-offs, which do not account for statistical uncertainty. In this paper we compare the percentile cut-off method to another method, based on statistical testing, for identifying high-cost or low-cost physicians. Methods: We created a claims dataset of 2004-2005 data from four Massachusetts health plans. We employed commercial software to create episodes of care and assigned responsibility for each episode to the physician with the highest proportion of professional costs. A physician's cost profile was the ratio of the sum of observed costs to the sum of expected costs across all assigned episodes. We discuss a new method of measuring standard errors of physician cost profiles that can be used in statistical testing. We then assigned each physician to one of three cost categories (low, average, or high cost) using two methods, percentile cut-offs and a t-test (p-value ≤ 0.05), and assessed the level of disagreement between the two methods. Results: Across the 8,689 physicians in our sample, 29.5% were assigned a different cost category when comparing the percentile cut-off method and the t-test. This level of disagreement varied across specialties (17.4% for gastroenterology to 45.8% for vascular surgery). Conclusions: Health plans and other payers should incorporate statistical uncertainty when they use physician cost profiles to categorize physicians into low- or high-cost tiers.
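    A sketch of the two tiering rules being compared. The cut-off values, the standard error, and the episode costs below are hypothetical placeholders, not the authors' data or their exact standard-error method:

```python
def cost_profile(observed, expected):
    """Profile = sum(observed costs) / sum(expected costs) over all
    episodes assigned to the physician; 1.0 means costs match peers."""
    return sum(observed) / sum(expected)

def tier_by_percentile(profile, low_cut=0.85, high_cut=1.15):
    """Percentile-style rule: fixed cut-offs, no uncertainty considered.
    (Real plans derive cut-offs from the profile distribution.)"""
    if profile < low_cut:
        return "low"
    if profile > high_cut:
        return "high"
    return "average"

def tier_by_ttest(profile, se, z=1.96):
    """Statistical-testing rule: call a physician high or low cost only
    if the profile differs from 1.0 by more than ~2 standard errors
    (two-sided p <= 0.05)."""
    if profile - z * se > 1.0:
        return "high"
    if profile + z * se < 1.0:
        return "low"
    return "average"

# Hypothetical physician with three assigned episodes:
p = cost_profile([1200.0, 950.0, 400.0], [1000.0, 900.0, 450.0])
```

    The two rules can disagree on the same profile: a physician just inside the percentile band may still be significantly above 1.0 when the standard error is small, and vice versa, which is the source of the 29.5% disagreement reported above.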

    What Are the Public Health Effects of Direct-to-Consumer Drug Advertising?

    Background to the debate: Only two industrialized countries, the United States and New Zealand, allow direct-to-consumer advertising (DTCA) of prescription medicines, although New Zealand is planning a ban [1]. The challenge for these governments is ensuring that DTCA is more beneficial than harmful. Proponents of DTCA argue that it helps to inform the public about available treatments and stimulates appropriate use of drugs for high-priority illnesses (such as statin use in people with ischemic heart disease). Critics argue that the information in the adverts is often biased and misleading, and that DTCA raises prescribing costs without net evidence of health benefits.

    Evidence-informed health policy 3 – Interviews with the directors of organizations that support the use of research evidence

    Background: Previous surveys of organizations that support the development of evidence-informed health policies have focused on organizations that produce clinical practice guidelines (CPGs) or undertake health technology assessments (HTAs). Only rarely have surveys focused, at least in part, on units that directly support the use of research evidence in developing health policy at the international, national, or state/provincial level (i.e., government support units, or GSUs), that are in some way successful or innovative, or that support the use of research evidence in low- and middle-income countries (LMICs). Methods: We drew on many people and organizations around the world, including our project reference group, to generate a list of organizations to survey. We modified a questionnaire originally developed by the Appraisal of Guidelines, Research and Evaluation in Europe (AGREE) collaboration, adapting one version for organizations producing CPGs and HTAs and another for GSUs. We sent the questionnaire by email to 176 organizations and followed up periodically with non-responders by email and telephone. Results: We received completed questionnaires from 152 (86%) organizations. More than one-half of the organizations (and particularly HTA agencies) reported that examples from other countries were helpful in establishing their organization. A higher proportion of GSUs than CPG- or HTA-producing organizations involved target users in the selection of topics or the services undertaken. Most organizations have few (five or fewer) full-time equivalent (FTE) staff. More than four-fifths of organizations reported providing panels with, or using, systematic reviews. GSUs tended to use a wide variety of explicit processes to value research evidence, but none with the frequency with which organizations producing CPGs, HTAs, or both prioritized evidence by its quality. Between one-half and two-thirds of organizations do not collect data systematically about uptake, and roughly the same proportions do not systematically evaluate their usefulness or impact in other ways. Conclusion: The findings from our survey, the most broadly based of its kind, extend or clarify the applicability of the messages arising from previous surveys and related documentary analyses, such as how the 'principles of evidence-based medicine dominate current guideline programs' and the importance of collaborating with other organizations. The survey also provides a description of the history, structure, processes, outputs, and perceived strengths and weaknesses of existing organizations, from which those establishing or leading similar organizations can draw.

    Fermi Large Area Telescope Constraints on the Gamma-ray Opacity of the Universe

    The Extragalactic Background Light (EBL) includes photons with wavelengths from ultraviolet to infrared, which are effective at attenuating gamma rays with energy above ~10 GeV during propagation from sources at cosmological distances. This results in a redshift- and energy-dependent attenuation of the gamma-ray flux of extragalactic sources such as blazars and Gamma-Ray Bursts (GRBs). The Large Area Telescope on board Fermi detects a sample of gamma-ray blazars with redshifts up to z~3, and GRBs with redshifts up to z~4.3. Using photons above 10 GeV collected by Fermi over more than one year of observations of these sources, we investigate the effect of gamma-ray flux attenuation by the EBL. We place upper limits on the gamma-ray opacity of the Universe at various energies and redshifts and compare these with predictions from well-known EBL models. We find that an EBL intensity in the optical-ultraviolet wavelengths as great as that predicted by the "baseline" model of Stecker et al. (2006) can be ruled out with high confidence.
    Comment: 42 pages, 12 figures, accepted version (24 Aug. 2010) for publication in ApJ; Contact authors: A. Bouvier, A. Chen, S. Raino, S. Razzaque, A. Reimer, L.C. Reye
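    The attenuation being constrained follows the standard absorption law: the observed flux of a source at redshift z is the intrinsic flux suppressed by exp(-τ(E, z)). A minimal sketch, with a placeholder optical depth rather than any real EBL model:

```python
import math

def attenuated_flux(intrinsic_flux, tau):
    """Observed flux after EBL absorption: F_obs = F_int * exp(-tau)."""
    return intrinsic_flux * math.exp(-tau)

# tau(E, z) grows with both photon energy and source redshift. If an EBL
# model predicts a large tau for a distant blazar, then detecting >10 GeV
# photons from it becomes improbable under that model; this value is a
# made-up placeholder for illustration.
f_obs = attenuated_flux(1.0, 2.0)   # tau = 2 suppresses flux to exp(-2)
```

    This is the logic behind the upper limits: each high-energy photon actually detected from a high-redshift source caps how opaque the Universe can be at that energy and distance.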

    A New Role for Translation Initiation Factor 2 in Maintaining Genome Integrity

    Escherichia coli translation initiation factor 2 (IF2) performs the unexpected function of promoting the transition from recombination to replication during bacteriophage Mu transposition in vitro, leading to initiation by replication restart proteins. This function has suggested a role for IF2 in engaging cellular restart mechanisms and regulating the maintenance of genome integrity. To examine the potential effect of IF2 on restart mechanisms, we characterized its influence on cellular recovery following DNA damage by methyl methanesulfonate (MMS) and UV irradiation. Mutations that prevent expression of full-length IF2-1 or of the truncated IF2-2 and IF2-3 isoforms affected cellular growth or recovery following DNA damage differently, influencing different restart mechanisms. A deletion mutant (del1) expressing only IF2-2/3 was severely sensitive to the DNA-damaging agent MMS during growth. Although as proficient as wild type in repairing DNA lesions and promoting replication restart upon removal of MMS, this mutant was nevertheless unable to sustain cell growth in the presence of MMS; growth in MMS could, however, be partly restored by disruption of sulA, which encodes a cell division inhibitor induced during replication fork arrest. Moreover, these characteristics of del1 MMS sensitivity were shared by the restart mutant priA300, which encodes a helicase-deficient restart protein. Epistasis analysis indicated that del1 in combination with priA300 had no further effect on cellular recovery from MMS and UV treatment; however, the del2/3 mutation, which allows expression of only IF2-1, synergistically increased UV sensitivity in combination with priA300. The results indicate that full-length IF2, in a function distinct from that of the truncated forms, influences the engagement or activity of restart functions dependent on PriA helicase, allowing cellular growth when a DNA-damaging agent is present.

    Gamma-ray and radio properties of six pulsars detected by the Fermi Large Area Telescope

    We report the detection of pulsed γ-rays for PSRs J0631+1036, J0659+1414, J0742-2822, J1420-6048, J1509-5850, and J1718-3825 using the Large Area Telescope on board the Fermi Gamma-ray Space Telescope (formerly known as GLAST). Although these six pulsars are diverse in terms of their spin parameters, they share an important feature: their γ-ray light curves are (at least given the current count statistics) single peaked. For two pulsars, there are hints of a double-peaked structure in the light curves. The shapes of the observed light curves of this group of pulsars are discussed in light of models in which the emission originates high in the magnetosphere. The observed phases of the γ-ray light curves are, in general, consistent with those predicted by high-altitude models, although we speculate that the γ-ray emission of PSR J0659+1414, possibly featuring the softest spectrum of all Fermi pulsars coupled with a very low efficiency, arises from relatively low in the magnetosphere. High-quality radio polarization data are available, showing that all but one have a high degree of linear polarization. This allows us to place some constraints on the viewing geometry and aids the comparison of the γ-ray light curves with high-energy beam models.