87 research outputs found

    Virtual colonoscopy; real misses

    Full text link
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/72356/1/j.1572-0241.2003.08448.x.pd

    Cost-effectiveness of Airline Defibrillators: Is Peace of Mind More Important Than Saving Lives?

    Full text link
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/74922/1/j.1524-4733.2001.40201-5.x.pd

    Do fall additions of salmon carcasses benefit food webs in experimental streams?

    Get PDF
    This is the postprint version of the article. The printed version of the article can be found here: http://www.springerlink.com/content/jl0122219v283124/
    Research showing that salmon carcasses support the productivity and biodiversity of aquatic and riparian ecosystems has been conducted over a variety of spatial and temporal scales. In some studies, carcasses were manipulated in a single pulse or at a single loading rate, or manipulations occurred during summer and early fall, rather than simulating the natural dynamic of an extended spawning period, testing a gradient of loading rates, or testing carcass effects in late fall-early winter, when some salmon stocks in the US Pacific Northwest spawn. To address these discrepancies, we manipulated salmon carcass biomass in 16 experimental channels located in the sunlit floodplain of the Cedar River, WA, USA between mid-September and mid-December 2006. Total carcass loads ranged from 0 to 4.0 kg/m² (0, 0.001, 0.01, 0.1, 0.5, 1.0, 2.0 and 4.0 kg/m², n = 2 per treatment) and were added to mimic the temporal dynamic of an extended spawning period. We found little evidence that carcasses influenced primary producer biomass or fish growth; however, nutrients and some primary consumer populations increased with loading rate. These effects varied through time. We hypothesize that the variable effects of carcasses were a result of ambient abiotic conditions, such as light, temperature, and disturbance, that constrained trophic responses. There was some evidence to suggest that peak responses for primary producers and consumers occurred at a loading rate of ~1.0-2.0 kg/m², which was similar to other experimental studies conducted during summer.

    Radio-Excess IRAS Galaxies: PMN/FSC Sample Selection

    Full text link
    A sample of 178 extragalactic objects is defined by correlating the 60 micron IRAS FSC with the 5 GHz PMN catalog. Of these, 98 objects lie above the radio/far-infrared relation for radio-quiet objects. These radio-excess galaxies and quasars have a uniform distribution of radio excesses and appear to be a new population of active galaxies not present in previous radio/far-infrared samples. The radio-excess objects extend over the full range of far-infrared luminosities seen in extragalactic objects. Objects with small radio excesses are more likely to have far-infrared colors similar to starbursts, while objects with large radio excesses have far-infrared colors typical of pure AGN. Some of the most far-infrared luminous radio-excess objects have the highest far-infrared optical depths. These are good candidates to search for hidden broad line regions in polarized light or via near-infrared spectroscopy. Some low far-infrared luminosity radio-excess objects appear to derive a dominant fraction of their far-infrared emission from star formation, despite the dominance of the AGN at radio wavelengths. Many of the radio-excess objects have sizes likely to be smaller than the optical host, but show optically thin radio emission. We draw parallels between these objects and high radio luminosity Compact Steep-Spectrum (CSS) and GigaHertz Peaked-Spectrum (GPS) objects. Radio sources with these characteristics may be young AGN in which the radio activity has begun only recently. Alternatively, high central densities in the host galaxies may be confining the radio sources to compact sizes. We discuss future observations required to distinguish between these possibilities and determine the nature of radio-excess objects. Comment: Submitted to AJ. 44 pages, 11 figures. A version of the paper with higher quality figures is available from http://www.mso.anu.edu.au/~cdrake/PMNFSC/paperI

    The frequency of missed test results and associated treatment delays in a highly computerized health system

    Get PDF
    Background: Diagnostic errors associated with the failure to follow up on abnormal diagnostic studies ("missed results") are a potential cause of treatment delay and a threat to patient safety. Few data exist concerning the frequency of missed results and associated treatment delays within the Veterans Health Administration (VA). Objective: The primary objective of the current study was to assess the frequency of missed results and resulting treatment delays encountered by primary care providers in VA clinics. Methods: An anonymous online survey of primary care providers was conducted as part of the health system's ongoing quality improvement programs. We collected information from providers concerning their clinical effort (e.g., number of clinic sessions, number of patient visits per session), the number of patients with missed abnormal test results, and the number and types of treatment delays providers encountered during the two-week period prior to administration of our survey. Results: The survey was completed by 106 of 198 providers (54 percent response rate). Respondents saw an average of 86 patients per two-week period. Providers encountered 64 patients with missed results during the two-week period leading up to the study and 52 patients with treatment delays. The most common missed results included imaging studies (29 percent), clinical laboratory (22 percent), anatomic pathology (9 percent), and other (40 percent). The most common diagnostic delays involved cancer (34 percent), endocrine problems (26 percent), cardiac problems (16 percent), and others (24 percent). Conclusion: Missed results leading to clinically important treatment delays are an important and likely underappreciated source of diagnostic error.

    Differences in Treatment Patterns and Outcomes of Acute Myocardial Infarction for Low- and High-Income Patients in 6 Countries

    Get PDF
    IMPORTANCE: Differences in the organization and financing of health systems may produce more or less equitable outcomes for advantaged vs disadvantaged populations. We compared treatments and outcomes of older high- and low-income patients across 6 countries. OBJECTIVE: To determine whether treatment patterns and outcomes for patients presenting with acute myocardial infarction differ for low- vs high-income individuals across 6 countries. DESIGN, SETTING, AND PARTICIPANTS: Serial cross-sectional cohort study of all adults aged 66 years or older hospitalized with acute myocardial infarction from 2013 through 2018 in the US, Canada, England, the Netherlands, Taiwan, and Israel using population-representative administrative data. EXPOSURES: Being in the top and bottom quintile of income within and across countries. MAIN OUTCOMES AND MEASURES: Thirty-day and 1-year mortality; secondary outcomes included rates of cardiac catheterization and revascularization, length of stay, and readmission rates. RESULTS: We studied 289 376 patients hospitalized with ST-segment elevation myocardial infarction (STEMI) and 843 046 hospitalized with non-STEMI (NSTEMI). Adjusted 30-day mortality generally was 1 to 3 percentage points lower for high-income patients. For instance, 30-day mortality among patients admitted with STEMI in the Netherlands was 10.2% for those with high income vs 13.1% for those with low income (difference, -2.8 percentage points [95% CI, -4.1 to -1.5]). One-year mortality differences for STEMI were even larger than the 30-day differences, with the largest gap in Israel (16.2% vs 25.3%; difference, -9.1 percentage points [95% CI, -16.7 to -1.6]).
    In all countries, rates of cardiac catheterization and percutaneous coronary intervention were higher among high- vs low-income populations, with absolute differences ranging from 1 to 6 percentage points (eg, 73.6% vs 67.4%; difference, 6.1 percentage points [95% CI, 1.2 to 11.0] for percutaneous intervention in England for STEMI). Rates of coronary artery bypass graft surgery for patients with STEMI in low- vs high-income strata were similar, but for NSTEMI were generally 1 to 2 percentage points higher among high-income patients (eg, 12.5% vs 11.0% in the US; difference, 1.5 percentage points [95% CI, 1.3 to 1.8]). Thirty-day readmission rates generally also were 1 to 3 percentage points lower and hospital length of stay generally was 0.2 to 0.5 days shorter for high-income patients. CONCLUSIONS AND RELEVANCE: High-income individuals had substantially better survival, were more likely to receive lifesaving revascularization, and had shorter hospital lengths of stay and fewer readmissions across almost all countries. Our results suggest that income-based disparities were present even in countries with universal health insurance and robust social safety net systems.

    Implantable or External Defibrillators for Individuals at Increased Risk of Cardiac Arrest: Where Cost-Effectiveness Hits Fiscal Reality

    Get PDF
    Objectives: Implantable cardioverter defibrillators (ICDs) are highly effective at preventing cardiac arrest, but their availability is limited by high cost. Automated external defibrillators (AEDs) are likely to be less effective, but also less expensive. We used decision analysis to evaluate the clinical and economic trade-offs of AEDs, ICDs, and emergency medical services equipped with defibrillators (EMS-D) for reducing cardiac arrest mortality. Methods: A Markov model was developed to compare the cost-effectiveness of three strategies in adults meeting entry criteria for the MADIT II Trial: strategy 1, individuals experiencing cardiac arrest are treated by EMS-D; strategy 2, individuals experiencing cardiac arrest are treated with an in-home AED; and strategy 3, individuals receive a prophylactic ICD. The model was then used to quantify the aggregate societal benefit of these three strategies under the conditions of a constrained federal budget. Results: Compared with EMS-D, in-home AEDs produced a gain of 0.05 quality-adjusted life-years (QALYs) at an incremental cost of $5,225 ($104,500 per QALY), while ICDs produced a gain of 0.90 QALYs at a cost of $114,660 ($127,400 per QALY). For every $1 million spent on defibrillators, 1.7 additional QALYs are produced by purchasing AEDs (9.6 QALYs/$million) instead of ICDs (7.9 QALYs/$million). Results were most sensitive to defibrillator complication rates and effectiveness, defibrillator cost, and adults' risk of cardiac arrest. Conclusions: Both AEDs and ICDs reduce cardiac arrest mortality, but AEDs are significantly less expensive and less effective.
    If financial constraints were to lead to rationing of defibrillators, it might be preferable to provide more people with a less effective and less expensive intervention (in-home AEDs) instead of providing fewer people with a more effective and more costly intervention (ICDs). Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/74790/1/j.1524-4733.2006.00118.x.pd
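    The budget trade-off reported above follows directly from the incremental cost-effectiveness ratios; a minimal sketch of the arithmetic (the helper name is illustrative, not from the study):

    ```python
    # Reproduce the "QALYs per $1 million spent" comparison from the abstract.
    # qalys_per_million is an illustrative helper, not from the study's model.
    def qalys_per_million(cost_per_qaly):
        """QALYs gained per $1 million spent at a given cost-per-QALY ratio."""
        return 1_000_000 / cost_per_qaly

    aed = qalys_per_million(104_500)  # in-home AEDs: $104,500 per QALY
    icd = qalys_per_million(127_400)  # ICDs: $127,400 per QALY

    print(f"AEDs: {aed:.1f} QALYs per $1M")         # 9.6, matching the abstract
    print(f"ICDs: {icd:.2f} QALYs per $1M")         # 7.85 (reported as 7.9)
    print(f"AED advantage: {aed - icd:.1f} QALYs")  # 1.7, matching the abstract
    ```

    The small discrepancy for ICDs (7.85 vs the reported 7.9) presumably reflects rounding of the model's underlying outputs before the ratios were reported.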

    Physician Characteristics Associated With Ordering 4 Low-Value Screening Tests in Primary Care

    Get PDF
    Importance: Efforts to reduce low-value tests and treatments in primary care are often ineffective. These efforts typically target physicians broadly, most of whom order low-value care infrequently. Objectives: To measure physician-level use rates of 4 low-value screening tests in primary care to investigate the presence and characteristics of primary care physicians who frequently order low-value care. Design, Setting, and Participants: A retrospective cohort study was conducted using administrative health care claims collected between April 1, 2012, and March 31, 2016, in Ontario, Canada. This study measured use of 4 low-value screening tests-repeated dual-energy x-ray absorptiometry (DXA) scans, electrocardiograms (ECGs), Papanicolaou (Pap) tests, and chest radiographs (CXRs)-among low-risk outpatients rostered to a common cohort of primary care physicians. Exposures: Physician sex, years since medical school graduation, and primary care model. Main Outcomes and Measures: This study measured the number of tests for which a given physician ranked in the top quintile by ordering rate. The resulting cross-test score (range, 0-4) reflects a physician's propensity to order low-value care across screening tests. Physicians were then dichotomized into infrequent or isolated frequent users (score, 0 or 1, respectively) or generalized frequent users for 2 or more tests (score, ≥2). Results: The final sample consisted of 2394 primary care physicians (mean [SD] age, 51.3 [10.0] years; 50.2% female), who were predominantly Canadian medical school graduates (1701 [71.1%]), far removed from medical school graduation (median, 25.3 years; interquartile range, 17.3-32.3 years), and reimbursed via fee-for-service in a family health group (1130 [47.2%]).
They ordered 302 509 low-value screening tests (74 167 DXA scans, 179 855 ECGs, 19 906 Pap tests, and 28 581 CXRs) after 3 428 557 ordering opportunities. Within the cohort, generalized frequent users represented 18.4% (441 of 2394) of physicians but ordered 39.2% (118 665 of 302 509) of all low-value screening tests. Physicians who were male (odds ratio, 1.29; 95% CI, 1.01-1.64), further removed from medical school graduation (odds ratio, 1.03; 95% CI, 1.02-1.04), or in an enhanced fee-for-service payment model (family health group) vs a capitated payment model (family health team) (odds ratio, 2.04; 95% CI, 1.42-2.94) had increased odds of being generalized frequent users. Conclusions and Relevance: This study identified a group of primary care physicians who frequently ordered low-value screening tests. Tailoring future interventions to these generalized frequent users might be an effective approach to reducing low-value care.

    Cost-effectiveness of In-home Automated External Defibrillators for Individuals at Increased Risk of Sudden Cardiac Death

    Full text link
    In-home automated external defibrillators (AEDs) are increasingly recommended as a means of improving survival of cardiac arrests that occur at home. The current study was conducted to explore the relationship between individuals' risk of cardiac arrest and the cost-effectiveness of in-home AED deployment. Design: Markov decision model employing a societal perspective. Patients: Four hypothetical cohorts of American adults 60 years of age at progressively greater risk for sudden cardiac death (SCD): 1) all adults (annual probability of SCD 0.4%); 2) adults with multiple SCD risk factors (probability 2%); 3) adults with previous myocardial infarction (probability 4%); and 4) adults with ischemic cardiomyopathy unable to receive an implantable defibrillator (probability 6%). Intervention: Strategy 1: individuals suffering an in-home cardiac arrest were treated by emergency medical services equipped with AEDs (EMS-D). Strategy 2: individuals suffering an in-home cardiac arrest received initial treatment with an in-home AED, followed by EMS. Results: Assuming cardiac arrest survival rates of 15% with EMS-D and 30% with AEDs, the cost per quality-adjusted life-year (QALY) gained of providing in-home AEDs to all adults 60 years of age is $216,000. Costs of providing in-home AEDs to adults with multiple risk factors (2% probability of SCD), previous myocardial infarction (4% probability), and ischemic cardiomyopathy (6% probability) are $132,000, $104,000, and $88,000 per QALY, respectively. Conclusions: The cost-effectiveness of in-home AEDs is intimately linked to individuals' risk of SCD. However, providing in-home AEDs to all adults over age 60 appears relatively expensive. Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/72168/1/j.1525-1497.2005.40247.x.pd

    Hospital mortality is associated with ICU admission time

    Get PDF
    Previous studies have shown that patients admitted to the intensive care unit (ICU) after "office hours" are more likely to die. However, these results have been challenged by numerous other studies. We therefore analysed this possible relationship between ICU admission time and in-hospital mortality in The Netherlands. This article relates time of ICU admission to hospital mortality for all patients who were included in the Dutch national ICU registry (National Intensive Care Evaluation, NICE) from 2002 to 2008. We defined office hours as 08:00-22:00 hours during weekdays and 09:00-18:00 hours during weekend days. The weekend was defined as from Saturday 00:00 hours until Sunday 24:00 hours. We corrected hospital mortality for illness severity at admission using the Acute Physiology and Chronic Health Evaluation II (APACHE II) score, reason for admission, admission type, age and gender. A total of 149,894 patients were included in this analysis. The relative risk (RR) for mortality outside office hours was 1.059 (1.031-1.088). Mortality varied with time but was consistently higher than expected during "off hours" and lower during office hours. There was no significant difference in mortality between the weekdays Monday to Thursday, but mortality increased slightly on Friday (RR 1.046; 1.001-1.092). During the weekend the RR was 1.103 (1.071-1.136) in comparison with the rest of the week. Hospital mortality in The Netherlands appears to be increased outside office hours and during the weekends, even when corrected for illness severity at admission. However, incomplete adjustment for certain confounders might still play an important role. Further research is needed to fully explain this difference.
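    The study's definition of office hours amounts to a simple time-window classification; a minimal sketch, assuming inclusive start and exclusive end boundaries (the abstract does not specify how boundary instants were handled, and the function name is illustrative):

    ```python
    from datetime import datetime

    def is_office_hours(dt: datetime) -> bool:
        """Classify an ICU admission time per the study's definition:
        08:00-22:00 on weekdays, 09:00-18:00 on weekend days.
        Boundary handling (start inclusive, end exclusive) is an assumption."""
        weekend = dt.weekday() >= 5  # Saturday = 5, Sunday = 6
        if weekend:
            return 9 <= dt.hour < 18
        return 8 <= dt.hour < 22

    print(is_office_hours(datetime(2008, 3, 3, 7, 30)))   # Monday 07:30 -> False
    print(is_office_hours(datetime(2008, 3, 8, 10, 0)))   # Saturday 10:00 -> True
    ```

    Grouping admissions with a classifier like this, then comparing severity-adjusted mortality between the two groups, mirrors the comparison the study reports.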