
    The Resident Assessment Instrument-Minimum Data Set 2.0 quality indicators: a systematic review

    Background: The Resident Assessment Instrument-Minimum Data Set (RAI-MDS) 2.0 is designed to collect the minimum amount of data to guide care planning and monitoring for residents in long-term care settings. These data have been used to compute indicators of care quality. Use of the quality indicators to inform quality improvement initiatives is contingent upon the validity and reliability of the indicators. The purpose of this review was to systematically examine published and grey research reports in order to assess the state of the science regarding the validity and reliability of the RAI-MDS 2.0 Quality Indicators (QIs).

    Methods: We systematically reviewed the evidence for the validity and reliability of the RAI-MDS 2.0 QIs. A comprehensive literature search identified relevant original research published, in English, prior to December 2008. Fourteen articles and one report examining the validity and/or reliability of the RAI-MDS 2.0 QIs were included.

    Results: The studies fell into two broad categories: those that examined individual quality indicators and those that examined multiple indicators. All studies were conducted in the United States and included from one to 209 facilities. The number of residents included in the studies ranged from 109 to 5,758. One study conducted under research conditions examined 38 chronic care QIs and found strong evidence for the validity of 12 of them; in response to these findings, the 12 QIs were recommended for public reporting purposes. However, a number of observational studies (n=13), conducted in "real world" conditions, have tested the validity and/or reliability of individual QIs, with mixed results. Ten QIs have been studied in this manner: falls, depression, depression without treatment, urinary incontinence, urinary tract infections, weight loss, bedfast, restraint, pressure ulcer, and pain. These studies have revealed the potential for systematic bias in reporting, with under-reporting of some indicators and over-reporting of others.

    Conclusion: Evidence for the reliability and validity of the RAI-MDS QIs remains inconclusive. The QIs provide a useful tool for quality monitoring and for informing quality improvement programs and initiatives. However, caution should be exercised when interpreting QI results, and other sources of evidence about the quality of care processes should be considered alongside them.

    CR1 Knops blood group alleles are not associated with severe malaria in the Gambia

    The Knops blood group antigen erythrocyte polymorphisms have been associated with reduced in vitro rosette formation in falciparum malaria (a putative malaria virulence factor). Having previously identified single-nucleotide polymorphisms (SNPs) in the human complement receptor 1 (CR1/CD35) gene underlying the Knops antithetical antigens Sl1/Sl2 and McC(a)/McC(b), we have now performed genotype comparisons to test associations between these two molecular variants and severe malaria in West African children living in the Gambia. While SNPs associated with Sl:2 and McC(b+) were equally distributed among malaria-infected children with severe malaria and control children not infected with malaria parasites, high allele frequencies for Sl2 (0.800, 1,365/1,706) and McC(b) (0.385, 658/1,706) were observed. Further, when compared to the Sl1/McC(a) allele observed in all populations, the African Sl2/McC(b) allele appears to have evolved as a result of positive selection (modified Nei-Gojobori test, (Ka-Ks)/s.e. = 1.77, P < 0.05). Given the role of CR1 in host defense, our findings suggest that Sl2 and McC(b) have arisen to confer a selective advantage against infectious disease that, in view of these case-control study data, was not solely Plasmodium falciparum malaria. Factors underlying the lack of association between Sl2 and McC(b) and severe malaria may involve variation in CR1 expression levels.
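
    The selection test reported above is a z-type statistic, so the quoted P-value can be sanity-checked against a standard normal distribution. The snippet below is a minimal sketch of that arithmetic using only the numbers in the abstract; it assumes a one-tailed normal approximation and is not the authors' analysis code.

        # Sanity-check sketch using only figures quoted in the abstract.
        # Assumes the modified Nei-Gojobori statistic (Ka - Ks)/s.e. is
        # referred to a standard normal distribution, one-tailed.
        from scipy.stats import norm

        sl2_freq = 1365 / 1706     # Sl2 allele frequency -> 0.800
        mccb_freq = 658 / 1706     # McC(b) allele frequency -> ~0.39

        z = 1.77                   # reported (Ka - Ks)/s.e.
        p_one_tailed = norm.sf(z)  # ~0.038, consistent with P < 0.05

        print(f"Sl2 {sl2_freq:.3f}, McC(b) {mccb_freq:.3f}, P {p_one_tailed:.3f}")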

    Azacitidine prolongs overall survival and reduces infections and hospitalizations in patients with WHO-defined acute myeloid leukaemia compared with conventional care regimens: an update

    Azacitidine (AZA), as demonstrated in the phase III AZA-001 trial, is the first MDS treatment to significantly prolong overall survival (OS) in higher-risk MDS patients (pts) (Blood 2007;110:817). Approximately one-third of the pts enrolled in AZA-001 were FAB RAEB-T (≥20–30% blasts) and now meet the WHO criteria for acute myeloid leukaemia (AML) (Blood 1999;17:3835). Considering the poor prognosis (median survival <1 year) and the poor response to chemotherapy in these pts, this sub-group analysis evaluated the effects of AZA versus conventional care regimens (CCR) on OS and on response rates in pts with WHO-defined AML.

    Efficacy, safety and tolerability of escitalopram in doses up to 50 mg in Major Depressive Disorder (MDD): an open-label, pilot study

    Background: Escitalopram is licensed for use at doses up to 20 mg but is used clinically at higher doses. There are limited published data at higher doses and none in the treatment of Major Depressive Disorder (MDD).

    Methods: This open-label, pilot study was designed to investigate the efficacy, safety and tolerability of escitalopram in doses up to 50 mg in MDD. It was conducted in 60 primary care patients with MDD who had not responded to adequate treatment with citalopram. Patients were treated with escalating doses of escitalopram up to 50 mg for up to 32 weeks until they achieved remission (Montgomery-Asberg Depression Rating Scale [MADRS] ≤8) or failed to tolerate the dose.

    Results: Forty-two patients (70%) completed the study. Twenty-one patients (35%) achieved remission, with 8 of the 21 (38%) needing the 50 mg dose to achieve remission. Median time to remission was 24 weeks and the median dose in remission was 30 mg. No significant safety issues were identified, although tolerability appeared to decline above a dose of 40 mg, with 26% of patients unable to tolerate 50 mg. Twelve (20%) patients had adverse events leading to discontinuation. The most common adverse events were headache (35%), nausea, diarrhoea and nasopharyngitis (all 25%). Minor mean weight gain was found during the study, which did not appear to be dose-related. Half of the patients who completed the study chose to continue treatment with escitalopram rather than taper down the dose at 32 weeks.

    Conclusions: Dose escalation with escitalopram above 20 mg may have a useful role in the management of patients with MDD, although further studies are needed to confirm this finding.

    Trial Registration: ClinicalTrials.gov NCT00785434 (http://www.clinicaltrials.gov/ct2/show/NCT00785434)

    CD28/B7-Mediated Co-stimulation Is Critical for Early Control of Murine Cytomegalovirus Infection

    Control of acute murine cytomegalovirus (MCMV) infection is dependent upon both innate and adaptive immune responses, relying primarily upon natural killer (NK) and T-cell responses for control. Although CD28/B7 plays a clear role in T-cell responses in many antigen systems, including some viral infections, the importance of co-stimulation during MCMV infection is unconfirmed. In addition, recent data suggest that CD28/B7 co-stimulation might also be important for Ly49H+ NK-cell expansion. We therefore hypothesized that CD28/B7 co-stimulation is critical to viral control after MCMV infection, and further that CD28/B7 co-stimulation plays a role in MCMV-specific T- and NK-cell responses. To test these hypotheses, we utilized C57BL/6 mice lacking the co-stimulatory molecules B7-1 and B7-2 or CD28. After primary infection with MCMV, viral titers are significantly elevated in mice lacking CD28 or B7 compared with wild-type mice. Impaired viral control is associated with significant defects in peripheral T-cell responses to MCMV, which appear to be dependent upon CD28/B7 co-stimulation. Abnormal hepatic T-cell responses in CD28-deficient mice are preceded by impaired MCMV-specific Ly49H+ NK-cell responses. Cytokine evaluations confirm that CD28/B7 co-stimulation is not required for non-specific antiviral responses. We conclude that CD28-mediated co-stimulation is critical for early viral control during acute MCMV infection.

    External Evaluation of a Gentamicin Infant Population Pharmacokinetic Model Using Data from a National Electronic Health Record Database

    Gentamicin is a common antibiotic used in neonates and infants. A recently published population pharmacokinetic (PK) model was developed using data from multiple studies, and the objective of our analyses was to evaluate the feasibility of using a national electronic health record (EHR) database to further externally evaluate this model. Our results suggest that, with proper data capture procedures, EHR data can serve as a potential data source for external evaluation of PK models.

    Prognostic value of strain by feature-tracking cardiac magnetic resonance in arrhythmogenic right ventricular cardiomyopathy

    AIMS: Arrhythmogenic right ventricular cardiomyopathy (ARVC) is characterized by ventricular dysfunction and ventricular arrhythmias (VA). Adequate arrhythmic risk assessment is important to prevent sudden cardiac death. We aimed to study the incremental value of strain by feature-tracking cardiac magnetic resonance imaging (FT-CMR) in predicting sustained VA in ARVC patients.

    METHODS AND RESULTS: CMR images of 132 ARVC patients (43% male, 40.6 ± 16.0 years) without prior VA were analysed for global and regional right and left ventricular (RV, LV) strain. The primary outcome was sustained VA during follow-up. We performed multivariable regression assessing strain in combination with (i) RV ejection fraction (EF); (ii) LVEF; and (iii) the ARVC risk calculator. False discovery rate-adjusted P-values were used to correct for multiple comparisons, and c-statistics were calculated for each model. During 4.3 (2.0-7.9) years of follow-up, 19% of patients experienced sustained VA. Compared to patients without VA, those with VA had significantly reduced RV longitudinal (P ≤ 0.03) and LV circumferential (P ≤ 0.04) strain. In addition, patients with VA had significantly reduced biventricular EF (P ≤ 0.02). After correcting for RVEF, LVEF, and the ARVC risk calculator separately in multivariable analysis, both RV and LV strain lost their significance (hazard ratio 1.03-1.18, P > 0.05). Likewise, while strain improved the c-statistic in combination with RVEF, LVEF, and the ARVC risk calculator separately, this did not reach statistical significance (P ≥ 0.18).

    CONCLUSION: Both RV longitudinal and LV circumferential strain are reduced in ARVC patients who experience sustained VA during follow-up. However, strain does not have incremental value over RVEF, LVEF, and the ARVC VA risk calculator.
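
    To illustrate the kind of incremental-value check described above, the sketch below compares the concordance (c-statistic) of a Cox model containing only RVEF with one that adds an RV strain covariate. It is an illustrative example on synthetic data, not the study's analysis; the column names and the use of the lifelines library are assumptions.

        # Illustrative sketch on synthetic data: does adding a strain covariate
        # improve the c-statistic of a Cox model that already contains RVEF?
        # Not the authors' code; column names are hypothetical.
        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n = 132
        df = pd.DataFrame({
            "time": rng.exponential(5.0, n),        # synthetic follow-up (years)
            "va_event": rng.integers(0, 2, n),      # synthetic sustained-VA indicator
            "rvef": rng.normal(45, 8, n),           # synthetic RV ejection fraction (%)
            "rv_strain": rng.normal(-20, 4, n),     # synthetic RV longitudinal strain (%)
        })

        base = CoxPHFitter().fit(df[["time", "va_event", "rvef"]],
                                 duration_col="time", event_col="va_event")
        full = CoxPHFitter().fit(df, duration_col="time", event_col="va_event")

        print("c-index, RVEF only:    ", round(base.concordance_index_, 3))
        print("c-index, RVEF + strain:", round(full.concordance_index_, 3))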

    A predictive model for the early identification of patients at risk for a prolonged intensive care unit length of stay

    Background: Patients with a prolonged intensive care unit (ICU) length of stay account for a disproportionate amount of resource use. Early identification of patients at risk for a prolonged length of stay can lead to quality enhancements that reduce ICU stay. This study developed and validated a model that identifies patients at risk for a prolonged ICU stay.

    Methods: We performed a retrospective cohort study of 343,555 admissions to 83 ICUs in 31 U.S. hospitals from 2002-2007. We examined the distribution of ICU length of stay to identify a threshold where clinicians might be concerned about a prolonged stay; this resulted in choosing a 5-day cut-point. From patients remaining in the ICU on day 5 we developed a multivariable regression model that predicted remaining ICU stay. Predictor variables included information gathered at admission, on day 1, and on ICU day 5. Data from 12,640 admissions during 2002-2005 were used to develop the model, and the remaining 12,904 admissions were used to internally validate it. Finally, we used data on 11,903 admissions during 2006-2007 to externally validate the model.

    Results: The variables that had the greatest impact on remaining ICU length of stay were those measured on day 5, not at admission or during day 1. Mechanical ventilation, PaO2:FiO2 ratio, other physiologic components, and sedation on day 5 accounted for 81.6% of the variation in predicted remaining ICU stay. In the external validation set the observed ICU stay was 11.99 days and the predicted total ICU stay (5 days + day 5 predicted remaining stay) was 11.62 days, a difference of 8.7 hours. For the same patients, the difference between mean observed and mean predicted ICU stay using the APACHE day 1 model was 149.3 hours. The new model's r² was 20.2% across individuals and 44.3% across units.

    Conclusions: A model that uses patient data from ICU days 1 and 5 accurately predicts a prolonged ICU stay. These predictions are more accurate than those based on ICU day 1 data alone. The model can be used to benchmark ICU performance and to alert physicians to explore care alternatives aimed at reducing ICU stay.
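
    As a rough illustration of the validation logic described above (predicted total stay = 5 days + predicted remaining stay on day 5, compared with observed stay at the patient and ICU level), the following sketch runs that comparison on synthetic data. It is not the published APACHE-based model; all column names and values are made up.

        # Rough sketch on synthetic data of the validation comparison:
        # predicted total ICU stay = 5 days + predicted remaining stay on day 5,
        # compared with observed stay per patient and per ICU. Hypothetical names.
        import numpy as np
        import pandas as pd
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(2)
        n = 1000
        pred_remaining = rng.gamma(2.0, 3.0, n)                 # synthetic day-5 predictions
        observed = np.clip(5 + pred_remaining + rng.normal(0, 4, n), 5, None)

        df = pd.DataFrame({
            "icu_id": rng.integers(0, 20, n),
            "observed_los": observed,
            "pred_total_los": 5 + pred_remaining,
        })

        # Patient-level agreement
        print(f"mean observed  : {df['observed_los'].mean():.2f}")
        print(f"mean predicted : {df['pred_total_los'].mean():.2f}")
        print(f"r2 (patients)  : {r2_score(df['observed_los'], df['pred_total_los']):.3f}")

        # Unit-level agreement (per-ICU means), analogous to benchmarking use
        by_unit = df.groupby("icu_id")[["observed_los", "pred_total_los"]].mean()
        print(f"r2 (units)     : {r2_score(by_unit['observed_los'], by_unit['pred_total_los']):.3f}")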

    Temporal and Geographic variation in the validity and internal consistency of the Nursing Home Resident Assessment Minimum Data Set 2.0

    Background: The Minimum Data Set (MDS) for nursing home resident assessment has been required in all U.S. nursing homes since 1990 and has been universally computerized since 1998. Initially intended to structure clinical care planning, uses of the MDS expanded to include policy applications such as case-mix reimbursement, quality monitoring and research. The purpose of this paper is to summarize a series of analyses examining the internal consistency and predictive validity of the MDS data as used in the "real world" in all U.S. nursing homes between 1999 and 2007.

    Methods: We used person-level linked MDS and Medicare denominator and all institutional claim files, including inpatient (hospital and skilled nursing facility) claims, for all Medicare fee-for-service beneficiaries entering U.S. nursing homes during the period 1999 to 2007. We calculated the sensitivity and positive predictive value (PPV) of diagnoses taken from Medicare hospital claims and from the MDS among all new admissions from hospitals to nursing homes, and the internal consistency (alpha reliability) of pairs of items within the MDS that logically should be related. We also tested the internal consistency of commonly used MDS-based multi-item scales and examined the predictive validity of an MDS-based severity measure with respect to one-year survival. Finally, we examined the correspondence of the MDS discharge record to hospitalizations and deaths seen in Medicare claims, and the completeness of MDS assessments upon skilled nursing facility (SNF) admission.

    Results: Each year there were some 800,000 new admissions directly from hospital to U.S. nursing homes and some 900,000 uninterrupted SNF stays. Comparing Medicare enrollment records and claims with MDS records revealed reasonably good correspondence that improved over time (by 2006 only 3% of deaths had no MDS discharge record and only 5% of SNF stays had no MDS, but over 20% of MDS discharges indicating hospitalization had no associated Medicare claim). The PPV and sensitivity levels of Medicare hospital diagnoses and MDS-based diagnoses were between .6 and .7 for major diagnoses like CHF, hypertension and diabetes. Internal consistency, as measured by PPV, of the MDS ADL items with other MDS items measuring impairments and symptoms exceeded .9. The Activities of Daily Living (ADL) long-form summary scale achieved an alpha internal consistency level exceeding .85, and multi-item scale alpha levels of .65 were achieved for well-being and mood, and .55 for behavior, levels that were sustained even after stratification by ADL and cognition. The Changes in Health, End-stage disease and Symptoms and Signs (CHESS) index, a summary measure of frailty, was highly predictive of one-year survival.

    Conclusion: The MDS demonstrates a reasonable level of consistency both in terms of how well MDS diagnoses correspond to hospital discharge diagnoses and in terms of the internal consistency of functioning and behavioral items. The level of alpha reliability and validity demonstrated by the scales suggests that the data can be useful for research and policy analysis. However, while improving, the MDS discharge tracking record should still not be used to indicate Medicare hospitalizations or mortality. It will be important to monitor the performance of the MDS 3.0 with respect to consistency, reliability and validity now that it has replaced version 2.0, using these results as a baseline that should be exceeded.
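
    The sketch below illustrates, on synthetic data, the two kinds of checks this paper relies on: sensitivity and PPV of an MDS-recorded diagnosis against a claims reference, and Cronbach's alpha for a multi-item scale. It is not the authors' analysis code, and all variable names and prevalence figures are assumptions.

        # Illustrative sketch on synthetic data, not the authors' analysis code.
        # Part 1: sensitivity and PPV of an MDS-recorded diagnosis vs. a claims reference.
        # Part 2: Cronbach's alpha for a multi-item scale. All names/values hypothetical.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(3)
        n = 5000

        claims = rng.random(n) < 0.20                    # diagnosis present in hospital claims
        mds = np.where(claims, rng.random(n) < 0.70,     # imperfectly captured on the MDS
                               rng.random(n) < 0.05)

        tp = np.sum(mds & claims)
        sensitivity = tp / claims.sum()                  # claims cases also recorded on the MDS
        ppv = tp / mds.sum()                             # MDS cases confirmed in claims
        print(f"sensitivity {sensitivity:.2f}, PPV {ppv:.2f}")

        def cronbach_alpha(items: pd.DataFrame) -> float:
            """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_var / total_var)

        # Synthetic, deliberately correlated 0-4 ADL-style items
        core = rng.integers(0, 5, n)
        adl = pd.DataFrame({f"adl_{i}": np.clip(core + rng.integers(-1, 2, n), 0, 4)
                            for i in range(7)})
        print(f"Cronbach's alpha {cronbach_alpha(adl):.2f}")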

    Validation of the Chinese version of the "Mood Disorder Questionnaire" for screening bipolar disorder among patients with a current depressive episode

    Background: The Mood Disorder Questionnaire (MDQ) is a well-recognized screening tool for bipolar disorder, but its Chinese version needs further validation. This study aims to measure the accuracy of the Chinese version of the MDQ as a screening instrument for bipolar disorder (BPD) in a group of patients with a current major depressive episode.

    Methods: 142 consecutive patients with an initial DSM-IV-TR diagnosis of a major depressive episode were screened for BPD using the Chinese translation of the MDQ and followed up for one year. The final diagnosis, determined by a special committee consisting of three trained senior psychiatrists, was used as the 'gold standard', and a ROC curve was plotted to evaluate the performance of the MDQ. The optimal cut-off was chosen by maximizing Youden's index.

    Results: Of the 142 patients, 122 (85.9%) finished the one-year follow-up. On the basis of a semi-structured clinical interview, 48.4% (59/122) received a diagnosis of unipolar depression (UPD), 36.9% (45/122) BPDII and 14.8% (18/122) BPDI. At the end of the one-year follow-up, 9 patients moved from UPD to BPD, 2 from BPDII to UPD and 1 from BPDII to BPDI; the overall rate of initial misdiagnosis was 16.4%. The MDQ showed good accuracy for BPD: the optimal cut-off was 4, with a sensitivity of 0.72 and a specificity of 0.73. When BPDII and BPDI were considered separately, the optimal cut-off for BPDII was 4, with a sensitivity of 0.70 and a specificity of 0.73, while the optimal cut-off for BPDI was 5, with a sensitivity of 0.67 and a specificity of 0.86.

    Conclusions: Our results show that the Chinese version of the MDQ is a valid tool for screening BPD in a group of patients with a current depressive episode on the Chinese mainland.
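
    The cut-off selection described above maximizes Youden's index (sensitivity + specificity - 1) over the ROC curve. The sketch below shows that calculation on synthetic data; it is not the study's code, and the score distribution, library choice (scikit-learn) and variable names are assumptions.

        # Sketch of choosing a screening cut-off by maximizing Youden's index
        # (J = sensitivity + specificity - 1 = TPR - FPR) over the ROC curve.
        # Synthetic data; not the study's code.
        import numpy as np
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(4)
        n = 122
        bipolar = rng.integers(0, 2, n)                                   # 1 = final bipolar diagnosis
        mdq_score = np.clip(bipolar * 3 + rng.integers(0, 11, n), 0, 13)  # synthetic MDQ totals

        fpr, tpr, thresholds = roc_curve(bipolar, mdq_score)
        j = tpr - fpr
        best = int(np.argmax(j))
        print(f"optimal cut-off {thresholds[best]:g}, "
              f"sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")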