Inadequate prenatal care and its association with adverse pregnancy outcomes: A comparison of indices
Abstract

Background
The objectives of this study were to determine rates of prenatal care utilization in Winnipeg, Manitoba, Canada from 1991 to 2000; to compare two indices of prenatal care utilization in identifying the proportion of the population receiving inadequate prenatal care; to determine the association between inadequate prenatal care and adverse pregnancy outcomes (preterm birth, low birth weight [LBW], and small-for-gestational age [SGA]), using each of the indices; and to assess whether, and to what extent, gestational age modifies this association.

Methods
We conducted a population-based study of women having a hospital-based singleton live birth from 1991 to 2000 (N = 80,989). Data sources consisted of a linked mother-baby database and a physician claims file maintained by Manitoba Health. Rates of inadequate prenatal care were calculated using two indices, the R-GINDEX and the APNCU. Logistic regression analysis was used to determine the association between inadequate prenatal care and adverse pregnancy outcomes. Stratified analysis was then used to determine whether the association between inadequate prenatal care and LBW or SGA differed by gestational age.

Results
Rates of inadequate/no prenatal care ranged from 8.3% using the APNCU to 8.9% using the R-GINDEX. The association between inadequate prenatal care and preterm birth and LBW varied depending on the index used, with adjusted odds ratios (AOR) ranging from 1.0 to 1.3. In contrast, both indices revealed the same strength of association between inadequate prenatal care and SGA (AOR 1.4). Both indices demonstrated heterogeneity (non-uniformity) across gestational age strata, indicating the presence of effect modification by gestational age.

Conclusion
Selection of a prenatal care utilization index requires careful consideration of its methodological underpinnings and limitations. The two indices compared in this study revealed different patterns of prenatal care utilization and should not be used interchangeably. Use of these indices to study the association between prenatal care utilization and pregnancy outcomes affected by the duration of pregnancy should be approached cautiously.
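The regression and stratification steps described in the Methods can be illustrated with a short sketch. This is a minimal example only, assuming hypothetical column names (inadequate_care, preterm, lbw, gest_age_stratum) and a simplified covariate set; the study's actual variables and adjustment strategy are not detailed in the abstract.

```python
# Minimal sketch of the logistic-regression analysis described above.
# All column names and covariates are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("births.csv")  # hypothetical extract of the linked mother-baby file

# Adjusted odds ratio for inadequate prenatal care vs. preterm birth
model = smf.logit("preterm ~ inadequate_care + maternal_age + parity", data=df).fit(disp=0)
aor = np.exp(model.params["inadequate_care"])
ci_low, ci_high = np.exp(model.conf_int().loc["inadequate_care"])
print(f"AOR = {aor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")

# Stratified analysis: does the care-LBW association differ across gestational age strata?
for stratum, sub in df.groupby("gest_age_stratum"):
    m = smf.logit("lbw ~ inadequate_care + maternal_age + parity", data=sub).fit(disp=0)
    print(stratum, f"OR = {np.exp(m.params['inadequate_care']):.2f}")
```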
Children's vomiting following posterior fossa surgery: A retrospective study
Abstract

Background
Nausea and vomiting is a problem for children after neurosurgery, and those requiring posterior fossa procedures appear to have a high incidence. This clinical observation has not been quantified, nor have risk factors unique to this group of children been elucidated.

Methods
A six-year retrospective chart audit at two Canadian children's hospitals was conducted. The incidence of nausea and vomiting was extracted. Hierarchical multivariable logistic regression was used to quantify risk and protective factors for vomiting by 120 hours after surgery and for early versus late vomiting.

Results
The incidence of vomiting over a ten-day postoperative period was 76.7%. Documented vomiting ranged from single events to greater than 20 over the same period. In the final multivariable model, adolescents (age 12 to <17) were less likely to vomit by 120 hours after surgery than other age groups; those who received desflurane, when compared to all other volatile anesthetics, were more likely to vomit, yet the use of ondansetron with desflurane decreased this likelihood. Children who had intraoperative ondansetron were more likely to vomit in the final multivariable model (perhaps because of its use, in the clinical judgment of the anesthesiologist, for children considered at risk). Children who started vomiting in the first 24 hours were more likely to be school age (groups 4 to <7 and 7 to <12) and to have received desflurane. Nausea was not well documented and was therefore not analyzed.

Conclusion
The incidence of vomiting in children after posterior fossa surgery is sufficient to consider all children requiring these procedures to be at high risk for postoperative vomiting (POV). Nausea requires better assessment and documentation.
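One plausible reading of "hierarchical multivariable logistic regression" is blockwise entry of predictor blocks; the sketch below illustrates that approach only, under assumed variable names (vomit_120h, age_group, desflurane, ondansetron) and an assumed block structure, none of which come from the abstract.

```python
# Sketch of a hierarchical (blockwise-entry) logistic regression, one plausible
# reading of the analysis above. Variable names and block composition are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

charts = pd.read_csv("chart_audit.csv")  # hypothetical chart-audit extract

block1 = "vomit_120h ~ age_group"                  # block 1: patient factors
block2 = block1 + " + desflurane + ondansetron"    # block 2: add anesthetic factors

m1 = smf.logit(block1, data=charts).fit(disp=0)
m2 = smf.logit(block2, data=charts).fit(disp=0)

# Likelihood-ratio test: do the anesthetic factors improve the model?
lr = 2 * (m2.llf - m1.llf)
df_diff = m2.df_model - m1.df_model
print("LR test p =", stats.chi2.sf(lr, df_diff))
print(m2.summary())
```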
Validation of the conceptual research utilization scale: an application of the standards for educational and psychological testing in healthcare
Abstract

Background
There is a lack of acceptable, reliable, and valid survey instruments to measure conceptual research utilization (CRU). In this study, we investigated the psychometric properties of a newly developed scale (the CRU Scale).

Methods
We used the Standards for Educational and Psychological Testing as a validation framework to assess four sources of validity evidence: content, response processes, internal structure, and relations to other variables. A panel of nine international research utilization experts performed a formal content validity assessment. To determine response process validity, we conducted a series of one-on-one scale administration sessions with 10 healthcare aides. Internal structure and relations-to-other-variables validity were examined using CRU Scale response data from a sample of 707 healthcare aides working in 30 urban Canadian nursing homes. Principal components analysis and confirmatory factor analyses were conducted to determine internal structure. Relations to other variables were examined using: (1) bivariate correlations; (2) change in mean values of CRU with increasing levels of other kinds of research utilization; and (3) multivariate linear regression.

Results
Content validity index scores for the five items ranged from 0.55 to 1.00. The principal components analysis suggested a 5-item, 1-factor model. This was inconsistent with the findings from the confirmatory factor analysis, which showed best fit for a 4-item, 1-factor model. Bivariate associations between CRU and other kinds of research utilization were statistically significant (p < 0.01) for the latent CRU scale score and all five CRU items. The CRU scale score was also a significant predictor of overall research utilization in multivariate linear regression.

Conclusions
The CRU Scale showed acceptable initial psychometric properties with respect to responses from healthcare aides in nursing homes. Based on our validity, reliability, and acceptability analyses, we recommend using a reduced (four-item) version of the CRU Scale to yield sound assessments of CRU by healthcare aides. Refinement to the wording of one item is also needed. Planned future research will include: latent scale scoring, identification of variables that predict and are outcomes of conceptual research use, and longitudinal work to determine CRU Scale sensitivity to change.
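Two of the analyses named above, the item-level content validity index from the expert panel and the principal components analysis of the five items, can be sketched as follows. The data are simulated and the item names are hypothetical; this illustrates the techniques rather than reproducing the study's code.

```python
# Sketch: item-level content validity index (I-CVI) and principal components analysis.
# Ratings and item responses are simulated placeholders.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
items = [f"cru_item{i}" for i in range(1, 6)]

# Expert relevance ratings on a 1-4 scale (rows = 9 experts, cols = 5 items)
ratings = pd.DataFrame(rng.integers(1, 5, size=(9, 5)), columns=items)

# I-CVI: proportion of experts rating each item 3 or 4 ("relevant")
i_cvi = (ratings >= 3).mean(axis=0)
print(i_cvi)

# Principal components of simulated 5-point item responses from 707 respondents
responses = pd.DataFrame(rng.integers(1, 6, size=(707, 5)), columns=items)
pca = PCA().fit(responses)
print("explained variance ratio:", pca.explained_variance_ratio_)
```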
A systematic review of the psychometric properties of self-report research utilization measures used in healthcare
Abstract

Background
In healthcare, a gap exists between what is known from research and what is practiced. Understanding this gap depends upon our ability to robustly measure research utilization.

Objectives
The objectives of this systematic review were: to identify self-report measures of research utilization used in healthcare, and to assess the psychometric properties (acceptability, reliability, and validity) of these measures.

Methods
We conducted a systematic review of literature reporting use or development of self-report research utilization measures. Our search included: multiple databases, ancestry searches, and a hand search. Acceptability was assessed by examining time to complete the measure and missing data rates. Our approach to reliability and validity assessment followed that outlined in the Standards for Educational and Psychological Testing.

Results
Of 42,770 titles screened, 97 original studies (108 articles) were included in this review. The 97 studies reported on the use or development of 60 unique self-report research utilization measures. Seven of the measures were assessed in more than one study. Study samples consisted of healthcare providers (92 studies) and healthcare decision makers (5 studies). No studies reported data on acceptability of the measures. Reliability was reported in 32 (33%) of the studies, representing 13 of the 60 measures. Internal consistency (Cronbach's alpha) reliability was reported in 31 studies; values exceeded 0.70 in 29 studies. Test-retest reliability was reported in 3 studies, with Pearson's r coefficients > 0.80. No validity information was reported for 12 of the 60 measures. The remaining 48 measures were classified into a three-level validity hierarchy according to the number of validity sources reported in 50% or more of the studies using the measure. Level one measures (n = 6) reported evidence from any three (out of four possible) Standards validity sources (which, in the case of single-item measures, was all applicable validity sources). Level two measures (n = 16) had evidence from any two validity sources, and level three measures (n = 26) from only one validity source.

Conclusions
This review reveals significant underdevelopment in the measurement of research utilization. Substantial methodological advances with respect to construct clarity, use of research utilization and related theory, use of measurement theory, and psychometric assessment are required. Also needed are improved reporting practices and the adoption of a more contemporary view of validity (i.e., the Standards) in future research utilization measurement studies.
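The two reliability statistics the review extracted, internal consistency (Cronbach's alpha) and test-retest correlation, can be computed as in the minimal sketch below, using simulated responses in place of data from a real measure.

```python
# Sketch: Cronbach's alpha and a test-retest Pearson correlation on simulated data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(200, 10)).astype(float)  # 200 respondents, 10 items

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

print("Cronbach's alpha:", round(cronbach_alpha(items), 2))

# Test-retest reliability: correlate total scores from two administrations
time1 = items.sum(axis=1)
time2 = time1 + rng.normal(0, 2, size=time1.shape)  # simulated retest scores
r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```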