14 research outputs found

    An exposure prevention rating method for intervention needs assessment and effectiveness evaluation

    This article describes a new method for (1) systematically prioritizing needs for intervention on hazardous substance exposures in manufacturing work sites, and (2) evaluating intervention effectiveness. We developed a checklist containing six unique sets of yes/no variables organized in a 2 × 3 matrix of exposure potential versus protection (two columns) at the levels of materials, processes, and human interface (three rows). The three levels correspond to a simplified hierarchy of controls. Each of the six sets of indicator variables was reduced to a high/moderate/low rating. Ratings from the matrix were then combined to generate a single overall exposure prevention rating for each area. Reflecting the hierarchy of controls, material factors were weighted highest, followed by process, and then human interface. The checklist was filled out by an industrial hygienist while conducting a walk-through inspection (N = 131 manufacturing processes/areas in 17 large work sites). One area or process per manufacturing department was assessed and rated. Based on the resulting Exposure Prevention ratings, we concluded that exposures were well controlled in the majority of areas assessed (64%, rated 1 or 2 on a 6-point scale), that there was some room for improvement in 26% of areas (rated 3 or 4), and that roughly 10% of the areas assessed were urgently in need of intervention (rated 5 or 6). A second hygienist independently assessed a subset of areas to evaluate inter-rater reliability. The reliability of the overall exposure prevention ratings was excellent (weighted kappa = 0.84). The rating scheme has good discriminatory power and reliability and shows promise as a broadly applicable and inexpensive tool for intervention needs assessment and effectiveness evaluation. Validation studies are needed as a next step. This assessment method complements quantitative exposure assessment with an upstream prevention focus.
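The inter-rater reliability statistic cited above (weighted kappa = 0.84) can be computed from two raters' 6-point ratings. The sketch below is illustrative only, not the authors' code; it implements standard Cohen's weighted kappa from first principles, assuming a linear disagreement weighting (the abstract does not state which weighting the authors used).

```python
def weighted_kappa(r1, r2, categories, weight="linear"):
    """Cohen's weighted kappa for two raters over an ordered scale.

    r1, r2     : paired ratings from rater 1 and rater 2
    categories : the full ordered scale, e.g. [1, 2, 3, 4, 5, 6]
    weight     : "linear" or "quadratic" disagreement weights
    (Degenerate cases, e.g. a rater using only one category, are not handled.)
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)

    # Observed agreement matrix.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1

    # Expected matrix under chance agreement (product of marginals).
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    exp = [[row[i] * col[j] / n for j in range(k)] for i in range(k)]

    # Disagreement weight: 0 on the diagonal, growing with distance.
    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weight == "linear" else d ** 2

    num = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w(i, j) * exp[i][j] for i in range(k) for j in range(k))
    return 1.0 - num / den
```

Perfect agreement yields kappa = 1.0, chance-level agreement yields roughly 0, and values near 0.84, as reported here, are conventionally interpreted as excellent.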

    Is using the strengths and difficulties questionnaire in a community sample the optimal way to assess mental health functioning?

    An important characteristic of a screening tool is its discriminant ability, that is, the measure’s accuracy in distinguishing between those with and without mental health problems. The current study examined the inter-rater agreement and screening concordance of the parent and teacher versions of the SDQ at scale, subscale and item levels, with a view to identifying the items with the most informant discrepancies, and determining whether the concordance between parent and teacher reports on some items has the potential to influence decision making. Cross-sectional data from parent and teacher reports of the mental health functioning of a community sample of 299 students with and without disabilities from 75 different primary schools in Perth, Western Australia were analysed. The study found that: a) intraclass correlations between parent and teacher ratings of children’s mental health on the SDQ were fair at the individual child level; b) the SDQ only demonstrated clinical utility when there was agreement between teacher and parent reports using the possible or 90% dichotomisation system; and c) three individual items had positive likelihood ratio scores indicating clinical utility. Of note was the finding that the negative likelihood ratio, or likelihood of disregarding the absence of a condition when both parents and teachers rate the item as absent, was not significant. Taken together, these findings suggest that the SDQ is not optimised for use in community samples and that further psychometric evaluation of the SDQ in this context is clearly warranted.
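The positive and negative likelihood ratios discussed above follow directly from a screener's sensitivity and specificity against a reference diagnosis. The sketch below is a generic illustration, not the study's analysis code; the counts are hypothetical.

```python
def likelihood_ratios(tp, fp, fn, tn):
    """Screening likelihood ratios from a 2x2 confusion table.

    LR+ = sensitivity / (1 - specificity): how much a positive screen
          raises the odds of the condition (> 10 is strong evidence).
    LR- = (1 - sensitivity) / specificity: how much a negative screen
          lowers the odds (< 0.1 is strong evidence of absence).
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return lr_pos, lr_neg

# Hypothetical counts, for illustration only:
# 40 true positives, 10 false negatives, 90 true negatives, 10 false positives.
lr_pos, lr_neg = likelihood_ratios(tp=40, fp=10, fn=10, tn=90)
```

An LR+ well above 1 means an item flagged by both informants usefully rules the condition in; an LR− near 1, as reported for the SDQ items here, means a jointly negative rating adds little diagnostic information.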

    Internal Consistency, Test–Retest Reliability and Measurement Error of the Self-Report Version of the Social Skills Rating System in a Sample of Australian Adolescents

    The Social Skills Rating System (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States (US) samples are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187) from five randomly selected public schools in Perth, Western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test–retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study findings support the use of multiple informants (e.g. teacher and parent reports), rather than student self-reports alone, as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).
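The measurement error estimated from a test–retest design is commonly summarised as the standard error of measurement (SEM) and its derived minimal detectable change. The sketch below is one standard formulation, not necessarily the computation used in this study; the scores are hypothetical.

```python
import math
import statistics

def measurement_error(test, retest):
    """Test-retest measurement error for paired scores.

    SEM   = SD of the test-retest differences / sqrt(2)
    MDC95 = 1.96 * sqrt(2) * SEM, the smallest change in an individual's
            score that exceeds measurement noise at the 95% level.
    """
    diffs = [a - b for a, b in zip(test, retest)]
    sem = statistics.stdev(diffs) / math.sqrt(2)
    mdc95 = 1.96 * math.sqrt(2) * sem
    return sem, mdc95

# Hypothetical total-scale scores at baseline and 4-week retest.
sem, mdc95 = measurement_error([10, 12, 14, 16], [11, 11, 15, 15])
```

As the abstract notes, an SEM (or MDC) is only clinically interpretable once compared against a Minimum Clinically Important Difference for the population in question.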