
    Toward the development of structured criteria for interpretation of functional analysis data.

    Using functional analysis results to prescribe treatments is the preferred method for developing behavioral interventions. Little is known, however, about the reliability and validity of visual inspection for the interpretation of functional analysis data. The purpose of this investigation was to develop a set of structured criteria for visual inspection of multielement functional analyses that, when applied correctly, would increase interrater agreement and agreement with interpretations reached by expert consensus. In Study 1, 3 predoctoral interns interpreted functional analysis graphs, and interrater agreement was low (M = .46). In Study 2, 64 functional analysis graphs were interpreted by a panel of experts, and then a set of structured criteria was developed that yielded interpretive results similar to those of the panel (exact agreement = .94). In Study 3, the 3 predoctoral interns from Study 1 were trained to use the structured criteria, and the mean interrater agreement coefficient increased to .81. The results suggest that (a) the interpretation of functional analysis data may be less reliable than is generally assumed, (b) decision-making rules used by experts in the interpretation of functional analysis data can be operationalized, and (c) individuals can be trained to apply these rules accurately to increase interrater agreement. Potential uses of the criteria are discussed.
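The exact-agreement statistic reported above can be computed as the proportion of graphs on which two raters reach the same interpretation. A minimal sketch follows; the function name and the sample ratings are illustrative, not taken from the study.

```python
def exact_agreement(ratings_a, ratings_b):
    """Proportion of cases in which two raters give the same interpretation."""
    if len(ratings_a) != len(ratings_b):
        raise ValueError("rating lists must be the same length")
    matches = sum(a == b for a, b in zip(ratings_a, ratings_b))
    return matches / len(ratings_a)

# Hypothetical interpretations of 8 functional analysis graphs
rater_1 = ["attention", "escape", "tangible", "automatic",
           "escape", "attention", "undifferentiated", "escape"]
rater_2 = ["attention", "escape", "automatic", "automatic",
           "escape", "attention", "undifferentiated", "attention"]
print(exact_agreement(rater_1, rater_2))  # → 0.75
```

Note that exact agreement does not correct for chance; coefficients such as kappa are often preferred when response categories are few.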

    Visual analysis of single-case time series: Effects of variability, serial dependence, and magnitude of intervention effects

    Visual analysis is the dominant method of analysis for single-case time series. The literature assumes that visual analysts will be conservative judges. We show that previous research into visual analysis has not adequately examined false alarm and miss rates or the effect of serial dependence. In order to measure false alarm and miss rates while varying serial dependence, amount of random variability, and effect size, 37 students undertaking a postgraduate course in single-case design and analysis were required to assess the presence of an intervention effect in each of 27 AB charts constructed using a first-order autoregressive model. Three levels of effect size and three levels of variability, representative of values found in published charts, were combined with autocorrelation coefficients of 0, 0.3, and 0.6 in a factorial design. False alarm rates were surprisingly high (16% to 84%). Positive autocorrelation and increased random variation both significantly increased the false alarm rates and interacted in a nonlinear fashion. Miss rates were relatively low (0% to 22%) and were not significantly affected by the design parameters. Thus, visual analysts were not conservative, and serial dependence did influence judgment.
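An AB chart of the kind described above can be generated by adding a phase-B level shift to first-order autoregressive (AR(1)) noise, y_t = phi * y_{t-1} + e_t. The sketch below is a minimal illustration; the phase lengths, effect size, and standard deviation are assumed values, not the study's parameters.

```python
import random

def simulate_ab_series(n_a=10, n_b=10, effect=2.0, sd=1.0, phi=0.3, seed=42):
    """Return an AB series: baseline (A) then intervention (B) raised by `effect`.

    Noise follows the AR(1) process y_t = phi * y_{t-1} + e_t,
    with e_t drawn from Normal(0, sd).
    """
    rng = random.Random(seed)
    noise, prev = [], 0.0
    for _ in range(n_a + n_b):
        prev = phi * prev + rng.gauss(0.0, sd)
        noise.append(prev)
    # Apply the intervention effect only to the B-phase observations
    return [y if t < n_a else y + effect for t, y in enumerate(noise)]

series = simulate_ab_series()
print(series)
```

With phi > 0, successive points drift together, which is exactly the serial dependence that inflated false alarm rates in the study: a chance run of elevated baseline points can mimic an intervention effect.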