    Machine learning to analyze single-case data: a proof of concept

    Visual analysis is the most commonly used method for interpreting data from single-case designs, but levels of interrater agreement remain a concern. Although structured aids to visual analysis such as the dual-criteria (DC) method may increase interrater agreement, the accuracy of the analyses may still benefit from improvement. Thus, the purpose of our study was to (a) examine correspondence between visual analysis and models derived from different machine learning algorithms, and (b) compare the accuracy, Type I error rate, and power of each of our models with those produced by the DC method. We trained our models on a previously published dataset and then conducted analyses on both nonsimulated and simulated graphs. All our models derived from machine learning algorithms matched the interpretation of the visual analysts more frequently than the DC method did. Furthermore, the machine learning algorithms outperformed the DC method on accuracy, Type I error rate, and power. Our results support the somewhat unorthodox proposition that behavior analysts may use machine learning algorithms to supplement their visual analysis of single-case data, but more research is needed to examine the potential benefits and drawbacks of such an approach.
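
    As a point of reference, the dual-criteria (DC) method that serves as the benchmark above can be expressed compactly in code. The following Python sketch is an illustration, not the study's implementation: it projects the baseline mean and trend lines into the treatment phase and derives the required point count from a binomial null with p = .5; the function name, the .05 cutoff, and the assumed direction of change are assumptions made for the example.

        import numpy as np
        from scipy import stats

        def dual_criteria(baseline, treatment, expected="increase"):
            """Dual-criteria logic: project the baseline mean line and OLS
            trend line into the treatment phase, then count treatment points
            falling beyond BOTH lines in the expected direction."""
            baseline = np.asarray(baseline, dtype=float)
            treatment = np.asarray(treatment, dtype=float)

            # Criterion line 1: baseline mean, extended across the treatment phase.
            mean_line = np.full(len(treatment), baseline.mean())

            # Criterion line 2: OLS trend fitted to the baseline, extrapolated forward.
            x = np.arange(len(baseline))
            slope, intercept, *_ = stats.linregress(x, baseline)
            x_new = np.arange(len(baseline), len(baseline) + len(treatment))
            trend_line = intercept + slope * x_new

            beyond = ((treatment > mean_line) & (treatment > trend_line)
                      if expected == "increase"
                      else (treatment < mean_line) & (treatment < trend_line))

            # Required count: smallest k with P(X >= k) < .05 under Binomial(n, .5).
            n = len(treatment)
            k = next(k for k in range(n + 1) if stats.binom.sf(k - 1, n, 0.5) < 0.05)
            return {"points_beyond": int(beyond.sum()), "required": k,
                    "change_detected": bool(beyond.sum() >= k)}

    For example, dual_criteria([3, 4, 3, 5, 4], [6, 7, 8, 7, 9]) detects a change because all five treatment points exceed both projected baseline lines, meeting the binomial criterion of five.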

    Using AB designs with nonoverlap effect size measures to support clinical decision-making: a Monte Carlo validation

    Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine the validity of using nonoverlap measures of effect size to detect changes in AB designs using simulated data. In our analyses, we determined thresholds for three effect size measures beyond which the Type I error rate would remain below 0.05 and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over the Type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect size measures to rigorously assess progress in practice.
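
    The abstract does not name the three nonoverlap measures, so the Python sketch below uses the nonoverlap of all pairs (NAP) as one representative measure, with an i.i.d. normal null model (a published simulation would typically also vary autocorrelation and trend). The function names, sample sizes, and threshold grid are assumptions made for the example.

        import numpy as np

        def nap(baseline, treatment):
            """Nonoverlap of All Pairs: the proportion of (baseline, treatment)
            pairs in which the treatment point exceeds the baseline point,
            with ties counted as half."""
            b = np.asarray(baseline, dtype=float)
            t = np.asarray(treatment, dtype=float)
            wins = (t[None, :] > b[:, None]).sum()    # pairwise comparisons
            ties = (t[None, :] == b[:, None]).sum()
            return (wins + 0.5 * ties) / (len(b) * len(t))

        def type_i_error(threshold, n_base=5, n_treat=5, reps=10_000, seed=0):
            """Monte Carlo estimate of the Type I error rate: simulate AB graphs
            with no true change and count how often NAP reaches the threshold."""
            rng = np.random.default_rng(seed)
            false_positives = 0
            for _ in range(reps):
                series = rng.normal(0.0, 1.0, n_base + n_treat)  # null: no effect
                if nap(series[:n_base], series[n_base:]) >= threshold:
                    false_positives += 1
            return false_positives / reps

        # Scan candidate thresholds for the smallest one keeping Type I below .05;
        # power would then be estimated the same way with a true shift added.
        for threshold in np.arange(0.80, 1.001, 0.02):
            print(f"NAP >= {threshold:.2f}: Type I = {type_i_error(threshold):.3f}")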

    Using the prevent-teach-reinforce model to reduce challenging behaviors in children with autism spectrum disorder in home settings: a feasibility study

    Background. Children with autism spectrum disorder (ASD) often engage in high levels of challenging behaviors, which parents may find difficult to reduce in home settings. The purpose of our study was to address this issue by examining the effects of adapting the Prevent-Teach-Reinforce (PTR) model to support parents in reducing challenging behaviors in children with ASD. Method. We conducted a non-blinded randomized trial to compare the effects of the PTR model with a business-as-usual, less intensive intervention (i.e., a 3-hr parent training) on challenging and desirable behaviors (N = 24). Results. Both the PTR model and the 3-hr parent training reduced challenging behaviors and increased desirable behaviors. Moreover, parents implemented the PTR model with high fidelity and rated it highly for social acceptability. Conclusions. This feasibility study showed that a future trial comparing the PTR model with a less intensive intervention in families is possible. However, research with a larger sample is essential to determine whether the PTR model is more effective than less intensive treatments (e.g., parent training).

    A comparison of video-based interventions to teach data entry to adults with intellectual disabilities: a replication and extension

    Researchers have demonstrated that video-based interventions are effective at teaching a variety of skills to individuals with intellectual disabilities. To replicate and extend this line of research, we initially planned to compare the effects of video modeling and video prompting on the acquisition of a novel work skill (i.e., data entry) in two adults with moderate intellectual disabilities using an alternating treatments design. When both interventions failed to improve performance, the instructors sequentially introduced a least-to-most instructor-delivered prompting procedure. The results indicated that the introduction of instructor prompts considerably increased correct responding in one participant during video modeling and in both participants during video prompting. Overall, the study suggests that practitioners should consider incorporating instructor-delivered prompts from the outset, or at least when no improvements in performance are observed, when using video-based interventions to teach new work skills to individuals with intellectual disabilities.

    Concurrent validity of open-ended functional assessment interviews with functional analysis

    Open-Ended Functional Assessment Interviews have limited empirical support for their concurrent validity with functional analysis. To address this issue, we conducted a study wherein 176 independent behavior analysts relied on data collected using Open-Ended Functional Assessment Interviews to identify the function of challenging behavior in four children with autism. Then, we compared the results of their analyses with those of a traditional functional analysis. Our results showed that the conclusions drawn by behavior analysts using the Open-Ended Functional Assessment Interviews corresponded with the outcomes of functional analyses in 74% of cases. These findings suggest that the Open-Ended Functional Assessment Interview may inform the development of initial hypotheses for functional analyses.