3 research outputs found

    Using Stimulus Fading to Increase the Generality of Multiple Schedule Arrangements

    No full text
    We evaluated two methods to increase the generality of functional communication training (FCT) by incorporating naturally occurring stimuli within a multiple schedule thinning arrangement. In the present study, we used a stimulus control transfer procedure to determine the degree to which discriminated responding can be transferred from arbitrary to naturally occurring stimuli while maintaining high levels of functional communication and low rates of destructive behavior. Following the acquisition of discriminative control in the presence of an arbitrary stimulus, we transferred discriminative properties to naturally occurring activities that signal the unavailability of reinforcement. We compared rates of acquisition of discriminated functional communication responses and rates of destructive behavior using the stimulus control transfer procedure to direct discrimination training with naturally occurring stimuli. Results of the evaluation support the efficacy of both teaching strategies; however, direct discrimination training resulted in higher levels of discriminated responding, lower rates of destructive behavior, and fewer sessions to reach mastery criteria relative to stimulus fading.

    Detecting aggression in clinical treatment videos

    No full text
    Many clinical spaces are outfitted with centralized video recording systems to monitor patient–client interactions. Given the increasing interest in video-based machine learning methods, these clinical recordings are an apparent opportunity for automating observational data collection. To explore this, seven patients had videos of their functional assessment and treatment sessions annotated by coders trained by our clinical team. Because commonly used clinical software has inherent limitations in aligning behavioral and video data, a custom software tool was employed to address this functionality gap. After developing a Canvas-based coder training course for this tool, a team of six trained coders annotated 82.33 h of data. Two machine learning approaches were considered, both using a convolutional neural network as a video feature extractor. The first approach used a recurrent network as the classifier on the extracted features, and the second used a Transformer architecture. Both models produced promising metrics, indicating that detecting aggression from clinical videos is feasible and that the approach generalizes. Model performance was directly tied to the feature extractor's performance on ImageNet, with ConvNeXtXL producing the best-performing models. This has applications in automating patient incident response to improve patient and clinician safety and could be directly integrated into existing video management systems for real-time analysis.
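    The abstract describes a two-stage pipeline: a frame-level convolutional feature extractor followed by a temporal classifier (recurrent network or Transformer). The sketch below illustrates the first variant only, assuming PyTorch and torchvision; the class name, hyperparameters, and the ConvNeXt-Base backbone (a smaller stand-in for the ConvNeXt-XL extractor the study found best) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import convnext_base


class ClipAggressionClassifier(nn.Module):
    """Hypothetical sketch: per-frame CNN features aggregated over time by a GRU."""

    def __init__(self, feat_dim: int = 1024, hidden_size: int = 256, num_classes: int = 2):
        super().__init__()
        backbone = convnext_base(weights=None)  # ImageNet-pretrained weights in practice
        backbone.classifier[2] = nn.Identity()  # drop the ImageNet head, keep pooled features
        self.extractor = backbone               # (B*T, 3, H, W) -> (B*T, feat_dim)
        self.rnn = nn.GRU(input_size=feat_dim, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, frames, 3, H, W)
        b, t, c, h, w = clips.shape
        feats = self.extractor(clips.view(b * t, c, h, w))  # per-frame features
        _, last = self.rnn(feats.view(b, t, -1))            # temporal aggregation
        return self.head(last.squeeze(0))                   # clip-level logits


if __name__ == "__main__":
    logits = ClipAggressionClassifier()(torch.randn(1, 8, 3, 224, 224))
    print(logits.shape)  # torch.Size([1, 2])
```

    The Transformer variant mentioned in the abstract would replace the GRU with a self-attention encoder over the same frame-feature sequence; the frozen-or-fine-tuned choice for the extractor is a design decision the abstract does not specify.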