10 research outputs found

    The Upstream Sources of Bias: Investigating Theory, Design, and Methods Shaping Adaptive Learning Systems

    Adaptive systems in education need to ensure population validity to meet the needs of all students and produce equitable outcomes. Recent research highlights how these systems encode societal biases, leading to discriminatory behaviors towards specific student subpopulations. However, the focus has mostly been on investigating bias in predictive modeling, particularly in its downstream stages such as model development and evaluation. My dissertation work hypothesizes that upstream sources (i.e., theory, design, and training-data collection methods) in the development of adaptive systems also contribute to bias in these systems, highlighting the need for a nuanced approach to conducting fairness research. By empirically analyzing student data previously collected from various virtual learning environments, I investigate demographic disparities in three cases representative of the aspects that shape technological advancements in education: 1) non-conformance of data to a widely accepted theoretical model of emotion, 2) differing implications of technology design for student outcomes, and 3) varying effectiveness of methodological improvements in annotated data collection. In doing so, I challenge implicit assumptions of generalizability in theory, design, and methods, and provide evidence-based commentary on how future research and design practices in adaptive and artificially intelligent educational systems should account for diversity.

    Phishing Training: A Preliminary Look at the Effects of Different Types of Training

    In this paper, we present the preliminary results of an experiment conducted to observe the impact of different training techniques on the likelihood of participants identifying and reporting phishing messages. Three training approaches were used: general video/quiz training, just-in-time training with simulated phishing emails, and a leaderboard that awarded users points for correctly forwarding phishing messages and penalized them for incorrect ones. The experiment emulated a normal working day of an executive assistant to a manager in an organization. Each participant was expected to accomplish work tasks and respond to work-related emails while watching for and reporting phishing messages. We observed that both general training and the presence of a leaderboard decreased the propensity to click on a phishing message, while we found no effect for different types of just-in-time training.
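    The leaderboard mechanic described in this abstract reduces to a simple scoring rule: points for forwarding genuine phishing messages, a penalty for forwarding legitimate ones. A minimal sketch in Python, where the point values and function names are illustrative assumptions rather than details from the paper:

```python
def score_report(is_phishing: bool, points: int = 10, penalty: int = 5) -> int:
    """Score delta for one forwarded message: reward a true phishing
    report, penalize a false alarm. Point values are illustrative."""
    return points if is_phishing else -penalty


def leaderboard_total(forwarded_flags) -> int:
    """Sum score deltas over a participant's forwarded messages; each
    flag records whether the forwarded message was actually phishing."""
    return sum(score_report(flag) for flag in forwarded_flags)
```

    Under these assumed values, a participant who forwards two phishing messages and one legitimate email would score 10 + 10 - 5 = 15.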

    Predicting Math Success in an Online Tutoring System Using Language Data and Click-Stream Variables: A Longitudinal Analysis

    Previous studies have demonstrated strong links between students' linguistic knowledge, their affective language patterns, and their success in math. Other studies have shown that demographic and click-stream variables in online learning environments are important predictors of math success. This study builds on this research in two ways. First, it combines linguistic and click-stream variables along with demographic information to increase prediction rates for math success. Second, it examines how random variance, as found in repeated participant data, can explain math success beyond linguistic, demographic, and click-stream variables. The findings indicate that linguistic, demographic, and click-stream factors explained about 14% of the variance in math scores. These variables combined with random factors explained about 44% of the variance.
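    The jump from roughly 14% to 44% explained variance once participant-level random factors are included can be illustrated with a small simulation: repeated scores from the same student share a student-specific intercept, and a model that accounts for those intercepts explains far more variance than the fixed predictor alone. This is a hedged sketch of the general idea in pure Python, not a reproduction of the study's mixed-effects analysis; all numbers and variable names are invented for illustration:

```python
import random
import statistics

random.seed(0)

# Simulate repeated math scores: one fixed "linguistic" predictor plus a
# per-student random intercept (the repeated-participant variance).
data = []  # (student_id, predictor, score)
for student in range(50):
    intercept = random.gauss(0, 3)            # student-level random effect
    for _ in range(5):                        # repeated observations
        x = random.gauss(0, 1)                # e.g. a linguistic feature
        y = 2.0 * x + intercept + random.gauss(0, 1)
        data.append((student, x, y))

xs = [x for _, x, _ in data]
ys = [y for _, _, y in data]

def r_squared(actual, predicted):
    mean = statistics.fmean(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# Fixed-effects-only fit: closed-form simple OLS on the predictor.
x_bar, y_bar = statistics.fmean(xs), statistics.fmean(ys)
slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / \
        sum((x - x_bar) ** 2 for x in xs)
b0 = y_bar - slope * x_bar
fixed_pred = [b0 + slope * x for x in xs]

# Crude stand-in for random intercepts: shift each student's predictions
# by that student's mean residual.
residuals = {}
for (student, _, y), p in zip(data, fixed_pred):
    residuals.setdefault(student, []).append(y - p)
offsets = {s: statistics.fmean(r) for s, r in residuals.items()}
mixed_pred = [p + offsets[s] for (s, _, _), p in zip(data, fixed_pred)]

r2_fixed = r_squared(ys, fixed_pred)   # predictor alone
r2_mixed = r_squared(ys, mixed_pred)   # predictor + student intercepts
```

    Because the simulated student intercepts carry most of the score variance, `r2_mixed` substantially exceeds `r2_fixed`, mirroring the gap the study reports between fixed predictors alone and the model with random factors.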

    Instantiation of the DiaCoM framework highlighting multimodal conceptualization and measurement of teacher noticing in human-AI-partnered classrooms

    Instantiation of the DiaCoM framework to illustrate the focus of the current empirical study, involving teachers’ diagnostic behaviors, personal characteristics, and situational cues (adapted from Loibl et al., 2020). Investigating analytics-supported teacher noticing using fine-grained data on student-AI and teacher-AI interaction. As represented in the Figure, we study the interplay between situational cues (i.e., students’ in-the-moment struggle when learning with the AI system), teachers’ personal characteristics (i.e., prior knowledge of low- versus high-performing students), and teachers’ diagnostic behaviors. With the availability of novel data streams to operationalize teacher noticing in the wild, we can also generate new insights into what effects teacher noticing has on student learning.

    Analyzing Learner Affect in a Scenario-Based Intelligent Tutoring System

    Scenario-based tutoring systems influence affective states through two distinct mechanisms during learning: (1) reactions to performance feedback and (2) responses to the scenario context or events. To explore the role of affect and engagement, a scenario-based ITS was instrumented to support unobtrusive facial affect detection. Results from a sample of university students showed relatively few traditional academic affective states such as confusion or frustration, even at decision points and after poor performance (e.g., incorrect responses). This may show evidence of "over-flow," with a high level of engagement and interest but insufficient confusion/disequilibrium for optimal learning.

    Engaging with the Scenario: Affect and Facial Patterns from a Scenario-Based Intelligent Tutoring System

    Facial expression trackers output measures for facial action units (AUs) and are increasingly being used in learning technologies. In this paper, we compile patterns of AUs seen in related work and use factor analysis to search for categories implicit in our corpus. Although there was some overlap between the factors in our data and previous work, we also identified factors seen in the broader literature but not previously reported in the context of learning environments. In a correlational analysis, we found evidence for relationships between factors and self-reported traits such as academic effort, study habits, and interest in the subject. In addition, we saw differences in average levels of factors between a video-watching activity and a decision-making activity. However, in this analysis, we were not able to isolate any facial expressions having a significant positive or negative relationship with either learning gain or performance once question difficulty and related factors were also considered. Given the overall low levels of facial affect in the corpus, further research will explore different populations and learning tasks to test the possible hypothesis that learners may have been in a pattern of “Over-Flow,” in which they were engaged with the system but not deeply thinking about the content or their errors.
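    The correlational analysis described in this abstract amounts to computing a correlation coefficient between per-learner factor scores and self-reported trait ratings. A minimal Pearson's r in pure Python; this is a sketch of the general technique with illustrative variable names, not the paper's exact pipeline:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences, e.g. a
    facial-AU factor score and a self-reported trait rating per learner."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

    A perfectly linear positive relationship yields r = 1 and a perfectly inverse one yields r = -1; values near 0 indicate no linear relationship between the factor and the trait.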