9 research outputs found

    Pedagogical agents: Influences of artificially generated instructor personas on taking chances

    Educational institutes are currently facing the new normality that an ongoing pandemic situation has brought to teaching and learning. Distributed learning, with content that blends over several platforms and locations, needs to be created with didactic expertise in a feasible manner. At the same time, the possibilities for creating and distributing digital content have developed rapidly. Advanced computing supports the creation of artificial images, natural speech, and even natural-looking but non-existent persons. Since such generative content is often also published under a Creative Commons license, it presents a viable option for designing learning content, assignments, or instructions for tasks. However, there is still limited evidence on how, for example, generated pedagogical agents (tutors) influence behaviour and decisions. This study investigated the influences of artificially generated tutor personas in a decision-making task distributed internationally on the Google Play store. The field experiment extended the balloon analogue risk task (BART) with instructions from generated persona photographs to evaluate potential influences on risk-taking behaviour. In a between-subject design, either a female tutor, a male tutor, or no tutor picture at all was presented during the task. The results (N=74) show a higher risk propensity when displaying a male artificial instructor compared to a female instructor. Participants also proceeded with greater caution when instructed by a female tutor, reflecting longer before initiating the next step to pump up the balloon. Further lines of research and experiences from the distribution of an investigative instruction app on Google Play are summarised in the concluding implications
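The BART paradigm this abstract builds on can be sketched in a few lines. The function name, pop-threshold range, and payoff value below are illustrative assumptions, not the study's actual parameters:

```python
import random

def run_bart_trial(pump_decisions, max_pumps=32, reward_per_pump=0.05, rng=None):
    """Simulate one balloon of the balloon analogue risk task (BART).

    pump_decisions: iterable of booleans, True meaning "pump again".
    The balloon pops at a threshold drawn uniformly from 1..max_pumps;
    popping forfeits the earnings accumulated on this balloon.
    Returns (pumps_taken, earnings).
    """
    rng = rng or random.Random()
    pop_at = rng.randint(1, max_pumps)
    pumps = 0
    for keep_pumping in pump_decisions:
        if not keep_pumping:
            break  # participant stops and banks the current earnings
        pumps += 1
        if pumps >= pop_at:
            return pumps, 0.0  # balloon popped, earnings lost
    return pumps, pumps * reward_per_pump
```

Risk propensity is then typically operationalised as the mean number of pumps on balloons that did not pop; the reflection time before each pump, which the study also measured, would be logged separately by the app.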

    Studying with the Help of Digital Tutors: Design Aspects of Conversational Agents that Influence the Learning Process

    Conversational agents such as Apple’s Siri or Amazon’s Alexa are becoming more and more prevalent. Almost every smart device comes equipped with such an agent. While on the one hand they can make menial everyday tasks a lot easier for people, there are also more sophisticated use cases in which conversational agents can be helpful. One of these use cases is tutoring in higher education. Several systems to support both formal and informal learning have been developed. There have been many studies on single characteristics of pedagogical conversational agents and how these influence learning outcomes. What is still missing, however, is an overview of and guideline for the atomic design decisions that need to be taken into account when creating such a system. Based on a review of articles on pedagogical conversational agents, this paper provides an extension of existing classifications of characteristics to include more fine-grained design aspects

    Designing a Chatbot Social Cue Configuration System

    Social cues (e.g., gender, age) are important design features of chatbots. However, choosing a social cue design is challenging. Although much research has empirically investigated social cues, chatbot engineers have difficulty accessing this knowledge. Descriptive knowledge is usually embedded in research articles and difficult to apply as prescriptive knowledge. To address this challenge, we propose a chatbot social cue configuration system that supports chatbot engineers in accessing descriptive knowledge in order to make justified social cue design decisions (i.e., decisions grounded in empirical research). We derive two design principles that describe how to extract and transform descriptive knowledge into a prescriptive and machine-executable representation. In addition, we evaluate the prototypical instantiations in an exploratory focus group and at two practitioner symposia. Our research addresses a contemporary problem and contributes a generalizable concept to support researchers as well as practitioners in leveraging existing descriptive knowledge in the design of artifacts
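One way to picture the "prescriptive and machine-executable representation" the abstract mentions is a small rule base that ties each cue recommendation to its supporting evidence. The cue names, contexts, and evidence strings below are invented placeholders, not the paper's actual design knowledge:

```python
# Hypothetical rule base: each entry links a social cue recommendation
# to the empirical evidence it rests on, so that a design decision can
# be traced back to research findings.
CUE_RULES = [
    {"cue": "typing_indicator", "context": "customer_service",
     "recommendation": "enabled",
     "evidence": "studies on perceived humanness of response delays"},
    {"cue": "agent_name", "context": "customer_service",
     "recommendation": "human first name",
     "evidence": "studies on anthropomorphism and trust"},
    {"cue": "emoji_use", "context": "banking",
     "recommendation": "disabled",
     "evidence": "studies on perceived professionalism"},
]

def recommend_cues(context):
    """Return every cue recommendation that applies to a chatbot context."""
    return [rule for rule in CUE_RULES if rule["context"] == context]
```

A configuration system built this way would render each matching rule together with its evidence, letting the engineer make a justified, research-grounded choice rather than an ad hoc one.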

    Improving Remedial Middle School Standardized Test Scores

    The purpose of this applied study was to solve the problem of low standardized test scores in a remedial class for a middle school in southern Virginia and to formulate a solution to address the problem. The central research question that data collection attempted to answer was: How can the problem of low standardized test scores in a remedial math class be solved in a middle school in southern Virginia? Data were collected in three ways. First, interviews of teachers and administrators of the remedial math class, called Math Lab, were conducted. These interviews were transcribed and coded, with the codes collected into themes and then displayed visually. Second, an online discussion board was conducted with current and former teachers of Math Lab, school administrators, and classroom math teachers. Third, surveys of teachers and administrators with knowledge of Math Lab and how it impacted students were completed. The quantitative surveys were analyzed by computing descriptive statistics of the data. After reviewing all data sources, a solution to address the problem was created that included designing a curriculum for Math Lab, requiring communication between Math Lab teachers and general classroom math teachers, and professional development for the Math Lab teacher on teaching remedial classes

    Unraveling the Double-Bind: An Investigation of Black and Latina Women in STEM

    Civil rights activist Robert P. Moses was a driving force in defining equitable dissemination of quality science, technology, engineering, and math (STEM) education as an act of social justice. My work borrows this frame to highlight access to STEM education as a civil rights issue and to emphasize the importance of taking a social justice approach to interventions for those who experience intersecting systems of oppression (i.e., Black and Latina women), and whose needs previous intervention efforts have not adequately addressed. Ameliorating racial and gender disparities through fostering psychological safety (e.g., belonging) in STEM fields has been a substantive focus for intervention research. However, these interventions have overwhelmingly focused on 1) a single-axis perspective of fostering psychological safety (i.e., only focusing on either students’ race or gender) and 2) shifting students’ attitudes and behavior individually. Through two experimental online studies, I provide evidence for the importance of leveraging instructors as a point of intervention to increase psychological safety for Black and Latina women in STEM. The first study demonstrates that differing levels of (un)shared social identities directly work to influence psychological safety for Black and Latina women in STEM contexts which, in turn, shape their educational decision making. Additionally, this study found strong evidence of ethnic prominence: Black and Latina women reported maximal psychological safety from and higher intentions to enroll in racial ingroup professors’ classes. Study 2 investigates the utility of teaching philosophies as a subtle intervention to increase psychological safety of outgroup STEM instructors for Black and Latina women. This study found that belonging-based teaching philosophies (i.e., belonging and belonging + social justice) resulted in higher perceptions of advocacy, safety, and intention to enroll regardless of participant race. 
The effect of the social justice teaching philosophy on these perceptions varied as a function of participant race. Overall, these studies emphasize the importance of taking an intersectional approach to social psychological research, especially for intervention work. Additionally, this work offers theoretical and applied implications for educational interventions aimed at achieving parity in STEM domains with a particular focus on the efficacy of imbuing STEM contexts with social justice narratives

    Building Embodied Conversational Agents:Observations on human nonverbal behaviour as a resource for the development of artificial characters

    "Wow, this is so cool!" This is what I most probably yelled, back in the 90s, when my first computer program on our MSX computer turned out to do exactly what I wanted it to do. The program contained the following instruction: COLOR 10,1,1. After hitting enter, it would change the screen color from light blue to dark yellow. A few years after that experience, Microsoft Windows was introduced. Windows came with an intuitive graphical user interface that was designed to allow all people, including those who would not consider themselves experienced computer users, to interact with the computer. This was a major step forward in human-computer interaction, as from that point forward no complex programming skills were required anymore to perform such actions as adapting the screen color. Changing the background was just a matter of pointing the mouse to the desired color on a color palette. "Wow, this is so cool!" This is what I shouted, again, 20 years later. This time my new smartphone successfully skipped to the next song on Spotify because I literally told my smartphone, with my voice, to do so. Being able to operate your smartphone with natural language through voice control can be extremely handy, for instance when listening to music while showering. Again, the option to operate a computer with voice instructions turned out to be a significant advance in human-computer interaction. From that point on, computers could be instructed without the use of a screen, mouse or keyboard; they could be operated simply by telling the machine what to do. In other words, I have personally witnessed how, within only a few decades, the way people interact with computers has changed drastically, starting as a rather technical and abstract enterprise and becoming something that is both natural and intuitive and does not require any advanced computer background. 
    Accordingly, while computers used to be machines that could only be operated by technically oriented individuals, they have gradually changed into devices that are part of many people’s households, just as much as a television, a vacuum cleaner or a microwave oven. The introduction of voice control is a significant feature of the newer generation of interfaces in the sense that these have become more "anthropomorphic" and try to mimic the way people interact in daily life, where indeed the voice is a universal device that humans exploit in their exchanges with others. The question then arises whether it would be possible to go even one step further, where people, as in science-fiction movies, interact with avatars or humanoid robots, and can have a proper conversation with a computer-simulated human that is indistinguishable from a real human. An interaction with a human-like representation of a computer that behaves, talks and reacts like a real person would imply that the computer is able not only to produce and understand messages transmitted auditorily through the voice, but also to rely on the perception and generation of different forms of body language, such as facial expressions, gestures or body posture. At the time of writing, developments of this next step in human-computer interaction are in full swing, but such interactions are still rather constrained when compared to the way humans have their exchanges with other humans. It is interesting to reflect on what such future human-machine interactions may look like. When we consider other products that have been created in history, it is sometimes striking to see that some of these have been inspired by things that can be observed in our environment, yet at the same time do not have to be exact copies of those phenomena. For instance, an airplane has wings just as birds do, yet the wings of an airplane do not make the typical movements a bird would produce to fly. 
    Moreover, an airplane has wheels, whereas a bird has legs. At the same time, an airplane has made it possible for humans to cover long distances in a fast and smooth manner in a way that was unthinkable before it was invented. The example of the airplane shows how new technologies can have "unnatural" properties, but can nonetheless be very beneficial and impactful for human beings. This dissertation centers on the practical question of how virtual humans can be programmed to act more human-like. The four studies presented in this dissertation share the same underlying question: how can parts of human behavior be captured such that computers can use them to become more human-like? Each study differs in method, perspective and specific questions, but all are aimed at gaining insights and directions that help push forward the development of human-like behavior in computers and investigate (the simulation of) human conversational behavior. The rest of this introductory chapter gives a general overview of virtual humans (also known as embodied conversational agents), their potential uses and the engineering challenges, followed by an overview of the four studies

    Producing Acoustic-Prosodic Entrainment in a Robotic Learning Companion to Build Learner Rapport

    Get PDF
    With advances in automatic speech recognition, spoken dialogue systems are assuming increasingly social roles. There is a growing need for these systems to be socially responsive, capable of building rapport with users. In human-human interactions, rapport is critical to patient-doctor communication, conflict resolution, educational interactions, and social engagement. Rapport between people promotes successful collaboration, motivation, and task success. Dialogue systems which can build rapport with their user may produce similar effects, personalizing interactions to create better outcomes. This dissertation focuses on how dialogue systems can build rapport utilizing acoustic-prosodic entrainment. Acoustic-prosodic entrainment occurs when individuals adapt their acoustic-prosodic features of speech, such as tone of voice or loudness, to one another over the course of a conversation. Correlated with liking and task success, a dialogue system which entrains may enhance rapport. Entrainment, however, is very challenging to model. People entrain on different features in many ways and how to design entrainment to build rapport is unclear. The first goal of this dissertation is to explore how acoustic-prosodic entrainment can be modeled to build rapport. Towards this goal, this work presents a series of studies comparing, evaluating, and iterating on the design of entrainment, motivated and informed by human-human dialogue. These models of entrainment are implemented in the dialogue system of a robotic learning companion. Learning companions are educational agents that engage students socially to increase motivation and facilitate learning. As a learning companion’s ability to be socially responsive increases, so do vital learning outcomes. A second goal of this dissertation is to explore the effects of entrainment on concrete outcomes such as learning in interactions with robotic learning companions. 
    This dissertation results in contributions both technical and theoretical. Technical contributions include a robust and modular dialogue system capable of producing prosodic entrainment and other socially responsive behavior. The system is one of the first of its kind, and the results demonstrate that an entraining, social learning companion can positively build rapport and increase learning. This dissertation provides support for exploring phenomena like entrainment to enhance factors such as rapport and learning, and provides a platform with which to explore these phenomena in future work.
    Doctoral Dissertation, Computer Science, 201
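The entrainment mechanism described in this abstract can be approximated by nudging an agent's synthesis parameters partway toward the user's measured values on each turn. The convergence fraction and clamping bounds below are illustrative assumptions, not the dissertation's actual model:

```python
def entrain(agent_value, user_value, alpha=0.3, lo=None, hi=None):
    """Move one prosodic parameter (e.g. mean pitch in Hz, or intensity
    in dB) a fraction alpha of the way toward the user's measured value,
    optionally clamped to a plausible synthesis range."""
    new_value = agent_value + alpha * (user_value - agent_value)
    if lo is not None:
        new_value = max(lo, new_value)  # keep within the voice's floor
    if hi is not None:
        new_value = min(hi, new_value)  # and within its ceiling
    return new_value

# Example: a companion speaking at a mean pitch of 220 Hz adapts
# halfway toward a user measured at 180 Hz.
pitch = entrain(220.0, 180.0, alpha=0.5)  # -> 200.0 Hz
```

Applied per feature and per turn, this yields gradual convergence rather than outright mimicry, which is one common way such partial entrainment is parameterised in dialogue systems.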