Determining what people feel and think when interacting with humans and machines
Any interactive software program must interpret the user's actions and come up with an appropriate response that is intelligible and meaningful to the user. In most situations, the options open to the user are determined by the software and hardware, and the actions that can be carried out are unambiguous: the machine knows what it should do when the user carries out an action. In most cases, the user knows what he has to do by relying on conventions, which he may have learned from the instruction manual, by having seen them performed by somebody else, or by modifying a previously learned convention. Some, or most, of the time he simply finds out by trial and error. In user-friendly interfaces, the user knows, without having to read extensive manuals, what is expected of him and how he can get the machine to do what he wants. An intelligent interface is so called because it does not assume this kind of programming of the user by the machine; instead, the machine itself can figure out what the user wants and how he wants it, without the user having to go to the trouble of telling the machine in the way the machine dictates: he can do it in his own words. Or perhaps without using any words at all, as the machine is able to read off the user's intentions by observing his actions and expressions. Ideally, the machine should be able to determine what the user wants, what he expects, what he hopes will happen, and how he feels.
Robust Modeling of Epistemic Mental States
This work identifies and advances some research challenges in the analysis of facial features and their temporal dynamics with epistemic mental states in dyadic conversations. The epistemic states considered are Agreement, Concentration, Thoughtful, Certain, and Interest. In this paper, we perform a number of statistical analyses and simulations to identify the relationship between facial features and epistemic states. Non-linear relations are found to be more prevalent, while temporal features derived from the original facial features demonstrate a strong correlation with intensity changes. We then propose a novel prediction framework that takes facial features and their non-linear relation scores as input and predicts the different epistemic states in videos. The prediction of epistemic states is boosted when the classification of emotion-changing regions, such as rising, falling, or steady-state, is incorporated with the temporal features. The proposed predictive models predict the epistemic states with significantly improved accuracy: the correlation coefficient (CoERR) is 0.827 for Agreement, 0.901 for Concentration, 0.794 for Thoughtful, 0.854 for Certain, and 0.913 for Interest.
Comment: Accepted for publication in Multimedia Tools and Applications, Special Issue: Socio-Affective Technologies
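The abstract does not include code; as a rough illustration of the kind of pipeline it describes (per-frame facial features plus temporal derivatives feeding a nonlinear regressor for one epistemic state), here is a minimal Python sketch using scikit-learn with synthetic stand-in data. The feature set, model choice, and data shapes are assumptions for illustration, not the authors' implementation.

    # Minimal sketch: per-frame facial features plus temporal derivatives
    # feeding a nonlinear regressor for one epistemic state. The data here
    # are synthetic stand-ins; real features would come from a face tracker.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Stand-in for per-frame facial features: shape (n_frames, n_features).
    X = rng.normal(size=(2000, 17))
    # Stand-in for annotated intensity of one state (e.g., Agreement).
    y = np.tanh(X[:, 0] * X[:, 1]) + 0.1 * rng.normal(size=2000)

    # Temporal features: first-order differences capture intensity changes,
    # which the abstract reports as strongly correlated with the states.
    dX = np.vstack([np.zeros((1, X.shape[1])), np.diff(X, axis=0)])
    X_aug = np.hstack([X, dX])

    X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_tr, y_tr)

    # Correlation between predicted and annotated intensities, analogous
    # to the per-state correlation coefficients reported in the abstract.
    r = np.corrcoef(model.predict(X_te), y_te)[0, 1]
    print(f"correlation on held-out frames: {r:.3f}")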
Using Applied Behavior Analysis in Software to Help Tutor Individuals with Autism Spectrum Disorder
There are currently many tutoring software systems that have been designed for neurotypical children. These systems cover academic topics such as reading and math, and are made available through various technological mediums. The majority of these systems were not designed for use by children with special needs, in particular those who are diagnosed with Autism Spectrum Disorder. Since the 1970s, studies have been conducted on the use of Applied Behavior Analysis to help autistic children learn [1]. This teaching methodology has proven to be very effective, with many patients having their diagnosis of autism dropped after a few years of treatment. With the advent of ubiquitous technologies such as mobile devices, it has become apparent that these devices could also be used to help tutor autistic children in academic subjects such as reading and math. The delivery of tutoring material, though, must follow Applied Behavior Analysis techniques: ABA therapy is currently the only form of treatment for Autism Spectrum Disorder endorsed by the US Surgeon General [2], which further makes the case for incorporating it into an academic tutoring system tailored for autistic children. In this paper, we present a mobile software system that can be used to tutor children diagnosed with Autism Spectrum Disorder in the subjects of reading and math. The software makes use of Applied Behavior Analysis techniques such as a Token Economy system, visual and audible reinforcers, and generalization. Furthermore, we explore how combining Applied Behavior Analysis and technology could help extend the reach of tutoring systems to these children.
Comment: 8 pages, 7 figures
How Participation in a Peer-Led Writing Center Impacts Struggling Students’ Self-Efficacy and Motivation
Many secondary students struggle with writing, both in terms of skill and confidence. This qualitative case study follows six students who have a history of struggling in English Language Arts class as they undergo a tutoring intervention based on the writing center model of peer tutoring. Students were observed in seven writing sessions, which took place at multiple stages of the writing process and involved informational, narrative, and analytical writing assignments. Through interviews and observation, the researcher examines how students’ self-efficacy and motivation shift over the course of the intervention. Students who began with low self-efficacy and low motivation showed gains in both through the tutoring process; students with high self-efficacy and low motivation did not experience the same positive impact.
Knowledge Elicitation Methods for Affect Modelling in Education
Research on the relationship between affect and cognition in Artificial Intelligence in Education (AIEd) brings an important dimension to our understanding of how learning occurs and how it can be facilitated. Emotions are crucial to learning, but their nature, the conditions under which they occur, and their exact impact on learning for different learners in diverse contexts still need to be mapped out. The study of affect during learning can be challenging, because emotions are subjective, fleeting phenomena that are often difficult for learners to report accurately and for observers to perceive reliably. Context forms an integral part of learners’ affect and of its study. This review provides a synthesis of the current knowledge elicitation methods that are used to aid the study of learners’ affect and to inform the design of intelligent technologies for learning. Advantages and disadvantages of the specific methods are discussed, along with their respective potential for enhancing research in this area and issues related to the interpretation of the data that emerge as a result of their use. References to related research are provided, together with illustrative examples of where the individual methods have been used in the past. This review is therefore intended as a resource for methodological decision making for those who want to study emotions and their antecedents in AIEd contexts, i.e. where the aim is to inform the design and implementation of an intelligent learning environment or to evaluate its use and educational efficacy.
Associating Facial Expressions and Upper-Body Gestures with Learning Tasks for Enhancing Intelligent Tutoring Systems
Learning involves a substantial amount of cognitive, social and emotional states. Therefore, recognizing and understanding these states in the context of learning is key to designing informed interventions and addressing the needs of the individual student to provide personalized education. In this paper, we explore the automatic detection of learners’ nonverbal behaviors, involving hand-over-face gestures, head and eye movements, and emotions via facial expressions, during learning. The proposed computer-vision-based behavior monitoring method uses a low-cost webcam and can easily be integrated with modern tutoring technologies. We investigate these behaviors in depth over time in a 40-minute classroom session involving reading and problem-solving exercises. The exercises in the sessions are divided into three categories: an easy, a medium, and a difficult topic within the context of undergraduate computer science. We found a significant increase in head and eye movements as time progresses, as well as with increasing difficulty level. We demonstrated a considerable occurrence of hand-over-face gestures (on average 21.35%) during the 40-minute session, a behavior that is unexplored in the education domain. We propose a novel deep learning approach for the automatic detection of hand-over-face gestures in images, with a classification accuracy of 86.87%. Hand-over-face gestures increase prominently when the difficulty level of the given exercise increases, and they occur more frequently during problem-solving exercises (easy 23.79%, medium 19.84%, difficult 30.46%) than during reading (easy 16.20%, medium 20.06%, difficult 20.18%).
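The abstract does not specify the network architecture. As one plausible reading of "a novel deep learning approach for the automatic detection of hand-over-face gestures in images", the sketch below fine-tunes a small pretrained CNN for binary classification in PyTorch; the dummy batch stands in for labeled webcam frames, and none of this is the authors' actual model.

    # Sketch of a binary image classifier for hand-over-face gestures,
    # in the spirit of the detector described above. Fine-tuning a small
    # pretrained CNN is one plausible approach; data are placeholders.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Two classes: hand-over-face vs. no occlusion.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # One training step on a dummy batch of 224x224 RGB webcam frames.
    frames = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    print(f"loss: {loss.item():.3f}")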
Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people’s social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted to account for these individuals’ needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprising an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of the intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot which serves the instructional role of a peer for the student. In this tutoring paradigm, the robot adopts a peer metaphor. With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of the intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-target words by observing the instruction given to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants’ affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear, EmotiGO, for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions. EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users’ usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students’ engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students’ engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants’ engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
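The fourth-phase modeling step (physiological indices mapped to coder-rated engagement levels with supervised pattern recognition) can be outlined as follows. This is a generic sketch with synthetic data, standard scaling, and an SVM; the dissertation's actual indices, labels, and algorithms may differ.

    # Generic sketch of supervised engagement classification from
    # physiological indices. Data are synthetic stand-ins for features
    # extracted from EmotiGO signals and for coder-assigned labels.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    # Stand-in for per-window indices (e.g., mean skin conductance,
    # heart-rate statistics, skin temperature slope).
    indices = rng.normal(size=(120, 8))
    # Stand-in for coder ratings (0 = low, 1 = medium, 2 = high).
    engagement = rng.integers(0, 3, size=120)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, indices, engagement, cv=5)
    print(f"mean CV accuracy: {scores.mean():.3f}")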
Learning Opportunities and Challenges of Sensor-enabled Intelligent Tutoring Systems on Mobile Platforms: Benchmarking the Reliability of Mobile Sensors to Track Human Physiological Signals and Behaviors to Enhance Tablet-Based Intelligent Tutoring Systems
Desktop-based intelligent tutoring systems have existed for many decades, but the advancement of mobile computing technologies has sparked interest in developing mobile intelligent tutoring systems (mITS). Personalized mITS are applicable not only to stand-alone and client-server systems but also to cloud systems, possibly leveraging big data. Device-based sensors enable even greater personalization through the capture of physiological signals during periods of student study. However, personalizing mITS to individual students faces challenges. The Achilles heel of personalization is the feasibility and reliability of these sensors in accurately capturing physiological signals and behavioral measures. This research reviews the feasibility and benchmarks the reliability of basic mobile platform sensors in various student postures. The research software and methodology are generalizable to a range of platforms and sensors. Incorporating the tile-based puzzle game 2048 as a substitute for a knowledge domain also enables a broad spectrum of test populations. Baseline sensors include the on-board camera to detect eyes/faces and the Bluetooth Empatica E4 wristband to capture heart rate, electrodermal activity (EDA), and skin temperature. The test population involved 100 collegiate students randomly assigned to one of three different ergonomic positions in a classroom: sitting at a table, standing at a counter, or reclining on a sofa. The sensors were well received by the students, and EDA proved to be more reliable than heart rate or face detection across the three ergonomic positions. Additional insights are provided on advancing learning personalization through future sensor feasibility and reliability studies.
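One of the benchmarked reliability measures, face detection from the on-board camera across postures, can be approximated with off-the-shelf tools. The sketch below computes the fraction of frames in which a face is detected using OpenCV's bundled Haar cascade; the video file names and detection parameters are placeholders, not the study's actual software.

    # Minimal sketch of one reliability measurement: the share of webcam
    # frames in which a face is detected, per ergonomic position. Uses
    # OpenCV's bundled frontal-face Haar cascade; paths are placeholders.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_detection_rate(video_path: str) -> float:
        """Return the share of frames with at least one detected face."""
        cap = cv2.VideoCapture(video_path)
        detected = total = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if len(cascade.detectMultiScale(gray, 1.1, 5)) > 0:
                detected += 1
            total += 1
        cap.release()
        return detected / total if total else 0.0

    # e.g., one recording per ergonomic position (hypothetical files):
    for posture in ["sitting.mp4", "standing.mp4", "reclining.mp4"]:
        print(posture, face_detection_rate(posture))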
South Carolina Educational Interpreting Center Annual Report
Clemson University and its partners at the South Carolina State Department of Education and the South Carolina School for the Deaf and the Blind manage the South Carolina Educational Interpreting Center (SCEIC) at the University Center in Greenville, South Carolina. The SCEIC provides national performance and knowledge assessments, mentoring, and educational opportunities for South Carolina Educational Interpreters. This annual report details the SCEIC outputs and outcomes for Educational Interpreters in the state for the 2017-2018 academic year.