12 research outputs found

    Modality and Timing of Team Feedback: Implications for GIFT

    This paper discusses considerations relevant to the design of team feedback in intelligent tutoring systems (ITSs). While team tutoring is a goal for the Generalized Intelligent Framework for Tutoring (GIFT), further research is needed to explore the focus, modalities, and timing of feedback for teams. Although there have been many studies on feedback, few have examined feedback for teams. This theoretical paper leverages previous research on ITSs, training, individual feedback, and teamwork models to inform decisions about the most effective feedback mechanisms for teams. Teams can achieve goals that are unobtainable by individuals alone, so effective team training is important for supporting performance, and feedback is a key element of training. Feedback guides or motivates individuals based on their past performance: guiding feedback directs an individual toward a desired behavior, while motivational feedback motivates the individual by mentioning future rewards (Ilgen, Fisher & Taylor, 1979). A common theme among existing team studies is whether feedback should be given at the individual or team level (Tindale, 1989). Some studies suggest that team performance is influenced by feedback at the individual level (Berkowitz & Levy, 1956), while others suggest that groups outperform individuals when feedback is given to the entire team after each decision (Tindale, 1989). The purpose of the current paper is to characterize the range of modalities, timing, and focus level of feedback, and who should receive it (i.e., individual vs. team feedback), to assist in the design of feedback for team ITSs. Finally, the implications of team feedback for the design of GIFT are discussed.

    Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare

    Precision Medicine implies a deep understanding of inter-individual differences in health and disease that are due to genetic and environmental factors. To acquire such understanding, there is a need to implement different types of technologies based on artificial intelligence (AI) that enable the identification of biomedically relevant patterns, facilitating progress towards individually tailored preventative and therapeutic interventions. Despite the significant scientific advances achieved so far, most of the currently used biomedical AI technologies do not account for bias detection. Furthermore, the design of the majority of algorithms ignores the sex and gender dimension and its contribution to health and disease differences among individuals. Failure to account for these differences will generate sub-optimal results and produce mistakes as well as discriminatory outcomes. In this review we examine the current sex and gender gaps in a subset of biomedical technologies used in relation to Precision Medicine. In addition, we provide recommendations to optimize their utilization to improve the global health and disease landscape and decrease inequalities. This work is written on behalf of the Women’s Brain Project (WBP) (www.womensbrainproject.com/), an international organization advocating for women’s brain and mental health through scientific research, debate and public engagement. The authors would like to gratefully acknowledge Maria Teresa Ferretti and Nicoletta Iacobacci (WBP) for the scientific advice and insightful discussions; Roberto Confalonieri (Alpha Health) for reviewing the manuscript; and the Bioinfo4Women programme of the Barcelona Supercomputing Center (BSC) for their support.
This work has been supported by the Spanish Government (SEV 2015–0493) and grant PT17/0009/0001, of the Acción Estratégica en Salud 2013–2016 of the Programa Estatal de Investigación Orientada a los Retos de la Sociedad, funded by the Instituto de Salud Carlos III (ISCIII) and European Regional Development Fund (ERDF). EG has received funding from the Innovative Medicines Initiative 2 (IMI2) Joint Undertaking under grant agreement No 116030 (TransQST), which is supported by the European Union’s Horizon 2020 research and innovation programme and the European Federation of Pharmaceutical Industries and Associations (EFPIA). Peer Reviewed. Postprint (published version)

    Modelling e-learner comprehension within a conversational intelligent tutoring system

    Conversational Intelligent Tutoring Systems (CITS) are agent-based e-learning systems which deliver tutorial content through discussion, asking and answering questions, identifying gaps in knowledge and providing feedback in natural language. Personalisation and adaptation for CITS are current research focuses in the field. Classroom studies have shown that experienced human tutors automatically, through experience, estimate a learner’s level of subject comprehension during interactions and modify lesson content, activities and pedagogy in response. This paper introduces Hendrix 2.0, a novel CITS capable of classifying e-learner comprehension in real time from webcam images. Hendrix 2.0 integrates a novel image processing and machine learning algorithm, COMPASS, that rapidly detects a broad range of non-verbal behaviours, producing a time series of comprehension estimates on a scale from -1.0 to +1.0. This paper reports an empirical study of comprehension classification accuracy, during which 51 students at Manchester Metropolitan University undertook conversational tutoring with Hendrix 2.0. The authors evaluate the accuracy of strong comprehension and strong non-comprehension classifications during conversational questioning. The results show that the COMPASS comprehension classifier achieved a normalised classification accuracy of 75%.

    Near real-time comprehension classification with artificial neural networks: decoding e-Learner non-verbal behaviour

    Comprehension is an important cognitive state for learning. Human tutors recognise comprehension and non-comprehension states by interpreting learner non-verbal behaviour (NVB). Experienced tutors adapt pedagogy, materials and instruction to provide additional learning scaffolding in the context of perceived learner comprehension. Near real-time assessment of e-learner comprehension of on-screen information could provide a powerful tool both for adaptation within intelligent e-learning platforms and for appraisal of tutorial content in learning analytics. However, the literature suggests that no existing method for automatic classification of learner comprehension by analysis of NVB provides a practical solution in an e-learning, on-screen context. This paper presents the design, development and evaluation of COMPASS, a novel near real-time comprehension classification system for detecting learner comprehension of on-screen information during e-learning activities. COMPASS uses a novel descriptive analysis of learner behaviour, image processing techniques and artificial neural networks to model and classify authentic comprehension-indicative non-verbal behaviour. This paper presents a study in which 44 undergraduate students answered on-screen multiple-choice questions relating to computer programming. Using a front-facing USB web camera, the behaviour of each learner was recorded during reading and appraisal of on-screen information. The resulting dataset of non-verbal behaviour and question-answer scores was used to train an artificial neural network (ANN) to classify comprehension and non-comprehension states in near real time. The trained comprehension classifier achieved a normalised classification accuracy of 75.8%.
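    The near real-time classification step described above can be pictured with a minimal sketch: given a stream of per-frame comprehension estimates on a -1.0 to +1.0 scale (as in the abstracts above), a sliding window is averaged and thresholded into comprehension / non-comprehension labels. The window size, threshold, and function names here are illustrative assumptions, not details taken from COMPASS itself.

```python
from collections import deque

def classify_stream(estimates, window=5, threshold=0.5):
    """Slide a window over per-frame comprehension estimates in
    [-1.0, +1.0] and emit one label per frame once the window is
    full: 'comprehension' above +threshold, 'non-comprehension'
    below -threshold, 'uncertain' otherwise."""
    buf = deque(maxlen=window)  # keeps only the most recent frames
    labels = []
    for e in estimates:
        buf.append(e)
        if len(buf) < window:
            continue  # not enough history yet
        mean = sum(buf) / window
        if mean > threshold:
            labels.append("comprehension")
        elif mean < -threshold:
            labels.append("non-comprehension")
        else:
            labels.append("uncertain")
    return labels
```

    Smoothing over a window rather than labelling single frames is one simple way to keep momentary expressions (a blink, a glance away) from flipping the classification.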

    Integrating knowledge tracing and item response theory: A tale of two frameworks

    Traditionally, the assessment and learning science communities rely on different paradigms to model student performance. The assessment community uses Item Response Theory (IRT), which models different student abilities and problem difficulties, while the learning science community uses Knowledge Tracing, which captures skill acquisition. These two paradigms are complementary: IRT cannot be used to model student learning, while Knowledge Tracing assumes all students and problems are the same. Recently, two highly related models based on a principled synthesis of IRT and Knowledge Tracing were introduced. However, these two models were evaluated on different data sets, using different evaluation metrics and different ways of splitting the data into training and testing sets. In this paper we reconcile the models' results by presenting a unified view of the two models and by evaluating them under a common evaluation metric. We find that the models are equivalent and differ only in their training procedure. Our results show that the combined IRT and Knowledge Tracing models offer the best of the assessment and learning sciences: high prediction accuracy like IRT, and the ability to model student learning like Knowledge Tracing.
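    The two paradigms being synthesized can be illustrated with a minimal sketch: a one-parameter (Rasch) IRT item model alongside the standard Knowledge Tracing posterior update. The parameter values are chosen purely for illustration; the paper's actual combined models and fitted parameters are not reproduced here.

```python
import math

def irt_p_correct(theta, b):
    """1PL (Rasch) IRT: probability that a student of ability
    theta answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def kt_update(p_known, correct, guess=0.2, slip=0.1, learn=0.15):
    """Standard Knowledge Tracing step: Bayesian posterior on the
    'known' state given the observed response, followed by the
    learning transition. Returns the new P(known)."""
    if correct:
        num = p_known * (1 - slip)
        den = num + (1 - p_known) * guess
    else:
        num = p_known * slip
        den = num + (1 - p_known) * (1 - guess)
    posterior = num / den
    return posterior + (1 - posterior) * learn

# Example: P(known) rises across a run of correct answers,
# which is exactly the learning dynamic IRT alone cannot express.
p = 0.3
for _ in range(3):
    p = kt_update(p, correct=True)
```

    A combined model in the spirit of the paper would let the slip/guess side depend on per-student and per-item IRT parameters instead of fixed constants.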

    Improving Non-Player Character Interaction Using Physiological Data

    Non-player characters (NPCs) in video games have very little information about the player's current state. This disconnect leads to less than ideal interactions between the human and the computer. Measurements of a human's physiological state have been used to drive a wide range of software interactions, such as biofeedback applications. The use of physiological data in games has been very limited, mainly to difficulty adjustments based on stress levels. However, research with virtual agents has shown how useful physiological data can be in human-computer interaction. Based on these findings, this thesis assesses the usefulness of physiological signals for interaction with NPCs. Measurements of skin conductance on the fingers and facial muscle tension serve as a means to estimate the player's emotional state at any given time. This data is then used to adjust the behavior of non-player characters in so far as their dialogue acknowledges the player's emotion. An experimental evaluation of the developed system showed that using a combination of electromyography and electrodermal activity to estimate human emotion affords non-player characters more information about the player, demonstrating the viability of the approach. In the small sample of the evaluation, there was no significant difference in a questionnaire measure of rapport with the NPCs. However, qualitative feedback in the questionnaires showed a clear difference in the perception of the system's use of physiological information. How this information can be used most effectively by non-player characters should be explored further in future research.M.S., Digital Media -- Drexel University, 201
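    A toy sketch of the pipeline the abstract describes: electrodermal activity drives an arousal estimate, facial EMG drives a valence estimate, and the NPC selects a dialogue line that acknowledges the result. All thresholds, baselines, and lines below are invented for illustration and are not taken from the thesis.

```python
def estimate_emotion(scl, emg_frown, emg_smile,
                     scl_baseline=2.0):
    """Illustrative mapping: skin conductance level above an
    assumed baseline -> high arousal; relative activation of
    frown vs. smile muscles -> valence sign."""
    arousal = "high" if scl > scl_baseline else "low"
    valence = "positive" if emg_smile >= emg_frown else "negative"
    return arousal, valence

def npc_line(arousal, valence):
    """Pick a dialogue line that acknowledges the player's state."""
    lines = {
        ("high", "negative"): "You seem tense. Take a breath.",
        ("high", "positive"): "You're fired up! Let's keep going.",
        ("low", "negative"): "Something bothering you?",
        ("low", "positive"): "Glad you're at ease.",
    }
    return lines[(arousal, valence)]
```

    In a real system the raw signals would be filtered and normalized per player before thresholding; fixed baselines like the one assumed here vary widely between individuals.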

    Mitigating User Frustration through Adaptive Feedback based on Human-Automation Etiquette Strategies

    The objective of this study is to investigate the effects of feedback and user frustration in human-computer interaction (HCI) and examine how to mitigate user frustration through feedback based on human-automation etiquette strategies. User frustration in HCI is a negative feeling that occurs when efforts to achieve a goal are impeded. It impacts not only communication with the computer itself, but also productivity, learning, and cognitive workload. Affect-aware systems have been studied to recognize user emotions and respond in different ways; they need to be adaptive systems that change their behavior depending on users’ emotions. Adaptive systems have four categories of adaptation. Previous research has focused primarily on function allocation and, to a lesser extent, information content and task scheduling. The fourth approach, changing interaction styles, is the least explored because of the interplay of human factors considerations. Three interlinked studies were conducted to investigate the consequences of user frustration and explore mitigation techniques. Study 1 showed that delayed feedback from the system led to higher user frustration, anger, cognitive workload, and physiological arousal. In addition, delayed feedback decreased task performance and system usability in a human-robot interaction (HRI) context. Study 2 evaluated a possible approach to mitigating user frustration by applying human-human etiquette strategies in a tutoring context. The results of Study 2 showed that changing etiquette strategies led to changes in performance, motivation, confidence, and satisfaction, and that the most effective etiquette strategies changed when users were frustrated. Based on these results, an adaptive tutoring system prototype was developed and evaluated in Study 3.
By utilizing a rule set derived from Study 2, the tutor was able to use different automation etiquette strategies to target and improve motivation, confidence, satisfaction, and performance under different levels of user frustration. This work establishes that changing the interaction style alone of a computer tutor can affect a user’s motivation, confidence, satisfaction, and performance, and that the beneficial effect of changing etiquette strategies is greater when users are frustrated. It provides a basis for future work on affect-aware adaptive systems that mitigate user frustration.
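    A hypothetical sketch of how such an etiquette rule set might be encoded: the strategy names follow Brown and Levinson's politeness taxonomy, which the human-automation etiquette literature commonly draws on, but the specific rules and phrasings below are invented for illustration and are not the dissertation's actual rule set.

```python
def select_strategy(frustrated, goal):
    """Illustrative rule set: choose an etiquette strategy from
    the user's frustration state and the outcome being targeted."""
    rules = {
        (True, "motivation"): "positive politeness",
        (True, "performance"): "negative politeness",
        (False, "motivation"): "bald on-record",
        (False, "performance"): "bald on-record",
    }
    return rules[(frustrated, goal)]

def render_feedback(strategy):
    """Phrase the same corrective content under each strategy."""
    phrasing = {
        "bald on-record": "That answer is wrong. Try again.",
        "positive politeness": "Good effort! Let's work through it together.",
        "negative politeness": "You might want to revisit step 2.",
    }
    return phrasing[strategy]
```

    The point of the table-driven design is that the tutor's corrective content stays fixed while only the interaction style changes, which mirrors the manipulation the studies describe.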

    The Usefulness of Multi-Sensor Affect Detection on User Experience: An Application of Biometric Measurement Systems on Online Purchasing

    Traditional usability methods in Human-Computer Interaction (HCI) have been used extensively to understand the usability of products. Measurements of user experience (UX) in traditional HCI studies rely mostly on task performance and observable user interactions with the product or service, such as usability tests and contextual inquiry, together with subjective self-report data, including questionnaires and interviews. However, these studies fail to directly reflect a user’s psychological involvement and cannot explain the cognitive processing and related emotional arousal. Thus, capturing how users think and feel when they are using a product remains a vital challenge for user experience evaluation. Conversely, recent research has revealed that sensor-based affect detection technologies, such as eye tracking, electroencephalography (EEG), galvanic skin response (GSR), and facial expression analysis, effectively capture affective states and physiological responses. These methods are efficient indicators of cognitive involvement and emotional arousal and constitute effective strategies for a comprehensive measurement of UX. The literature review shows that the impacts of sensor-based affect detection systems on UX evaluation fall into two groups: (1) confirmatory, validating results obtained from traditional usability methods; and (2) complementary, enhancing findings or providing more precise and valid evidence. Both provide comprehensive findings that uncover issues in mental and physiological pathways to enhance the design of products and services. Therefore, this dissertation claims that it can be efficient to integrate sensor-based affect detection technologies to address the current gaps and weaknesses of traditional usability methods.
The dissertation revealed that the multi-sensor-based UX evaluation approach, using biometric tools and software, corroborated the user experience identified by traditional UX methods during an online purchasing task. The use of these systems enhanced the findings and provided more precise and valid evidence for predicting consumer purchasing preferences; their impact on the overall UX evaluation was therefore “complementary”. The dissertation also described the unique contributions of each tool and recommended ways user experience researchers can combine sensor-based and traditional UX approaches to explain consumer purchasing preferences. Dissertation/Thesis: Doctoral Dissertation, Human Systems Engineering, 201

    Inference of affective states in educational environments: proposal of a hybrid model based on cognitive and physical information

    Advisor: Prof. Dr. Andrey Ricardo Pimentel. Dissertation (master's), Universidade Federal do Paraná, Setor de Ciências Exatas, Programa de Pós-Graduação em Informática. Defense: Curitiba, 11/12/2018. Includes references: p. 119-127. Area of concentration: Computer Science. In the scientific community there is a common understanding that educational software must evolve to ensure more effective support for the learning process. A common limitation of such software is the lack of features for adapting to students' affective reactions. This limitation is relevant because emotions have a direct influence on the learning process. Recognizing students' emotions is the first step toward building affect-sensitive educational software. Related work reports relative success in the task of automatically recognizing students' emotions. However, most studies use impractical, intrusive, and expensive sensors that typically monitor only physical reactions. In the educational context, the set of emotions considered in the recognition process must reflect the singularities of this domain. Thus, in this work inference is performed using a quadrant approach formed by the valence and activation dimensions. These quadrants represent situations relevant to learning and can be used to drive adaptations in the computational environment. This research therefore proposes a hybrid model for inferring students' emotions during the use of educational software. The model's main characteristic is the simultaneous use of information from physical reactions (facial expressions) and cognitive ones (events in the educational software). This approach is grounded in the theoretical perspective that human emotions are strongly related to physical reactions but are also influenced by rational or cognitive processes. Combining facial expressions with information about events in the educational software allows a low-cost, low-intrusiveness solution that is feasible to use at scale in real learning environments. Experiments with students in a real classroom demonstrated the feasibility of the proposal, which is notable given that the approach is little explored in the scientific community and requires the fusion of quite different kinds of information. In these experiments, accuracy and Cohen's kappa close to 66% and 0.55, respectively, were obtained when inferring five emotion classes. Although these results are promising compared with related work, they could be improved in the future by incorporating new data into the proposed model. Keywords: Emotion inference, Learning-related emotion, Affective tutoring, Affective Computing.
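    The quadrant approach described above can be sketched in a few lines: continuous valence and activation estimates are mapped to one of four learning-relevant regions. The quadrant labels below are illustrative names for states commonly discussed in the affective-tutoring literature, not the dissertation's own class names.

```python
def quadrant(valence, activation):
    """Map continuous valence/activation values in [-1, 1] to one
    of the four quadrants of the valence-activation plane."""
    if valence >= 0 and activation >= 0:
        return "engaged"      # positive valence, high activation
    if valence < 0 and activation >= 0:
        return "frustrated"   # negative valence, high activation
    if valence < 0:
        return "bored"        # negative valence, low activation
    return "relaxed"          # positive valence, low activation
```

    Working with quadrants rather than fine-grained emotion labels keeps the classification task tractable while still giving the educational software enough signal to decide when an adaptation is warranted.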