
    Group Interaction Frontiers in Technology

    Over the last decade, the study of group behavior for multimodal interaction technologies has increased. However, we believe that despite its potential benefits to society, there could be more activity in this area. The aim of this workshop is to create a forum for more interdisciplinary dialogue on this topic and thereby accelerate growth in the field. The workshop has been very successful in attracting submissions addressing important facets in the context of technologies for analyzing and aiding groups. This paper provides a summary of the activities of the workshop and the accepted papers.

    Interpreting Models of Social Group Interactions in Meetings with Probabilistic Model Checking

    A major challenge in Computational Social Science is modelling and explaining the temporal dynamics of human communication. Understanding small group interactions can help shed light on sociological and social psychological questions relating to human communication. Previous work showed how Markov reward models can be used to analyse group interaction in meetings. We explore the potential of these models further by formulating queries over interactions as probabilistic temporal logic properties and analysing them with probabilistic model checking. For this study, we analyse a dataset taken from a standard corpus of scenario and non-scenario meetings and demonstrate the expressiveness of our approach in validating expected interactions and identifying patterns of interest.
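    Such queries are typically reachability or reward properties checked against a Markov model. As a rough illustration of one kind of query, a bounded-reachability property of the form P=? [ F<=k "silence" ], the sketch below hand-computes the probability on a hypothetical three-state chain of meeting interaction modes. The state names, transition probabilities, and horizon are assumptions for illustration only; the paper's models and its model-checking toolchain (typically a dedicated probabilistic model checker such as PRISM) are not reproduced here.

```python
# Minimal sketch: bounded reachability P=? [ F<=k "silence" ] on an assumed
# 3-state Markov chain of interaction modes (illustrative values only).
import numpy as np

states = ["discussion", "monologue", "silence"]
# P[i][j] = assumed probability of moving from state i to state j in one step
P = np.array([
    [0.70, 0.20, 0.10],   # discussion -> ...
    [0.30, 0.60, 0.10],   # monologue  -> ...
    [0.25, 0.15, 0.60],   # silence    -> ...
])

def prob_reach_within(P, target, start, k):
    """Probability of visiting `target` within k steps when starting in `start`."""
    P_abs = P.copy()
    P_abs[target, :] = 0.0        # make the target state absorbing
    P_abs[target, target] = 1.0
    dist = np.zeros(len(P))
    dist[start] = 1.0
    for _ in range(k):
        dist = dist @ P_abs       # one step of the absorbing chain
    return dist[target]           # mass absorbed at the target = reachability prob.

start, target = states.index("discussion"), states.index("silence")
for k in (1, 5, 10):
    p = prob_reach_within(P, target, start, k)
    print(f"P[F<={k} silence | start=discussion] = {p:.3f}")
```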

    Predicting Students’ Course Performance Based on Learners’ Characteristics via Fuzzy Modelling Approach

    Frequent assessment allows instructors to ensure students have met the course learning objectives. Due to the lack of instructor-student interaction, most assessment feedback and early interventions are not carried out in large classes. This study proposes a new way of assessing student course performance using a fuzzy modelling approach. The typical steps in designing a fuzzy expert system are followed: specifying the problem, determining linguistic variables, defining fuzzy sets, and obtaining and constructing fuzzy rules. An educational expert is interviewed to define the relationship between the factors and student course performance. These steps help to determine the ranges of the fuzzy sets and the fuzzy rules used in fuzzy reasoning. After the fuzzy assessment system has been built, it is used to compute the course performance of the students. The subject expert is asked to validate and verify the system's performance. Findings show that the developed system provides a faster and more effective way for instructors to assess the course performance of students in large classes. However, in this study, the system is developed from 150 historical student records and only six factors related to course performance are considered. Considering more historical student data and adding more factors as variables is expected to increase the accuracy of the system.
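    To make the fuzzy-reasoning steps concrete, the sketch below shows a minimal Mamdani-style inference for a single hypothetical factor (attendance) mapped to a crisp performance score via triangular memberships, a three-rule base, and centroid defuzzification. The factor, membership ranges, and rules are illustrative assumptions, not the expert-derived rule base or the six factors used in the study.

```python
# Minimal sketch of Mamdani-style fuzzy inference for ONE assumed factor.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (shoulders allowed when a==b or b==c)."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a) if b > a else np.ones_like(x)
    right = (c - x) / (c - b) if c > b else np.ones_like(x)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def assess(attendance_pct):
    """Map one input factor (attendance %) to a crisp course-performance score (0-100)."""
    # Fuzzify the input into three linguistic terms (assumed ranges)
    low = tri(attendance_pct, 0, 0, 50)
    med = tri(attendance_pct, 25, 50, 75)
    high = tri(attendance_pct, 50, 100, 100)

    perf = np.linspace(0, 100, 101)                  # universe of the output variable
    # Assumed rule base: low -> poor, medium -> average, high -> good (clip implication)
    poor    = np.minimum(low,  tri(perf, 0, 0, 50))
    average = np.minimum(med,  tri(perf, 25, 50, 75))
    good    = np.minimum(high, tri(perf, 50, 100, 100))

    agg = np.maximum.reduce([poor, average, good])   # aggregate rule outputs
    return float((perf * agg).sum() / agg.sum())     # centroid defuzzification

for a in (30, 60, 90):
    print(f"attendance={a}% -> predicted performance ~ {assess(a):.1f}")
```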

    Smart Learning Environments and Ergonomics: An Approach to the State of the Art

    Educational technology is continually transformed by the innovative technologies we adopt, but always with a view to improving teaching and learning. In this context, Smart Learning Environments (SLE) emerge as an optimal alternative to traditional teaching because, through ergonomics, they provide an inclusive outlook that enhances the educational experience of every student. The principal objective of the present work is to analyse the state of the art in relation to ergonomics, inclusiveness and SLE. The method used is a systematic review of the existing literature, which allowed us to analyse in depth a final sample of 19 documents drawn from an initial review of 633, all of them published between 2013 and 2019. The analysis of results is performed using a semantic network generated with atlas.ti v.8, from which 3 categories, 10 codes and 33 quotations are extracted. The results reveal the emerging nature of this line of research and show that ergonomics is linked to inclusiveness and stands out as one of the defining components when designing an educational proposal based on SLE.

    Inspecting Spoken Language Understanding from Kids for Basic Math Learning at Home

    Enriching the quality of early childhood education with interactive at-home math learning systems, empowered by recent advances in conversational AI technologies, is slowly becoming a reality. With this motivation, we implement a multimodal dialogue system to support play-based learning experiences at home, guiding kids to master basic math concepts. This work explores the Spoken Language Understanding (SLU) pipeline within a task-oriented dialogue system developed for Kid Space, with cascading Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU) components evaluated on our home deployment data with kids going through gamified math learning activities. We validate the advantages of a multi-task architecture for NLU and experiment with a diverse set of pretrained language representations for Intent Recognition and Entity Extraction tasks in the math learning domain. To recognize kids' speech in realistic home environments, we investigate several ASR systems, including the commercial Google Cloud and the latest open-source Whisper solutions with varying model sizes. We evaluate the SLU pipeline by testing our best-performing NLU models on noisy ASR output to inspect the challenges of understanding children for math learning in authentic homes.
    Comment: Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA) at ACL 202
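    As a rough sketch of such a cascading SLU pipeline, the snippet below pairs the open-source Whisper package for ASR with a toy keyword-based intent and entity step standing in for the paper's multi-task NLU models. The audio file name, intent labels, and matching rules are hypothetical, and the Kid Space models and data are not reproduced.

```python
# Minimal sketch of a cascading ASR -> NLU pipeline, assuming the open-source
# openai-whisper package for ASR and a placeholder keyword-based NLU step.
import re
import whisper  # pip install openai-whisper

def transcribe(audio_path: str, model_size: str = "base") -> str:
    """Run Whisper ASR on a local audio file and return the transcript."""
    model = whisper.load_model(model_size)
    result = model.transcribe(audio_path)
    return result["text"].strip()

def recognize_intent(utterance: str) -> tuple[str, dict]:
    """Toy keyword-based intent recognition and entity extraction (placeholder NLU)."""
    text = utterance.lower()
    numbers = re.findall(r"\b\d+\b", text)          # crude numeric entity extraction
    if "plus" in text or "add" in text:
        return "AnswerAddition", {"numbers": numbers}
    if "help" in text or "hint" in text:
        return "RequestHint", {}
    return "OutOfScope", {"numbers": numbers}

if __name__ == "__main__":
    transcript = transcribe("kid_utterance.wav")    # hypothetical recording
    intent, entities = recognize_intent(transcript)
    print(f"ASR: {transcript!r}\nIntent: {intent}, Entities: {entities}")
```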

    Automatically Detecting Confusion and Conflict During Collaborative Learning Using Linguistic, Prosodic, and Facial Cues

    During collaborative learning, confusion and conflict emerge naturally. However, persistent confusion or conflict has the potential to generate frustration and significantly impede learners' performance. Early automatic detection of confusion and conflict would allow us to support early interventions, which can in turn improve students' experience with and outcomes from collaborative learning. Despite extensive studies modeling confusion during solo learning, further work is needed for collaborative learning. This paper presents a multimodal machine-learning framework that automatically detects confusion and conflict during collaborative learning. We used data from 38 elementary school learners who collaborated on a series of programming tasks in classrooms. We trained deep multimodal learning models to detect confusion and conflict using features automatically extracted from learners' collaborative dialogues: (1) language-derived features including TF-IDF, lexical semantics, and sentiment; (2) audio-derived features including acoustic-prosodic features; and (3) video-derived features including eye gaze, head pose, and facial expressions. Our results show that multimodal models combining semantics, pitch, and facial expressions detected confusion and conflict with the highest accuracy, outperforming all unimodal models. We also found that prosodic cues are more predictive of conflict, while facial cues are more predictive of confusion. This study contributes to the automated modeling of collaborative learning processes and the development of real-time adaptive support to enhance learners' collaborative learning experience in classroom contexts.
    Comment: 27 pages, 7 figures, 7 tables
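    As a hedged illustration of combining text, prosodic, and facial features in a single classifier, the sketch below concatenates TF-IDF text vectors with small numeric feature blocks and fits a shallow scikit-learn model on a toy dataset. The example utterances, feature names, labels, and the logistic-regression classifier are assumptions for illustration, not the paper's deep multimodal models or automatically extracted features.

```python
# Minimal sketch of feature-concatenation fusion across three modalities,
# assuming scikit-learn; all data below is a tiny illustrative toy set.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Per-utterance toy examples: transcript, [mean_pitch_hz, intensity_db], [brow_raise, gaze_on_partner]
texts = ["i don't get why this loop stops", "no, my way is right, change it back",
         "wait, what does this block even do", "we should just use your version"]
prosody = np.array([[210.0, 58.0], [245.0, 66.0], [220.0, 57.0], [190.0, 55.0]])
facial = np.array([[0.8, 0.2], [0.1, 0.9], [0.7, 0.3], [0.2, 0.8]])
labels = ["confusion", "conflict", "confusion", "neither"]

# Fuse modalities by concatenating text TF-IDF with the numeric feature blocks
vectorizer = TfidfVectorizer()
X = hstack([vectorizer.fit_transform(texts), csr_matrix(prosody), csr_matrix(facial)])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Classify a new utterance with its (assumed) prosodic and facial measurements
new_X = hstack([vectorizer.transform(["hold on, i'm confused about this part"]),
                csr_matrix([[215.0, 56.0]]), csr_matrix([[0.9, 0.1]])])
print(clf.predict(new_X))   # e.g. ['confusion'] on this toy data
```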