Beyond recommendation acceptance: explanation’s learning effects in a math recommender system
Recommender systems can provide personalized learning advice to individual students. Providing explanations for those recommendations is expected to increase the transparency and persuasiveness of the system, thus improving students' adoption of the recommendations. Little research has explored the explanations' practical effects on learning performance beyond the acceptance of recommended learning activities. Recommendation explanations can improve learning performance if they are designed to contribute to relevant learning skills. This study conducted a comparative experiment (N = 276) in high school classrooms to investigate whether the use of an explainable math recommender system improves students' learning performance. We found that the presence of explanations had positive effects on students' learning improvement and their perceptions of the system, but not on the number of quizzes solved during the learning task. These results suggest that recommendation explanations may affect students' meta-cognitive skills and perceptions, which in turn contribute to their learning improvement. When separating students by prior math ability, we found a significant correlation between the number of viewed recommendations and final learning improvement for students with lower math abilities. This indicates that students with lower math abilities may benefit from reading about their learning progress as indicated in the explanations. For students with higher math abilities, learning improvement was more related to the behavior of selecting and solving recommended quizzes, which indicates the need for a more sophisticated and interactive recommender system.
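The subgroup analysis described above reduces to correlating an engagement count with learning gain within each prior-ability band. A minimal sketch in Python, with placeholder data and hypothetical column names (`prior_ability`, `views`, `gain`), since the abstract does not publish the study's actual schema:

```python
# Sketch: per-ability-band Pearson correlation between the number of
# viewed recommendations and learning gain. All values are placeholders.
import pandas as pd
from scipy.stats import pearsonr

df = pd.DataFrame({
    "prior_ability": ["low", "low", "low", "high", "high", "high"],
    "views": [3, 7, 12, 5, 9, 14],            # recommendations viewed (placeholder)
    "gain": [0.1, 0.3, 0.6, 0.2, 0.5, 0.4],   # pre/post improvement (placeholder)
})

for group, sub in df.groupby("prior_ability"):
    r, p = pearsonr(sub["views"], sub["gain"])
    print(f"{group}: r = {r:.2f}, p = {p:.3f} (n = {len(sub)})")
```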
An Automatic Self-explanation Sample Answer Generation with Knowledge Components in a Math Quiz
Part of the Lecture Notes in Computer Science book series (LNCS, volume 13356)
Little research has addressed how systems can use the learning process of self-explanation to provide scaffolding or feedback. Here, we propose a model that automatically generates sample self-explanations containing the knowledge components required to solve a math quiz. The proposed model comprises three steps: vectorization, clustering, and extraction. In an experiment using 1,434 self-explanation answers from 25 quizzes, we found that for 72% of the quizzes the model generated sample answers with all necessary knowledge components. For the best-performing generation model, the BERTScore similarity between human-created and machine-generated sentences was 0.719, with a significant correlation of R = 0.48. These results suggest that our model can generate sample answers with the necessary key knowledge components and can be further improved by using BERTScore.
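To make the three-step model concrete, here is a minimal sketch assuming TF-IDF vectorization and k-means clustering; the abstract does not name the exact vectorizer or clustering algorithm, so both are stand-ins:

```python
# Sketch of the three-step model: vectorization, clustering, extraction.
# TF-IDF and k-means are assumptions; the paper may use other components.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def extract_representatives(answers: list[str], n_clusters: int = 3) -> list[str]:
    # 1) Vectorization: turn free-text self-explanations into vectors.
    vec = TfidfVectorizer()
    X = vec.fit_transform(answers)

    # 2) Clustering: group answers that explain similar knowledge components.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)

    # 3) Extraction: pick the answer nearest each centroid as the sample.
    reps = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
        reps.append(answers[idx[np.argmin(dists)]])
    return reps
```

Concatenating one representative per cluster would then yield a sample answer covering each group of knowledge components.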
Unsupervised techniques for generating a standard sample self-explanation answer with knowledge components in a math quiz
Self-explanation is a widely recognized and effective pedagogical method. Previous research has indicated that self-explanation can be used to evaluate students' comprehension and identify their areas of difficulty on mathematical quizzes. However, most analytical techniques require pre-labeled materials, which limits the potential for large-scale study. Conversely, using collected self-explanations without supervision is challenging because there is little research on the topic. This study therefore investigates the feasibility of automatically generating a standardized self-explanation sample answer from self-explanations collected without supervision. The proposed model involves preprocessing and three machine learning steps: vectorization, clustering, and extraction. Experiments involving 1,434 self-explanation answers from 25 quizzes indicate that 72% of the quizzes generate sample answers containing all the necessary knowledge components. The similarity between human-generated and machine-generated sentences was significant, with a moderate positive correlation, r(23) = .48, p < .05. The best-performing generative model also achieved a high BERTScore of 0.715. Regarding the readability of the generated sample answers, the average score of the human-generated sentences was superior to that of the machine-generated ones. These results suggest that the proposed model can generate sample answers that contain critical knowledge components and can be further improved with BERTScore. This study is expected to have numerous applications, including identifying students' areas of difficulty, scoring self-explanations, presenting students with reference materials for learning, and automatically generating scaffolding templates to train self-explanation skills.
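The BERTScore comparison reported above can be computed with the open-source `bert-score` package. A minimal sketch, assuming Japanese-language answers (the deployment context of this line of work) and placeholder strings:

```python
# Sketch: scoring machine-generated sample answers against human-written
# references with BERTScore (https://github.com/Tiiiger/bert_score).
from bert_score import score

generated = ["生成されたサンプル解答 ..."]    # machine-generated answers (placeholder)
references = ["人手で作成した模範解答 ..."]   # human-written references (placeholder)

# lang="ja" selects a pretrained model appropriate for Japanese text.
P, R, F1 = score(generated, references, lang="ja")
print(f"mean BERTScore F1 = {F1.mean().item():.3f}")
```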
Automated labeling of PDF mathematical exercises with word N-grams VSM classification
In recent years, smart learning environments have become central to modern education, supporting students and instructors through tools based on prediction and recommendation models. These methods often use learning-material metadata, such as the knowledge contained in an exercise, which is usually labeled by domain experts; this is costly and difficult to scale. Automated labeling can ease the workload on experts, as shown in previous studies that applied automatic classification algorithms to research papers and Japanese mathematical exercises. However, those studies did not address fine-grained labeling. Moreover, as the use of materials in such systems becomes more widespread, paper materials are converted into PDF format, which can lead to incomplete text extraction, and previous research has paid little attention to labeling the resulting incomplete mathematical sentences. This study aims to achieve precise automated classification even from incomplete text inputs. To tackle these challenges, we propose a mathematical exercise labeling algorithm that can handle detailed labels, even for incomplete sentences, using word n-grams, and compare it against a state-of-the-art word embedding method. The experimental results show that mono-gram features with Random Forest models achieved the best performance, with macro F-measures of 92.50% and 61.28% on the 24-class and 297-class labeling tasks, respectively. The contribution of this research is to show that the proposed method, based on traditional simple n-grams, can find context-independent similarities in incomplete sentences and outperforms state-of-the-art word embedding methods on specific tasks such as classifying short and incomplete texts.
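To illustrate the word n-gram VSM approach, a minimal sketch using scikit-learn with mono-gram count features and a Random Forest, matching the best-reported configuration; the texts, labels, and hyperparameters are placeholders:

```python
# Sketch: word mono-gram vector space model + Random Forest labeling.
# Placeholder data; real inputs are exercise texts extracted from PDFs,
# possibly with incomplete sentences.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

texts = ["solve the quadratic equation x^2 - 5x + 6 = 0",
         "differentiate f(x) = sin x"]
labels = ["quadratic_equations", "differentiation"]

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 1)),   # word mono-grams as VSM features
    RandomForestClassifier(n_estimators=100, random_state=0),
)
clf.fit(texts, labels)
print(clf.predict(["factor x^2 - 5x + 6"]))
```

Because count-based mono-grams treat each word independently, a truncated sentence still contributes its surviving words as features, which is the intuition behind the method's robustness to incomplete extraction.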
EXAIT: Educational eXplainable Artificial Intelligent Tools for personalized learning
As artificial intelligence systems increasingly make high-stakes recommendations and decisions automatically in many facets of our lives, the use of explainable artificial intelligence to inform stakeholders of the reasons behind such systems has been gaining attention in a wide range of fields, including education. The field of education also has a long history of research into self-explanation, in which students explain the process behind their answers. This is recognized as a beneficial intervention for promoting metacognitive skills; however, there is also unexplored potential to gain insight into the problems learners experience due to inadequate prerequisite knowledge and skills, or in the process of applying them to the task at hand. While this aspect of self-explanation has been of interest to teachers, there is little research into using such information to inform educational AI systems. In this paper, we propose a system in which students and the AI system explain to each other the reasons behind their decisions: self-explanation of student cognition during the answering process, and explanation of recommendations based on internal mechanisms and other abstract representations of model algorithms.
Enhancing Automated Scoring of Math Self-Explanation Quality Using LLM-Generated Datasets: A Semi-Supervised Approach
In the realm of mathematics education, self-explanation stands as a crucial learning mechanism, allowing learners to articulate their comprehension of intricate mathematical concepts and strategies. As digital learning platforms grow in prominence, there are mounting opportunities to collect and utilize mathematical self-explanations. However, these opportunities are met with challenges in automated evaluation. Automatic scoring of mathematical self-explanations is crucial for preprocessing tasks, including the categorization of learner responses, identification of common misconceptions, and creation of tailored feedback and model solutions. Nevertheless, this task is hindered by the dearth of ample sample sets. Our research introduces a semi-supervised technique that uses a large language model (LLM), specifically its Japanese variant, to enrich datasets for the automated scoring of mathematical self-explanations. We rigorously evaluated the quality of self-explanations across five datasets, ranging from human-evaluated originals to ones devoid of original content. Our results show that combining LLM-generated explanations with mathematical material significantly improves the model's accuracy. Interestingly, there is an optimal limit to how much synthetic self-explanation data can benefit the system; exceeding this limit does not further improve outcomes. This study thus highlights the need for careful consideration when integrating synthetic data into solutions, especially within the mathematics discipline.
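The semi-supervised step amounts to mixing a small human-scored set with LLM-generated, pseudo-labeled self-explanations before training a scorer. A minimal sketch under stated assumptions: placeholder data, a hypothetical 1-5 quality scale, TF-IDF features, and a cap on the synthetic portion reflecting the reported optimal limit:

```python
# Sketch: augment a small human-scored training set with LLM-generated,
# pseudo-labeled self-explanations, then train a simple score regressor.
# All data below are placeholders for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline

human_texts = ["I factored the quadratic, then set each factor to zero.",
               "I guessed the answer."]
human_scores = [5.0, 1.0]        # hypothetical 1-5 quality scale

llm_texts = ["First move all terms to one side, then factor and solve."]
llm_scores = [4.0]               # pseudo-labels attached to synthetic data

# Cap the synthetic portion: the study reports an optimal limit beyond
# which additional synthetic data stops improving outcomes.
cap = 2 * len(human_texts)
texts = human_texts + llm_texts[:cap]
scores = human_scores + llm_scores[:cap]

scorer = make_pipeline(TfidfVectorizer(), RandomForestRegressor(random_state=0))
scorer.fit(texts, scores)
print(scorer.predict(["I solved by factoring the equation."]))
```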
In vivo tracking of transplanted macrophages with near infrared fluorescent dye reveals temporal distribution and specific homing in the liver that can be perturbed by clodronate liposomes.
Macrophages play an indispensable role in both innate and acquired immunity, yet the persistence of activated macrophages can sometimes harm the host, resulting in multi-organ damage. Macrophages develop from monocytes in the circulation; however, little is known about the organ affinity of macrophages in the normal state. Using in vivo imaging with XenoLight DiR®, we observed that macrophages showed strong affinity for the liver, spleen, and lung; weak affinity for the gut and bone marrow; and little or no affinity for the kidney and skin. We also found that administered macrophages were still alive 168 hours after injection. In contrast, treatment with clodronate liposomes, which are readily taken up by macrophages via phagocytosis, strongly reduced the number of macrophages in the liver, spleen, and lung.