The realities of evaluating educational technology in school settings
HCI researchers are increasingly interested in the evaluation of educational technologies in context, yet acknowledge that challenges remain regarding the logistical, material and methodological constraints of this approach to research [18, 53].
Through analysis of thematic research vignettes contributed by the authors, this paper exposes the practical realities of evaluating educational technologies in school settings. This includes insights into the planning stages of evaluation, the relationship between the researcher and the school environment, and the impact of the school context on the data collection process.
We conclude by providing an orientation for the design of HCI educational technology research undertaken in school contexts, offering guidance such as considering the role of modular research design, clarifying goals and expectations with school partners, and reporting researcher positionality.
Transformer architecture-based transfer learning for politeness prediction in conversation
Politeness is an essential part of conversation. As in verbal communication, politeness also matters in textual conversations and social media posts, making its automatic detection a significant and relevant problem. The existing literature generally employs classical machine learning models, such as naive Bayes and support vector machines, for politeness prediction. This paper instead exploits state-of-the-art (SOTA) transformer architectures and transfer learning for politeness prediction. The proposed model combines the strengths of context-incorporating large language models, a feed-forward neural network, and an attention mechanism for representation learning of natural language requests. The learned representation is then classified into polite, impolite, and neutral classes using a softmax function. We evaluate the presented model using two SOTA pre-trained large language models on two benchmark datasets. Our model outperformed the two SOTA and six baseline models, including two domain-specific transformer-based models, with both the BERT and RoBERTa language models. An ablation study shows that excluding the feed-forward layer has the greatest impact on the presented model, and further analysis identifies batch size and the optimization algorithm as the parameters most affecting model performance.
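The classification head described above (contextual embeddings pooled by an attention mechanism, passed through a feed-forward layer, then a softmax over the three politeness classes) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the hidden size (768, matching BERT-base), the feed-forward width (256), and the random weights and embeddings standing in for a pre-trained language model's outputs are all assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Stand-in for contextual token embeddings from a pre-trained LM
# (e.g. BERT-base's 768-dim hidden states); random here for illustration.
seq_len, hidden = 12, 768
H = rng.normal(size=(seq_len, hidden))

# Attention pooling: score each token against a learned query vector
# and take the attention-weighted sum as the sequence representation.
q = rng.normal(size=hidden)
alpha = softmax(H @ q)                # (seq_len,) attention weights
pooled = alpha @ H                    # (hidden,) pooled representation

# Feed-forward layer (the component the ablation found most impactful).
W1, b1 = rng.normal(size=(hidden, 256)) * 0.02, np.zeros(256)
ff = np.maximum(pooled @ W1 + b1, 0)  # ReLU activation

# Softmax classifier over the three politeness classes.
W2, b2 = rng.normal(size=(256, 3)) * 0.02, np.zeros(3)
probs = softmax(ff @ W2 + b2)
classes = ["polite", "impolite", "neutral"]
prediction = classes[int(probs.argmax())]
```

In a trained system the weights would be learned end-to-end and `H` would come from the fine-tuned BERT or RoBERTa encoder; the sketch only shows how the pieces named in the abstract compose.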