An Investigation into the Pedagogical Features of Documents
Characterizing the content of a technical document in terms of its learning
utility can be useful for applications related to education, such as generating
reading lists from large collections of documents. We refer to this learning
utility as the "pedagogical value" of the document to the learner. While
pedagogical value is an important concept that has been studied extensively
within the education domain, there has been little work exploring it from a
computational, i.e., natural language processing (NLP), perspective. To allow a
computational exploration of this concept, we introduce the notion of
"pedagogical roles" of documents (e.g., Tutorial and Survey) as an intermediary
component for the study of pedagogical value. Given the lack of available
corpora for our exploration, we create the first annotated corpus of
pedagogical roles and use it to test baseline techniques for automatic
prediction of such roles.
Comment: 12th Workshop on Innovative Use of NLP for Building Educational Applications (BEA) at EMNLP 2017; 12 pages
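As a rough illustration of what automatic prediction of pedagogical roles could look like, here is a minimal nearest-centroid baseline in pure Python. The role labels, toy documents, and function names are invented for this sketch; it is not the paper's actual baseline technique.

```python
from collections import Counter
import math

def bow(text):
    """Lowercased bag-of-words counts for a document."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_role(doc, labelled_docs):
    """Nearest-centroid prediction: assign the pedagogical role whose
    training documents are, in aggregate, most similar to `doc`."""
    centroids = {}
    for role, texts in labelled_docs.items():
        c = Counter()
        for t in texts:
            c.update(bow(t))
        centroids[role] = c
    return max(centroids, key=lambda r: cosine(bow(doc), centroids[r]))

corpus = {
    "Tutorial": ["step by step guide to training a parser",
                 "hands-on walkthrough with worked examples"],
    "Survey":   ["a broad overview of existing approaches",
                 "we review and compare prior work in the field"],
}
print(predict_role("an overview comparing prior approaches", corpus))  # Survey
```

A real system would replace the toy corpus with the annotated corpus of pedagogical roles and the bag-of-words centroids with a trained classifier.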
Practical and Ethical Challenges of Large Language Models in Education: A Systematic Scoping Review
Educational technology innovations leveraging large language models (LLMs)
have shown the potential to automate the laborious process of generating and
analysing textual content. While various innovations have been developed to
automate a range of educational tasks (e.g., question generation, feedback
provision, and essay grading), there are concerns regarding the practicality
and ethicality of these innovations. Such concerns may hinder future research
and the adoption of LLMs-based innovations in authentic educational contexts.
To address this, we conducted a systematic scoping review of 118 peer-reviewed
papers published since 2017 to pinpoint the current state of research on using
LLMs to automate and support educational tasks. The findings revealed 53 use
cases for LLMs in automating education tasks, categorised into nine main
categories: profiling/labelling, detection, grading, teaching support,
prediction, knowledge representation, feedback, content generation, and
recommendation. Additionally, we also identified several practical and ethical
challenges, including low technological readiness, lack of replicability and
transparency, and insufficient privacy and beneficence considerations. The
findings were summarised into three recommendations for future studies,
including updating existing innovations with state-of-the-art models (e.g.,
GPT-3/4), embracing the initiative of open-sourcing models/systems, and
adopting a human-centred approach throughout the developmental process. As the
intersection of AI and education is continuously evolving, the findings of this
study can serve as an essential reference point for researchers, allowing them
to leverage the strengths, learn from the limitations, and uncover potential
research opportunities enabled by ChatGPT and other generative AI models.
Monitoring Conceptual Development: Design Considerations of a Formative Feedback Tool
This paper presents the design considerations of a tool aimed at providing formative feedback. The tool uses Latent Semantic Analysis to automatically generate reference models and provide learners with a means to compare their conceptual development against these models. The design of the tool considers a theoretical background which combines research on expertise development, knowledge creation, and conceptual development assessment. The paper also illustrates how the tool will work using a problem and solution scenario, and presents initial validation results. Finally, the paper draws conclusions and outlines future work.
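As a rough illustration of the LSA machinery such a tool relies on, the following sketch builds a latent space from a tiny term-document matrix of reference documents and folds a learner text into it for comparison. It assumes NumPy is available; the terms, counts, and dimensionality are invented for the example, not taken from the paper.

```python
import numpy as np

# Tiny term-document matrix: rows = terms, columns = expert reference documents.
terms = ["force", "mass", "acceleration", "energy", "momentum"]
docs = np.array([
    [2, 0],   # counts of "force" in expert docs 1 and 2
    [1, 1],
    [1, 0],
    [0, 2],
    [0, 1],
], dtype=float)

# LSA core step: truncated SVD of the term-document matrix.
U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2                         # number of latent dimensions to keep
Uk, sk = U[:, :k], s[:k]

def project(term_counts):
    """Fold a new (e.g. learner-written) document into the latent space."""
    return (term_counts @ Uk) / sk

def similarity(a, b):
    """Cosine similarity between two latent-space vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

reference = project(docs.sum(axis=1))               # combined expert model
learner = project(np.array([1., 1., 1., 0., 0.]))   # learner text term counts
print(round(similarity(reference, learner), 2))
```

The similarity score between the learner projection and the reference model is the kind of signal the tool could turn into formative feedback on conceptual development.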
Pedagogically-driven Ontology Network for Conceptualizing the e-Learning Assessment Domain
The use of ontologies as tools to guide the generation, organization, and personalization of e-learning content, including e-assessment, has drawn the attention of researchers because ontologies can represent the knowledge of a given domain and support reasoning about it. Although the use of these semantic technologies tends to enhance technology-based educational processes, the lack of validation of their benefit for the quality of learning makes educators reluctant to adopt them. This paper presents progress in the development of an ontology network, called AONet, that conceptualizes the e-assessment domain with the aim of supporting the semi-automatic generation of assessments, taking into account not only technical aspects but also pedagogical ones.
Affiliations: Romero, Lucila (Universidad Nacional del Litoral, Argentina); North, Matthew (The College of Idaho, United States); Gutierrez, Milagros (Universidad Tecnológica Nacional, Facultad Regional Santa Fe, Centro de Investigación y Desarrollo de Ingeniería en Sistemas de Información, Argentina); Caliusco, Maria Laura (Universidad Tecnológica Nacional, Facultad Regional Santa Fe, Centro de Investigación y Desarrollo de Ingeniería en Sistemas de Información, Argentina; Consejo Nacional de Investigaciones Científicas y Técnicas, Centro Científico Tecnológico Conicet - Santa Fe, Argentina)
Using Argument-based Features to Predict and Analyse Review Helpfulness
We study the problem of identifying helpful product reviews. We observe that evidence-conclusion discourse relations, also known as arguments, often appear in product reviews, and we hypothesise that some argument-based features, e.g. the percentage of argumentative sentences and the evidence-to-conclusion ratio, are good indicators of helpful reviews. To validate this hypothesis, we manually annotate arguments in 110 hotel reviews and investigate the effectiveness of several combinations of argument-based features. Experiments suggest that, when used together with the argument-based features, the state-of-the-art baseline features enjoy a performance boost (in terms of F1) of 11.01% on average.
Comment: 6 pages, EMNLP 2017
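The two named features are straightforward to compute once sentences carry argument labels. A minimal sketch in Python follows; the review sentences, labels, and function name are invented for illustration, and the exact feature definitions in the paper may differ.

```python
# Each review is a list of (sentence, label) pairs, where the label marks a
# sentence as "evidence", "conclusion", or "none" (non-argumentative).
def argument_features(labelled_sentences):
    """Compute two hypothesised helpfulness indicators: the fraction of
    argumentative sentences and the evidence-to-conclusion ratio."""
    n = len(labelled_sentences)
    evidence = sum(1 for _, lab in labelled_sentences if lab == "evidence")
    conclusion = sum(1 for _, lab in labelled_sentences if lab == "conclusion")
    pct_argumentative = (evidence + conclusion) / n if n else 0.0
    ev_to_concl = evidence / conclusion if conclusion else float(evidence)
    return {"pct_argumentative": pct_argumentative,
            "evidence_conclusion_ratio": ev_to_concl}

review = [
    ("The room was spotless and the staff greeted us by name.", "evidence"),
    ("Breakfast ran until 11am.", "evidence"),
    ("Overall, this hotel is well worth the price.", "conclusion"),
    ("We stayed two nights in March.", "none"),
]
print(argument_features(review))
# → {'pct_argumentative': 0.75, 'evidence_conclusion_ratio': 2.0}
```

Feature vectors like these would then be concatenated with the baseline features before training a helpfulness classifier.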
D3.1 – First Bundle of Core Social Agency Assets
This deliverable presents and describes the first delivery of assets that are part of the core social agency bundle. In total, the bundle includes 16 assets, divided into 4 main categories. Each category is related to a type of challenge that developers of applied games are typically faced with and the aim of the included assets is to provide solutions to those challenges.
The main goal of this document is to provide the reader with a description of each included asset, accompanied by links to their source code, distributable versions, demonstrations and documentation. A short discussion of the future steps for each asset is also given.
The primary audience for the contents of this deliverable is game developers, both inside and outside of the project, who can use this document as an official list of the current social agency assets and their associated resources. Note that the information about which RAGE use cases are using which of these assets is described in Deliverable 4.2. This study is part of the RAGE project. The RAGE project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 644187. This publication reflects only the author's view. The European Commission is not responsible for any use that may be made of the information it contains.