An architecture for the automated detection of textual indicators of reflection
Manual annotation of evidence of reflection expressed in texts is time-consuming, especially as fine-grained models of reflection require extensive training of coders; without such training, inter-coder reliability is low. Automated reflection detection offers a solution to this problem. This paper proposes a new basic architecture for detecting evidence of reflection that allows written accounts to be marked up automatically for certain observable elements of reflection. Furthermore, three promising example annotators of elements of reflection are identified, implemented, and demonstrated: detecting reflective keywords, premises and conclusions of arguments, and questions. Automated detection of reflection has the potential to support learning with technology on at least three levels: it can foster awareness of the reflectivity of one's own writing, it can help in becoming aware of the reflective writing of others, and it can make visible the reflective writing of learning networks as a whole.
Keywords of written reflection - a comparison between reflective and descriptive datasets
This study investigates reflection keywords by contrasting two datasets, one of reflective sentences and another of descriptive sentences. The log-likelihood statistic reveals several reflection keywords, which are discussed in the context of a model of reflective writing. These keywords are seen as a useful building block for tools that can automatically analyse reflection in texts.
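The log-likelihood comparison described in this abstract can be sketched in a few lines. The following is a minimal illustration using Dunning's log-likelihood (G2) statistic to score how strongly each word distinguishes two small corpora; the function names and the toy sentences are illustrative, not taken from the study's datasets.

```python
import math
from collections import Counter

def log_likelihood(a, b, c, d):
    """Dunning's log-likelihood (G2) for a word occurring a times in a
    corpus of c tokens and b times in a corpus of d tokens."""
    e1 = c * (a + b) / (c + d)  # expected count in corpus 1
    e2 = d * (a + b) / (c + d)  # expected count in corpus 2
    g2 = 0.0
    if a > 0:
        g2 += a * math.log(a / e1)
    if b > 0:
        g2 += b * math.log(b / e2)
    return 2 * g2

def keywords(reflective, descriptive, top=5):
    """Rank words by how strongly they distinguish the two datasets."""
    ref = Counter(w for s in reflective for w in s.lower().split())
    desc = Counter(w for s in descriptive for w in s.lower().split())
    n_ref, n_desc = sum(ref.values()), sum(desc.values())
    scored = {w: log_likelihood(ref[w], desc[w], n_ref, n_desc)
              for w in set(ref) | set(desc)}
    return sorted(scored, key=scored.get, reverse=True)[:top]
```

A word occurring at the same relative frequency in both corpora scores zero, while a word concentrated in one corpus scores high, which is what makes the statistic useful for surfacing genre-specific keywords.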
Comparing automatically detected reflective texts with human judgements
This paper reports the descriptive results of an experiment comparing automatically detected reflective and non-reflective texts against human judgements. Based on the theory of reflective writing assessment and its operationalisation, five elements of reflection were defined. For each element of reflection, a set of indicators was developed that automatically annotates texts with regard to reflection, based on parameterisation with authoritative texts. From a large blog corpus, 149 texts were retrieved, each annotated as either reflective or non-reflective. An online survey was then used to gather human judgements for these texts. These two datasets were used to compare the quality of the reflection detection algorithm with the human judgements. The analysis indicates the expected difference between reflective and non-reflective texts.
Automated Analysis of Reflection in Writing: Validating Machine Learning Approaches
Reflective writing is an important educational practice to train reflective thinking. Currently, researchers must manually analyze these writings, limiting practice and research because the analysis is time- and resource-consuming. This study evaluates whether machine learning can be used to automate this manual analysis. The study investigates eight categories that are often used in models to assess reflective writing, and the evaluation is based on 76 student essays (5,080 sentences) that are largely from third- and second-year health, business, and engineering students. To test the automated analysis of reflection in writings, machine learning models were built based on a random sample of 80% of the sentences. These models were then tested on the remaining 20% of the sentences. Overall, the standardized evaluation shows that five out of eight categories can be detected automatically with substantial or almost perfect reliability, while the other three categories can be detected with moderate reliability (Cohen's κ ranges between .53 and .85). The accuracies of the automated analysis were on average 10% lower than the accuracies of the manual analysis. These findings enable reflection analytics that are immediate and scalable.
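The evaluation protocol the abstract describes, a random 80/20 split with Cohen's κ as the reliability measure, can be sketched with the standard library alone. This is a generic illustration of the protocol, not the study's actual pipeline; the function names are hypothetical.

```python
import random

def split_80_20(sentences, seed=0):
    """Randomly partition labelled sentences into 80% train / 20% test."""
    rng = random.Random(seed)
    data = sentences[:]
    rng.shuffle(data)
    cut = int(0.8 * len(data))
    return data[:cut], data[cut:]

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: chance-corrected agreement between two label lists."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n  # observed agreement
    labels = set(y_true) | set(y_pred)
    pe = sum((y_true.count(l) / n) * (y_pred.count(l) / n)
             for l in labels)                             # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa is preferred over raw accuracy in this kind of study because it corrects for the agreement two annotators (or a model and a human) would reach by chance, which matters when category frequencies are skewed.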
Reflective Writing Analytics - Empirically Determined Keywords of Written Reflection
Despite their importance for educational practice, reflective writings are still analysed and assessed manually, constraining the use of this educational technique. Recently, research has begun to investigate automated approaches to analysing reflective writing. Foundational to many automated approaches is knowledge of the words that are important for the genre. This research presents keywords that are specific to several categories of a reflective writing model. These keywords were derived from eight datasets, containing several thousand instances, using the log-likelihood method. Both performance measures, accuracy and Cohen's κ, were estimated for these keywords with ten-fold cross-validation. The results reached an average accuracy of 0.78 across all eight categories and fair to good inter-rater reliability for most categories, even though the approach did not use any sophisticated rule-based mechanisms or machine learning. This research contributes to the development of automated reflective writing analytics based on data-driven empirical foundations.
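The ten-fold cross-validation used to estimate accuracy and κ in this abstract partitions the data into ten folds, holding out each fold once as the test set. A minimal sketch of the fold generator, assuming the items are already labelled instances:

```python
def ten_fold(items, k=10):
    """Yield (train, test) partitions for k-fold cross-validation.

    Each of the k folds serves as the test set exactly once, so every
    item is tested once and trained on k-1 times.
    """
    folds = [items[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test
```

Per-fold accuracy and κ would then be computed on each held-out fold and averaged, which is how the 0.78 average accuracy reported above would typically be obtained.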
Automated detection of reflection in texts. A machine learning based approach
Promoting reflective thinking is an important educational goal. A common educational practice is to provide opportunities for learners to express their reflective thoughts in writing. The analysis of such text with regard to reflection is mainly a manual task that employs the principles of content analysis.
Considering the amount of text produced by online learning systems, tools that automatically analyse text with regard to reflection would greatly benefit research and practice.
Previous research has explored the potential of dictionary-based approaches that automatically map keywords to categories associated with reflection. Other automated methods use manually constructed rules to gauge insight from text. Machine learning has shown potential for classifying text with regard to reflection-related constructs. However, little is known about whether machine learning can be used to reliably analyse text with regard to the categories of reflective writing models.
This thesis investigates the reliability of machine learning algorithms to detect reflective thinking in text. In particular, it studies whether text segments from student writings can be analysed automatically to detect the presence (or absence) of reflective writing model categories.
A synthesis of the models of reflective writing is performed to determine the categories frequently used to analyse reflective writing. For each of these categories, several machine learning algorithms are evaluated with regard to their ability to reliably detect reflective writing categories.
The evaluation finds that many of the categories can be predicted reliably. The automated method, however, does not achieve the same level of reliability as human coders.
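The dictionary-based baseline mentioned in this abstract maps keywords to reflection categories. A minimal sketch of that idea, where the category names and keyword sets are purely illustrative and not the thesis's actual dictionaries:

```python
# Hypothetical category dictionary for illustration only.
CATEGORIES = {
    "feeling": {"felt", "feel", "worried", "anxious"},
    "learning": {"learned", "realised", "understand", "insight"},
}

def annotate(sentence):
    """Return the reflection categories whose keywords appear in the sentence."""
    words = set(sentence.lower().split())
    return {cat for cat, kws in CATEGORIES.items() if words & kws}
```

For example, `annotate("I felt worried but I learned a lot")` matches both categories, while a purely descriptive sentence matches none. Machine learning approaches, as investigated in the thesis, replace this fixed lookup with classifiers trained on annotated sentences.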
Understanding Accessibility as a Process through the Analysis of Feedback from Disabled Students
Accessibility cannot be fully achieved through adherence to technical guidelines alone; it must include processes that take account of the diverse contexts and needs of individuals. A complex yet important aspect of this is to understand and utilise feedback from disabled users of systems and services. Open comment feedback can complement other practices in providing rich data from user perspectives, but this presents challenges for analysis at scale. In this paper, we analyse a large dataset of open comment feedback from disabled students on their online and distance learning experience, and we explore opportunities and challenges in the analysis of this data. This includes the automated and manual analysis of content and themes, and the integration of information about the respondent alongside their feedback. Our analysis suggests that procedural themes, such as changes to the individual over time and their experiences of interpersonal interactions, provide key examples of areas where feedback can lead to insight for the improvement of accessibility. Reflecting on this analysis in the context of our institution, we provide recommendations on the analysis of feedback data, and on how feedback can be better embedded into organisational processes.