A Contextualised General Systems Theory
A system is something that can be separated from its surrounds, but this definition leaves much scope for refinement. Starting with the notion of measurement, we explore increasingly contextual system behaviour and identify three major forms of contextuality that might be exhibited by a system: (1) between components; (2) between system and experimental method; and (3) between a system and its environment. Quantum theory is shown to provide a highly useful formalism from which all three forms of contextuality can be analysed, offering numerous tests for contextual behaviour, as well as modelling possibilities for systems that do indeed display it. I conclude with the introduction of a contextualised general systems theory based on an extension of this formalism.
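As a minimal, illustrative sketch (not the paper's own formalism), one standard quantum test for contextuality between components is the CHSH inequality: any non-contextual account of two measured components bounds a particular combination of correlations by 2, while quantum correlations can reach 2√2. The correlation function and measurement angles below are the textbook singlet-state example, chosen only to show the test in action.

```python
# Sketch of a CHSH contextuality test, assuming singlet-state correlations.
import numpy as np

def singlet_correlation(theta_a: float, theta_b: float) -> float:
    """Quantum prediction for the correlation of spin measurements on a singlet pair."""
    return -np.cos(theta_a - theta_b)

def chsh(E, a, a_prime, b, b_prime) -> float:
    """CHSH combination: |S| <= 2 for any non-contextual model, up to 2*sqrt(2) quantumly."""
    return E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)

S = chsh(singlet_correlation, 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)
print(f"|S| = {abs(S):.3f}  (exceeds the non-contextual bound of 2)")
```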
Non-compositional concepts and quantum tests
Compositionality is a frequently made assumption in linguistics, and yet many human subjects reveal highly non-compositional word associations when confronted with novel concept combinations. This article shows how a non-compositional account of concept combinations can be supplied by modelling them as interacting quantum systems.
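A hedged toy illustration of the underlying idea (the two-sense encoding and the particular states below are invented, not the article's model): if each word's senses are represented as a qubit, a compositional combination is a product state whose joint sense probabilities factorise into the marginals, whereas an entangled joint state captures associations that cannot be recovered from the parts alone.

```python
# Toy contrast between a compositional (product) and non-compositional (entangled) combination.
import numpy as np

# Basis states |0>, |1> stand for two senses of each word (hypothetical labels).
word_a = np.array([np.sqrt(0.7), np.sqrt(0.3)])
word_b = np.array([np.sqrt(0.6), np.sqrt(0.4)])

compositional = np.kron(word_a, word_b)                    # product state
entangled = np.array([np.sqrt(0.5), 0, 0, np.sqrt(0.5)])   # Bell-like joint state

def joint_probs(state):
    return (np.abs(state) ** 2).reshape(2, 2)

for name, state in [("compositional", compositional), ("non-compositional", entangled)]:
    p = joint_probs(state)
    marg_a, marg_b = p.sum(axis=1), p.sum(axis=0)
    print(name, "factorises from marginals:", np.allclose(p, np.outer(marg_a, marg_b)))
```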
Embracing imperfection in learning analytics
Learning Analytics (LA) sits at the confluence of many contributing disciplines, which brings the risk of hidden assumptions inherited from those fields. Here, we consider a hidden assumption derived from computer science, namely, that improving computational accuracy in classification is always a worthy goal. We demonstrate that this assumption is unlikely to hold in some important educational contexts, and argue that embracing computational “imperfection” can improve outcomes for those scenarios. Specifically, we show that learner-facing approaches aimed at “learning how to learn” require more holistic validation strategies. We consider what information must be provided in order to reasonably evaluate algorithmic tools in LA, to facilitate transparency and realistic performance comparisons.
Towards the Discovery of Learner Metacognition from Reflective Writing
Modern society demands renewed attention to the competencies required to best equip students for a dynamic and uncertain future. We present exploratory work based on the premise that metacognitive and reflective competencies are essential for this task. We brought the concepts of metacognition and reflection together into a conceptual model, conceiving of them both as a set of similar features and as a spectrum ranging from the unconscious inner self through to the conscious, external, social self. This model was used to guide exploratory computational analysis of 6,090 instances of reflective writing authored by undergraduate students. We found the conceptual model useful in informing the computational analysis, which in turn showed potential for automating the discovery of metacognitive activity in reflective writing, an approach that holds promise for generating formative feedback for students as they work towards developing core 21st century competencies.
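As a rough, rule-based sketch of the kind of computational pass such work involves (the markers and sample text below are hypothetical and not the study's model), one can flag sentences of reflective writing that contain candidate metacognitive expressions:

```python
# Toy detector for candidate metacognitive sentences in reflective writing.
import re

METACOGNITIVE_MARKERS = re.compile(
    r"\b(i (realised|realized|noticed|assumed|questioned|struggled|learned)|"
    r"my (thinking|approach|understanding|strategy))\b",
    re.IGNORECASE,
)

def flag_metacognitive_sentences(text: str) -> list[str]:
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s for s in sentences if METACOGNITIVE_MARKERS.search(s)]

sample = ("The experiment failed twice. I realised my approach ignored the control group. "
          "Next time I will question my assumptions earlier.")
print(flag_metacognitive_sentences(sample))
```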
Beyond Average: Contemporary statistical techniques for analysing student evaluations of teaching
Student evaluations of teaching (SETs) have been used to evaluate higher education teaching performance for decades. Reporting SET results often involves the extraction of an average for some set of course metrics, which facilitates the comparison of teaching teams across different organisational units. Here, we draw attention to ongoing problems with the naive application of this approach. Firstly, a specific average value may arise from data that demonstrates very different patterns of student satisfaction. Furthermore, the use of distance measures (e.g. an average) for ordinal data can be contested, and finally, issues of multiplicity increasingly plague approaches using hypothesis testing. It is time to advance the methodology of the field. We demonstrate how multinomial distributions and hierarchical Bayesian methods can be used to contextualise the SET scores of a course to different organisational units and student cohorts, and then show how this approach can be used to extract sensible information about how a distribution is changing.
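A minimal sketch of the first point and the multinomial remedy (the counts and flat Dirichlet prior below are invented for illustration, not the paper's data or hierarchical model): two courses with identical mean SET scores can have very different response distributions, and a Dirichlet-multinomial posterior retains that distributional information rather than collapsing it to an average.

```python
# Same mean, very different distributions; conjugate Dirichlet-multinomial posterior keeps the shape.
import numpy as np

rng = np.random.default_rng(0)
scale = np.arange(1, 6)                       # 5-point Likert scale

polarised = np.array([45, 5, 0, 5, 45])       # response counts for scores 1..5
clustered = np.array([0, 10, 80, 10, 0])
for name, counts in [("polarised", polarised), ("clustered", clustered)]:
    print(name, "mean =", np.round(np.dot(scale, counts) / counts.sum(), 2))   # both 3.0

# Dirichlet(1,...,1) prior + multinomial counts -> Dirichlet posterior over the distribution.
prior = np.ones(5)
posterior_draws = rng.dirichlet(prior + polarised, size=2000)
print("posterior mean of P(score = 5):", posterior_draws[:, 4].mean().round(3))
```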
RiPPLE: A crowdsourced adaptive platform for recommendation of learning activities
This paper presents a platform called RiPPLE (Recommendation in Personalised Peer-Learning Environments) that recommends personalised learning activities to students, based on their knowledge state, from a pool of crowdsourced learning activities generated by educators and the students themselves. RiPPLE integrates insights from crowdsourcing, learning sciences, and adaptive learning, aiming to narrow the gap between these large bodies of research while providing a practical platform-based implementation that instructors can easily use in their courses. This paper provides a design overview of RiPPLE, which can be employed as a standalone tool or embedded into any learning management system (LMS) or online platform that supports the Learning Tools Interoperability (LTI) standard. The platform has been evaluated in a pilot in an introductory course with 453 students at The University of Queensland. Initial results suggest that the use of RiPPLE led to measurable learning gains and that students perceived the platform as beneficially supporting their learning.
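As a hedged sketch of the core idea only (this is not RiPPLE's actual recommendation algorithm; the scoring rule, topics, and difficulty scale are invented), activities from a crowdsourced pool can be ranked so that topics where a student's estimated knowledge is weakest, at a difficulty slightly above their current level, come first:

```python
# Toy knowledge-gap ranking of crowdsourced activities (illustrative only).
from dataclasses import dataclass

@dataclass
class Activity:
    title: str
    topic: str
    difficulty: float  # 0 = easy, 1 = hard

def recommend(knowledge: dict[str, float], pool: list[Activity], top_n: int = 3) -> list[Activity]:
    """Score activities by the student's knowledge gap on their topic, preferring a mild stretch."""
    def score(a: Activity) -> float:
        k = knowledge.get(a.topic, 0.0)
        gap = 1.0 - k
        stretch_penalty = abs(a.difficulty - min(k + 0.2, 1.0))
        return gap - stretch_penalty
    return sorted(pool, key=score, reverse=True)[:top_n]

pool = [Activity("Loop practice", "loops", 0.4),
        Activity("Recursion drill", "recursion", 0.7),
        Activity("Recursion basics", "recursion", 0.3)]
print(recommend({"loops": 0.8, "recursion": 0.2}, pool, top_n=2))
```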
Enhancing the ethical use of learning analytics in Australian higher education
Ensuring the ethical use of data about students is an important consideration in the use of learning analytics in Australian higher education. In early 2019, a discussion paper was published by a group of learning analytics specialists in the sector to help promote the conversation around the key ethical issues institutions need to address in order to ensure the ethical use of learning analytics. This panel session will explore these ethical issues in more detail and update the conversation with new perspectives and provocations. The panel will include authors of the discussion paper and will be structured so that the audience has an active role in considering the key issues and advancing the ongoing conversations about them.
Framing Professional Learning Analytics as Reframing Oneself
Central to imagining the future of technology-enhanced professional learning is the question of how data are gathered, analyzed, and fed back to stakeholders. The field of learning analytics (LA) has emerged over the last decade at the intersection of data science, learning sciences, human-centered and instructional design, and organizational change, and so could in principle inform how data can be gathered and analyzed in ways that support professional learning. However, in contrast to formal education where most research in LA has been conducted, much work-integrated learning is experiential, social, situated, and practice-bound. Supporting such learning exposes a significant weakness in LA research, and to make sense of this gap, this article proposes an adaptation of the Knowledge-Agency Window framework. It draws attention to how different forms of professional learning locate on the dimensions of learner agency and knowledge creation. Specifically, we argue that the concept of “reframing oneself” holds particular relevance for informal, work-integrated learning. To illustrate how this insight translates into LA design for professionals, three examples are provided: first, analyzing personal and team skills profiles (skills analytics); second, making sense of challenging workplace experiences (reflective writing analytics); and third, reflecting on orientation to learning (dispositional analytics). We foreground professional agency as a key requirement for such techniques to be used effectively and ethically.
Modelling attitudes to climate change — an order effect and a test between alternatives
Quantum-like models can be fruitfully used to model attitude change in a social context. The next steps require data and higher-dimensional models. Here, we discuss an exploratory study that demonstrates an order effect when three question sets about Climate Beliefs, Political Affiliation and Attitudes Towards Science are presented in different orders within a larger study of n = 533 subjects. A quantum-like model seems possible, and we propose a new experiment which could be used to test between three possible models for this scenario.
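For readers unfamiliar with how quantum-like models produce order effects, here is a minimal illustration (the dimension, projectors, and belief state are invented, not the study's model): answering question A then B is modelled by applying non-commuting projectors in sequence, so the two presentation orders yield different agreement probabilities.

```python
# Non-commuting projectors give order-dependent response probabilities (illustrative values).
import numpy as np

def projector(theta: float) -> np.ndarray:
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

P_A = projector(0.0)            # "agrees with statement A"
P_B = projector(np.pi / 5)      # "agrees with statement B" (does not commute with P_A)
psi = np.array([np.cos(1.0), np.sin(1.0)])   # initial belief state

p_ab = np.linalg.norm(P_B @ P_A @ psi) ** 2  # answer A first, then B
p_ba = np.linalg.norm(P_A @ P_B @ psi) ** 2  # answer B first, then A
print(f"P(A then B) = {p_ab:.3f}, P(B then A) = {p_ba:.3f}")
```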
Recommended from our members
Determination of a calculation bias in the MCNP model of the OSTR
Oregon State University is home to a TRIGA® Mark II reactor. In October of 2008, the reactor began operating on low-enriched uranium fuel. A model of the facility exists in MCNP, a Monte Carlo code that can be used for criticality calculations. Until now, a bias in the calculation of the neutron multiplication factor has been carried forward from outdated core models.
This work involves updating various aspects of the model, including the geometry of the facility as well as materials and their properties, in order to arrive at a more accurate representation of the facility as it is today. The individual effect that each change has on the results of MCNP calculations of the core is documented.
With these updates in place, the model can emulate the records that describe the startup of the reactor in October 2008. The results of these calculations can be compared to actual data in order to establish a foundation for benchmarking the model and characterizing the reactor core. The deviation between calculated and expected results can be used to determine a single reactivity bias in the model.
The bias determined in this work can be applied to future calculations that use the updated model.
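A short, illustrative calculation of the quantity described (the k-effective values and delayed-neutron fraction below are made up, not OSTR data or results): the reactivity bias is the difference between the reactivities implied by the calculated and measured multiplication factors, ρ = (k − 1)/k, often reported in dollars by dividing by β_eff.

```python
# Hypothetical reactivity-bias calculation between a calculated and a measured k_eff.
BETA_EFF = 0.007  # assumed effective delayed-neutron fraction (typical TRIGA-scale value)

def reactivity(k_eff: float) -> float:
    return (k_eff - 1.0) / k_eff

k_calculated = 1.00350   # hypothetical MCNP result
k_measured = 1.00000     # critical configuration from startup records

bias_dollars = (reactivity(k_calculated) - reactivity(k_measured)) / BETA_EFF
print(f"reactivity bias ≈ {bias_dollars:+.2f} $")
```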