Scoping a vision for formative e-assessment: a project report for JISC
Assessment is an integral part of teaching and learning. If the relationship between teaching and learning were causal, i.e. if students always mastered the intended learning outcomes of a particular sequence of instruction, assessment would be superfluous. Experience and research suggest this is not the case: what is learnt can often be quite different from what is taught. Formative assessment is motivated by a concern with the elicitation of relevant information about student understanding and/or achievement, its interpretation, and an exploration of how it can lead to actions that result in better learning. In the context of a policy drive towards technology-enhanced approaches to teaching and learning, the question of the role of digital technologies is key, and it is this question on which the project particularly focuses. The project and its deliverables have been informed by recent and relevant literature, in particular recent work by Black and Wiliam. In this work, they put forward a framework which suggests that assessment for learning (their term for formative assessment) can be conceptualised as consisting of a number of key aspects and five key strategies. The key aspects revolve around where the learner is going, where the learner is right now and how she can get there, and examine the roles played by the teacher, peers and the learner. Language: English Keywords: assessments, case studies, design patterns, e-assessment
Increasing Engagement with the Library via Gamification
One of the main challenges faced by providers of interactive information access systems is to engage users in the use of their systems. The library sector in particular can benefit significantly from increased user engagement. In this short paper, we present a preliminary analysis of a university library system that aims to trigger users' extrinsic motivation to increase their interaction with the system. Results suggest that different user groups react in different ways to such 'gamified' systems.
Framework to Enhance Teaching and Learning in System Analysis and Unified Modelling Language
Cowling, MA ORCiD: 0000-0003-1444-1563; Munoz Carpio, JC ORCiD: 0000-0003-0251-5510
Systems Analysis modelling is considered foundational for Information and Communication Technology (ICT) students, with introductory and advanced units included in nearly all ICT and computer science degrees. Yet despite this, novice systems analysts (learners) find modelling and systems thinking quite difficult to learn and master. This makes the process of teaching the fundamentals frustrating and time intensive. This paper will discuss the foundational problems that learners face when learning Systems Analysis modelling. Through a systematic literature review, a framework will be proposed based on the key problems that novice learners experience. In this proposed framework, a sequence of activities has been developed to facilitate understanding of the requirements, solutions and incremental modelling. An example is provided illustrating how the framework could be used to incorporate visualization and gaming elements into a Systems Analysis classroom, thereby improving motivation and learning. Through this work, a greater understanding of the approach to teaching modelling within the computer science classroom will be provided, as well as a framework to guide future teaching activities.
Digital communities: context for leading learning into the future?
In 2011, a robust, on-campus, three-element Community of Practice model consisting of growing community, sharing of practice and building domain knowledge was piloted in a digital learning environment. An interim evaluation of the pilot study revealed that the three-element framework, when used in a digital environment, required a fourth element. This element, which appears to happen incidentally in the face-to-face context, is that of reflecting, reporting and revising. This paper outlines the extension of the pilot study to the national tertiary education context in order to explore the implications for the design, leadership roles, and selection of appropriate technologies to support and sustain digital communities using the four-element model.
Data Showcases: the Data Journal in a Multimodal World
As an experiment, the Research Data Journal for the Humanities and Social Sciences (RDJ) has temporarily extended the usual format of the online journal with so-called 'showcases', separate web pages containing a quick introduction to a dataset, embedded multimedia, interactive components, and facilities to directly preview and explore the dataset described. The aim was to create a coherent hyper document with content communicated via different media (multimodality) and provide space for new forms of scientific publication such as executable papers (e.g. Jupyter notebooks). This paper discusses the objectives, technical implementations, and the need for innovation in data publishing considering the advanced possibilities of today's digital modes of communication. The data showcases experiment proved to be a useful starting point for an exploration of related developments within and outside the humanities and social sciences. It turns out that small-scale experiments are relatively easy to perform thanks to the easy availability of digital technology. However, real innovation in publishing affects organization and infrastructure and requires the joint effort of publishers, editors, data repositories, and authors. It implies a thorough update of the concept of publication and adaptation of the production process. This paper also pays attention to these obstacles to taking new paths.
Evaluation campaigns and TRECVid
The TREC Video Retrieval Evaluation (TRECVid) is an international benchmarking activity to encourage research in video information retrieval by providing a large test collection, uniform scoring procedures, and a forum for organizations interested in comparing their results. TRECVid completed its fifth annual cycle at the end of 2005, and in 2006 TRECVid will involve almost 70 research organizations, universities and other consortia. Throughout its existence, TRECVid has benchmarked both interactive and automatic/manual searching for shots from within a video corpus, automatic detection of a variety of semantic and low-level video features, shot boundary detection and the detection of story boundaries in broadcast TV news. This paper will give an introduction to information retrieval (IR) evaluation from both a user and a system perspective, highlighting that system evaluation is by far the most prevalent type of evaluation carried out. We also include a summary of TRECVid as an example of a system evaluation benchmarking campaign, and this allows us to discuss whether such campaigns are a good thing or a bad thing. There are arguments for and against these campaigns, and we present some of them in the paper, concluding that on balance they have had a very positive impact on research progress.
A taxonomy for deriving business insights from user-generated content
Deriving business insights from user-generated content (UGC) is a widely investigated phenomenon in information systems (IS) research. Due to its unstructured nature and technical constraints, UGC is still underutilized as a data source in research and practice. Using recent advancements in machine learning research, especially large language models (LLMs), IS researchers can possibly derive these insights more effectively. To guide and further understand the usage of these techniques, we develop a taxonomy that provides an overview of business insights derived from UGC. The taxonomy helps both practitioners and researchers identify, design, compare and evaluate the use of UGC in this IS context. Finally, we showcase an LLM-supported demo application that derives novel business insights and apply the taxonomy to it. In doing so, we show by example how LLMs can be used to develop new or extend existing NLP applications in the realm of IS.