
    Understanding feedback in online learning - A critical review and metaphor analysis

    Technologies associated with online learning have led to many new feedback practices and expanded the meaning of feedback beyond the traditional focus on instructor comments, but conceptual work on online feedback has not kept pace. This paper investigates how online learning researchers understand feedback's role in teaching and learning, and discusses how these understandings influence what research questions are asked and what online feedback practices are recommended. Through a qualitative analysis of the language used about feedback in leading research journals, we identified six distinct understandings of feedback based on six dominant conceptual metaphors: feedback is a treatment; feedback is a costly commodity; feedback is coaching; feedback is a command; feedback is a dialogue; and feedback is a learner tool. Each of these metaphors offers a coherent frame of entailments related to the roles and responsibilities of online instructors and online learners, as well as broader assumptions about what role feedback should play in online teaching and learning. A comparison with current feedback research revealed that just two of the six metaphors align with the learner-centric feedback practices that are increasingly considered appropriate among feedback researchers. The paper discusses how the conceptualizations might reflect different challenges facing online education. It proposes that researchers interrogate their own conceptualizations to ensure that they align with their beliefs about feedback and its role in the learning process, and suggests that a more deliberate use of metaphors when conceptualizing feedback and online feedback practices is necessary for clarity of communication and helpful for driving the work on feedback in online learning forward.

    Feedback encounters: towards a framework for analysing and understanding feedback processes

    There is growing agreement that feedback should be understood as a contextual and social process, rather than as the receipt of teacher comments on students’ work. This reframing brings with it new complexities, and it can be challenging for researchers and practitioners to adopt a process perspective when making sense of feedback practices in naturalistic settings. This paper takes the nascent notion of the feedback encounter and proposes it as an analytical lens for understanding and analysing feedback processes. Based on a rich dataset from a cross-national digital ethnographic study of student feedback experiences, the paper identifies three categories of feedback encounters - elicited, formal and incidental - and explores how they are experienced by students in relation to perceived usefulness, control and self-exposure. Furthermore, the paper investigates how individual feedback encounters may interconnect to form simple and complex sequences, revolving around distinct uncertainties or dilemmas. This operationalization of feedback encounters builds the foundations of a framework that can help researchers and practitioners make sense of authentic feedback processes in naturalistic settings. Such a framework is useful because it offers a structured way of analysing processes that are inherently complex and unfolding.

    Characteristics of productive feedback encounters in online learning

    Understanding how students engage with feedback is often reduced to a study of feedback messages that sheds little light on effects. Using the emerging notion of feedback encounters as an analytical lens, this study examines what characterizes productive feedback encounters when learning online. Drawing on a cross-national digital ethnographic dataset, a qualitative analysis categorized the feedback encounters within this dataset. While most encounters led to instrumental impacts without any significant reflection, students also engaged in encounters with a more substantive impact on learning. The latter took place under two conditions. First, the encounter must challenge the student’s assumptions about their work, and the student must be able and willing to accept this challenge. Second, the encounter must take place at a time that is appropriate in relation to the task the student is currently working on. This highlights design considerations, such as the importance of social interactions, and the instrumental enactments of self-generated feedback.

    How does assessment drive learning? A focus on students' development of evaluative judgement

    Summative assessment is often considered a motivator that drives students’ learning. Higher education has a responsibility to promote lifelong learning, and assessment plays an important role in supporting students’ capability to make evaluative judgements about their work and that of others. However, as research often focuses on formal pedagogical design, it is unclear what behaviours summative assessment prompts; thus, the relationship between summative assessment, learning and evaluative judgement requires further investigation. Drawing on a small-scale ethnography-informed study, this paper adopts a practice theory approach to explore how undergraduate physics students from three year levels make evaluative judgements in the context of summative assessment tasks. The contexts explored through observations and interviews include a graded in-class tutorial, an out-of-class study group for an in-semester assignment, and individual preparation for examinations. The findings suggest that while summative assessment is a crucial aspect of students’ learning context, it does not fully shape students’ practices. Instead, students engage in incidental conversations about the quality of their work and how to do things in their studies. By focusing on what students actually do, this study integrates formal and informal aspects of students’ learning, highlighting the tensions between undergraduate practices and intended learning outcomes.

    Digital ethnography in higher education teaching and learning—a methodological review

    To understand how the digitalization of higher education influences the inter-relationship between students, teachers, and their broader contexts, research must account for the social, cultural, political, and embodied aspects of teaching and learning in digital environments. Digital ethnography is a research method that can generate rich contextual knowledge of online experiences, but how this methodology translates to higher education is less clear. To explore the opportunities that digital ethnography can provide in higher education research, this paper presents a methodological review of previous research and discusses the implications for future practice. Through a systematic search of five research databases, we found 20 papers that report using digital ethnographies to explore teaching and learning in higher education. The review synthesizes and discusses how data collection, rigour, and ethics are handled in this body of research, with a focus on the specific methodological challenges that emerge when doing digital ethnographic research in a higher education setting. The review also identifies opportunities for improvement, especially related to participant observation from the student perspective, researcher reflexivity in relation to the dual teacher-researcher role, and increased diversity of data types. This leads us to conclude that higher education research, tasked with understanding an explosion of new digital practices, could benefit from a more rigorous and expanded use of digital ethnography.

    How technology shapes assessment design: findings from a study of university teachers

    A wide range of technologies has been developed to enhance assessment, but adoption has been inconsistent, despite assessment being critical to student learning and certification. To understand why this is the case and how it can be addressed, we need to explore the perspectives of academics responsible for designing and implementing technology-supported assessment strategies. This paper reports on the experience of designing technology-supported assessment, based on interviews with 33 Australian university teachers. The findings reveal the desire to achieve greater efficiencies and to be contemporary and innovative as key drivers of technology adoption for assessment. Participants sought to shape student behaviors through their designs and made adaptations in response to positive feedback and undesirable outcomes. Many designs required modification because of a lack of appropriate support, leading to compromise and, in some cases, abandonment. These findings highlight the challenges to effective technology-supported assessment design and demonstrate the difficulties university teachers face when attempting to negotiate mixed messages within institutions and the demands of design work. We use these findings to suggest opportunities to improve support by offering pedagogical guidance and technical help at critical stages of the design process and encouraging an iterative approach to design.

    Pilot/Controller Coordinated Decision Making in the Next Generation Air Transportation System

    Introduction: NextGen technologies promise to provide considerable benefits in terms of enhancing operations and improving safety. However, there needs to be a thorough human factors evaluation of how these systems will change the way in which pilots and controllers share information. The likely impact of these new technologies on pilot/controller coordinated decision making is considered in this paper using the "operational, informational and evaluative disconnect" framework. Method: Five participant focus groups were held. Participants were four experts in human factors, between x and x research students and a technical expert. The focus groups evaluated five key NextGen technologies to identify issues that made different disconnects more or less likely. Results: The issues identified were: decision making will not necessarily improve simply because pilots and controllers possess the same information; having a common information source does not mean pilots and controllers are looking at the same information; high levels of automation may lead to disconnects between the technology and pilots/controllers; common information sources may become the definitive source for information; and overconfidence in the automation may lead to situations where appropriate breakdowns are not initiated. Discussion: The issues identified lead to recommendations that need to be considered in the development of NextGen technologies. The current state of development of these technologies provides a good opportunity to apply these recommendations at an early stage, so that NextGen technologies do not lead to difficulties in resolving breakdowns in coordinated decision making.

    A comparative analysis of the skilled use of automated feedback tools through the lens of teacher feedback literacy

    Effective learning depends on effective feedback, which in turn requires a set of skills, dispositions and practices on the part of both students and teachers, which have been termed feedback literacy. A previously published teacher feedback literacy competency framework has identified what teachers need in order to implement feedback well. While this framework refers in broad terms to the potential uses of educational technologies, it does not examine in detail the new possibilities of automated feedback (AF) tools, especially those that are open in the sense that they offer varying degrees of transparency and control to teachers. Using analytics and artificial intelligence, open AF tools permit automated processing and feedback with a speed, precision and scale that exceeds that of humans. This raises important questions about how human and machine feedback can be combined optimally and what is now required of teachers to use such tools skillfully. The paper addresses two research questions: Which teacher feedback competencies are necessary for the skilled use of open AF tools? and What does the skilled use of open AF tools add to our conceptions of teacher feedback competencies? We analyse published evidence concerning teachers’ use of open AF tools through the lens of teacher feedback literacy, producing summary matrices that reveal relative strengths and weaknesses in the literature and the relevance of the feedback literacy framework. We conclude, firstly, that when used effectively, open AF tools exercise a range of teacher feedback competencies; the paper thus offers a detailed account of the nature of teachers’ feedback literacy practices in this context. Secondly, the analysis reveals gaps in the literature, signalling opportunities for future work. Thirdly, we propose several examples of automated feedback literacy, that is, distinctive teacher competencies linked to the skilled use of open AF tools.

    Reframing assessment research: through a practice perspective

    Assessment as a field of investigation has been influenced by a limited number of perspectives. These have focused assessment research in particular ways that have emphasised measurement, student learning or institutional policies. The aim of this paper is to view the phenomenon of assessment from a practice perspective, drawing upon ideas from practice theory. Such a view places assessment practices as central. This perspective is illustrated using data from an empirical study of assessment decision-making and uses as an exemplar the identified practice of ‘bringing a new assessment task into being’. It is suggested that a practice perspective can position assessment as integral to curriculum practices and end the separation of assessment from teaching and learning. It enables research on assessment to de-centre measurement and take account of the wider range of people, phenomena and things that constitute it.

    How university teachers design assessments: a cross-disciplinary study

    There are dissonances between educators’ aspirations for assessment design and actual assessment implementation in higher education. Understanding how assessment is designed ‘on the ground’ can assist in resolving this tension. Thirty-three Australian university educators from a mix of disciplines and institutions were interviewed. A thematic analysis of the transcripts indicated that assessment design begins as a response to an impetus for change. The design process itself was shaped by environmental influences, which are the circumstances surrounding the assessment design, and professional influences, which are those factors that the educators themselves bring to the process. A range of activities or tasks were undertaken, including those essential to all assessment design, more selective activities which educators chose in order to optimise the assessment process in particular ways, and meta-design processes which educators used to respond dynamically to environmental influences. The qualitative description indicates the complex social nature of the interwoven personal and environmental influences on assessment design, and the value of explicit and strategic ways of thinking within the constraints and affordances of a local environment. This suggests that focussing on relational forms of professional development that develop strategic approaches to assessment may be beneficial. The role of disciplinary approaches may be significant and remains an area for future research.