1,620 research outputs found

    Learning and Exposure Affect Environmental Perception Less than Evolutionary Navigation Costs

    Russell E. Jackson is with the University of Idaho, Chéla R. Willey is with the University of California, Los Angeles, and Lawrence K. Cormack is with UT Austin. Most behaviors are conditional upon successful navigation of the environment, which depends upon distance perception learned over repeated trials. Unfortunately, we understand little about how learning affects distance perception, especially in the most common human navigational scenario: adult navigation in familiar environments. Further, dominant theories predict mutually exclusive effects of learning on distance perception, especially when the risks or costs of navigation differ. We tested these competing predictions in four experiments in which we also presented evolutionarily relevant navigation costs. Methods included within- and between-subjects comparisons and longitudinal designs in laboratory and real-world settings. Data suggested that adult distance estimation rapidly reflects evolutionarily relevant navigation costs and that repeated exposure does little to change this. Human distance perception may have evolved to reflect navigation costs quickly and reliably in order to provide a stable signal to other behaviors, with little regard for objective accuracy.

    RSA in Young Adults: Identifying Naturally-Occurring Response Patterns and Correlates

    Few studies have focused on the joint contributions of baseline and stress-responsive respiratory sinus arrhythmia (RSA) to mental health outcomes, and no research to date has examined naturally-occurring profiles of RSA, which may be more predictive of emotion regulation ability and mental health outcomes than either component of RSA alone. Participants were 235 undergraduates (87.1% female, 73.6% Caucasian) aged 18-39 (M = 19.62, SD = 2.12). In Part 1, latent growth mixture modeling (LGMM) was used to identify naturally-occurring physiological profiles accounting for both resting and stress-reactive RSA among young adults. In Part 2, multivariate ANCOVAs were used to predict 18 outcome variables, specifically state and trait negative affect (NA), depressive symptoms, and multiple emotion regulation techniques. Part 1 analyses supported the identification of four RSA response profiles described by baseline/slope characteristics: moderate/moderate (N = 183; M[intercept] = 6.72; M[slope] = -1.09), moderate/high (N = 10; M[intercept] = 7.31; M[slope] = -1.71), moderate/augmenting (N = 17; M[intercept] = 6.09; M[slope] = 0.77), and high/moderate (N = 25; M[intercept] = 8.10; M[slope] = -0.99). Part 2 analyses yielded significant results, so effect sizes were used to identify trends on outcome variables. The moderate/moderate group appeared to be normative, with both capacity and sufficient response to environmental demands. The moderate/high and moderate/augmenting profiles differed most consistently from all other groups. The moderate/high profile demonstrated generally adaptive outcomes, with lower depression and NA, and higher brooding, social support, and thought suppression. In contrast, the moderate/augmenting profile demonstrated less adaptive emotion regulation overall, showing higher avoidance, acceptance, and thought suppression, and lower problem solving, social support, and expressive suppression.
Because the most variable component of the groups was the responsive RSA (e.g., moderate, high, or augmenting), it may be that this is an important defining factor in a profile when considering psychological outcomes. Results support clinicians considering biological strengths and vulnerabilities in case conceptualization, as well as coaching in effective engagement and appropriately modulated responses to life stressors.
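The profile-identification step can be illustrated with a simplified two-step analogue of LGMM: estimate each participant's growth parameters (baseline intercept and stress-reactive change), then cluster participants in that parameter space. The sketch below uses scikit-learn's GaussianMixture on simulated data; the group means and sizes are loosely modeled on three of the abstract's profiles but are otherwise invented, and real LGMM estimates trajectories and class membership jointly rather than in two separate steps.

```python
# Simplified analogue of latent growth mixture modeling (LGMM).
# Step 1: estimate each participant's growth parameters (baseline
# intercept and total RSA change under stress) by least squares.
# Step 2: cluster participants in that parameter space with a
# Gaussian mixture. All data are simulated for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
times = np.arange(5)  # baseline epoch + 4 stressor epochs

# (baseline intercept, total change across the task, group size),
# loosely modeled on three of the abstract's four profiles
profiles = [(6.7, -1.1, 183), (6.1, 0.8, 17), (8.1, -1.0, 25)]

rows = []
for intercept, change, n in profiles:
    for _ in range(n):
        i = intercept + rng.normal(0, 0.3)
        s = change / times[-1] + rng.normal(0, 0.05)
        rsa = i + s * times + rng.normal(0, 0.2, times.size)
        # Per-participant growth parameters via least squares
        slope, est_intercept = np.polyfit(times, rsa, 1)
        rows.append((est_intercept, slope * times[-1]))

X = np.array(rows)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
labels = gmm.predict(X)
print(np.bincount(labels))  # participants per recovered profile
```

In a genuine LGMM analysis the number of classes would also be selected by comparing fit indices (e.g., BIC) across candidate models rather than fixed in advance.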

    Building a community of practice to improve inter-marker standardisation and consistency

    Copyright © 2015 SEFI. Over several years the authors have coordinated engineering subjects with large cohorts of 300+ students. In each case, lectures were supported by tutorials. In the larger subjects it was not uncommon to have in excess of 10 tutors, each of whom was responsible for grading the assessment tasks for the students in their tutorial. A common issue faced by lecturers of large multiple-tutor subjects is how to achieve a consistent standard of marking between different tutors. To address this issue the authors initially used a number of methods, including double-blind marking and remarking. This process was improved by using the benchmarking tool in SPARKPLUS [1] to compare both the grading and the feedback provided by different tutors for a number of randomly selected project tasks. In these studies we found that while students' perception of differences in grading was not unfounded, the problem was exacerbated by inconsistencies in the language tutors use when providing feedback. In this paper, we report using new SPARKPLUS features, developed as a result of this previous research, to quickly establish and build a community of practice amongst subject tutors. We found that in just one session these processes assisted tutors to reach a higher level of shared understanding of the concepts and practices pertinent to the subject assessment activities. In addition, it enabled tutors to gain an appreciation of the grading issues frequently reported by students. This not only improved the understanding and skills of tutors but also changed the way they marked and provided feedback.

    Improving self- and peer assessment processes with technology

    Purpose - As a way of focusing curriculum development and learning outcomes, universities have introduced graduate attributes which their students should develop during their degree course. Some of these attributes are discipline-specific; others are generic to all professions. The development of these attributes can be promoted by the careful use of self- and peer assessment. The authors have previously reported using the self- and peer assessment software tool SPARK in various contexts to facilitate opportunities to practise, develop, assess and provide feedback on these attributes. This research, and that of the other developers, identified the need to extend the features of SPARK to increase its flexibility and capacity to provide feedback. This paper reports the results of initial trials investigating the potential of these new features to improve learning outcomes. Design/methodology/approach - The paper reviews some of the key literature on self- and peer assessment, discusses the main aspects of the original online self- and peer assessment tool SPARK and the new version SPARKPLUS, and reports and analyses the results of a series of student surveys investigating whether the new features and applications of the tool have improved learning outcomes in a large multi-disciplinary Engineering Design subject. Findings - It was found that using self- and peer assessment in conjunction with collaborative peer learning activities increased the benefits to students and improved engagement. Furthermore, it was found that the new features available in SPARKPLUS facilitated efficient implementation of additional self- and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes.
The trials demonstrated that the tool assisted in improving students' engagement with and learning from peer learning exercises, in collecting and distributing feedback, and in helping students identify their individual strengths and weaknesses. Practical implications - SPARKPLUS facilitates the efficient management of self- and peer assessment processes even in large classes, allowing assessments to be run multiple times a semester without an excessive burden on the coordinating academic. While SPARKPLUS has enormous potential to provide significant benefits to both students and academics, its successful use requires thoughtful and reflective application combined with good assessment design. Originality/value - It was found that the new features available in SPARKPLUS efficiently facilitated the development of new self- and peer assessment processes (assessment of individual work and benchmarking exercises) and improved learning outcomes. © Emerald Group Publishing Limited

    Authors' perceptions of peer review of conference papers and how they characterise a 'good' one

    This paper examines the individual's experience of the peer review process to explore implications for the wider engineering education research community. A thematic analysis of interview transcripts showed that providing feedback to authors in reviews was mentioned as frequently as the role of quality assurance of the conference papers. We used responses from participants from various levels of expertise and types of universities to identify what they considered the elements of a quality conference paper and a quality review. For a conference paper, these included that it should be relevant, situate itself relative to existing literature, state the purpose of the research, describe sound methodology used with a logically developed argument, have conclusions supported by evidence, and use language of a professional standard. A quality review should start on a positive note, suggest additional literature, critique the methodology and written expression, and unambiguously explain what the reviewer means. The lists of characteristics of a good paper and a good review share elements such as attention to relevant literature and methodology. There is also substantial overlap between how our participants characterise quality papers and reviews and the review criteria used for the AAEE conference, and for such publication outlets as the European Journal of Engineering Education (EJEE) and the Journal of Engineering Education (JEE). This suggests some level of agreement in the community about the elements that indicate quality. However, we need to continue discussions about what we mean by 'sound' methodology and 'good' evidence, as well as establishing a shared language and understanding of the standards required by the review criteria.
The results of this study represent the first steps in improving our shared understanding of what constitutes quality research in engineering education for our community, and how we might better convey that when offering constructive advice to authors in a review of a conference paper. Since the peer review process has implications for the development of individual researchers in the field, and hence for the field overall, it seems reasonable to ask reviewers to pay attention to how they write reviews so that they create the potential for engineering academics to successfully transition into this different research paradigm.

    Does pre-feedback self-reflection improve student engagement, learning outcomes and tutor facilitation of group feedback sessions?

    The authors have previously reported the effectiveness of using self and peer assessment to improve learning outcomes by providing opportunities to practise, assess and provide feedback on students' learning and development. Despite this work and the research of others, we observed that some students felt they had nothing to learn from feedback sessions. Hence they missed the opportunity for reflection and to receive feedback to complete the learning cycle. This behaviour suggested that students needed more guidance to facilitate deeper engagement. We hypothesised that student engagement would increase if students were provided with guiding 'feedback catalyst questions' to initiate reflection and facilitate effective feedback on learning outcomes. In this paper we report testing whether this approach assisted students to gain more benefit from the self and peer assessment feedback sessions. In our investigation, both students and tutors were asked to evaluate the effectiveness of the feedback catalyst questions in improving student engagement and learning. We found that the pre-feedback self-reflection exercise improved learning outcomes and student engagement, with more than 80% of students reporting multiple benefits. Furthermore, tutors reported that the exercise assisted them to facilitate their sessions. However, not surprisingly, the degree of success was related in part to the attitude of the tutor to the exercise. This suggests that while the feedback catalyst questions were extremely effective, there is no substitute for enthusiastic and engaging tutorial staff. © 2010 Gardner & Willey

    Threshold exams to promote learning and assurance of learning

    BACKGROUND Formal examinations are often used in engineering classes as the tool to evaluate student learning. These exams are often high-stakes assessment tasks and provide no opportunity for feed-forward. Despite academic claims that all topics in their subject are requisite material, students are regularly able to pass these assessment tasks with an unsatisfactory, and perhaps even no, capacity to demonstrate learning in some topics. Furthermore, while undertaking the exam often highlights to students their learning deficiencies, it typically has no impact on their learning, as they rarely receive feedback other than a mark or grade and there is no further opportunity to address these learning gaps. This paper reports on the impact of a two-stage examination process on both student learning and assurance of that learning. PURPOSE The aim of the staged examination process was to improve confidence that students had satisfactory knowledge of all requisite subject topics, and to test its capacity to be learning-oriented in that it provides improved opportunities for students to learn while simultaneously increasing the level of learning assurance. DESIGN/METHOD The first stage of the process was an exam that covered all requisite subject topics. This exam consisted of multiple-choice questions set at or just above the level of threshold learning outcomes. Students were required to score 80% on this exam to qualify to undertake the second part of the assessment process at a later date. Students used IFAT (Immediate Feedback Assessment Technique) cards for this stage to facilitate immediate feedback on their strengths and weaknesses. The time between exams allowed students to review identified areas of weakness before attempting the second stage of the exam.
Note: while not contributing to their final grade, students who failed the first exam were also permitted to undertake the second exam, both as an opportunity to learn and as a means of evaluating the impact of the process. The second exam consisted of open-ended questions requiring students to explain the critical thinking and judgement used to arrive at their answers. Evaluation of the effectiveness of this process was based on a student survey, focus group discussions and an analysis of student examination scripts. RESULTS The threshold learning outcome exam was effective in improving assurance of learning, in that students had to demonstrate satisfactory learning across topics to achieve the 80% required to 'pass' the exam. Furthermore, students reported that they used the opportunity between exams to address identified learning gaps, demonstrating the learning orientation and feed-forward capacity of the two-stage process. However, the fact that two students who did not achieve the threshold level of 80% in the first exam were able to address their learning gaps and pass the second and harder exam suggests that an alternative to the 80% exclusion criterion should be considered. CONCLUSIONS The study demonstrated that a two-stage examination process improved confidence in assurance of learning while providing students with an opportunity to first identify and subsequently reduce learning gaps. However, the fact that some students who failed the threshold exam demonstrated significant improvement in their understanding in the second exam suggests that more research is needed to both understand the impact of and improve the benefits from this activity.

    Improving the standard and consistency of multi-tutor grading in large classes

    For several years the authors have coordinated a large engineering design subject, with a typical cohort of more than 300 students per semester. Lectures are supported by tutorials of approximately 32 students that incorporate a combination of collaborative team and project-based learning activities. Each tutor is responsible for grading the assessment tasks for the students in their tutorial. A common issue is how to achieve a consistent standard of marking and student feedback between different tutors. To address this issue the authors have used a number of methods, including double-blind marking and/or random re-marking, to support consistent grading. However, even when only small variations between the overall grading of different tutors were found, students still complained about a perceived lack of consistency. In this paper we report on an investigation into the use of a collaborative peer learning process among tutors to improve mark standardisation and marker consistency, and to build tutors' expertise and capacity in the provision of quality feedback. We found that students' perceptions of differences in grading were exacerbated by inconsistencies in the language tutors use when providing feedback, and by differences in tutors' perceptions of how well individual criteria were met.
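The benchmarking idea described above (every tutor marks the same selected submissions, and the results are compared) can be sketched in a few lines. This is an illustration of the general approach, not the SPARKPLUS benchmarking algorithm, which is not specified here; the tutor names and marks are invented.

```python
# Illustrative inter-marker consistency check: every tutor grades the
# same benchmark submissions, and each tutor's mean offset from the
# panel average is computed. A large positive offset flags a
# consistently generous marker, a large negative one a harsh marker.
from statistics import mean

def marker_offsets(marks):
    """marks[tutor] = that tutor's marks for the shared benchmark
    submissions, listed in the same order for every tutor."""
    tutors = list(marks)
    n = len(next(iter(marks.values())))
    # Panel average for each benchmark submission
    panel_avg = [mean(marks[t][i] for t in tutors) for i in range(n)]
    return {t: round(mean(m - a for m, a in zip(marks[t], panel_avg)), 2)
            for t in tutors}

marks = {
    "tutor_a": [72, 65, 80],
    "tutor_b": [78, 70, 88],  # consistently generous
    "tutor_c": [69, 63, 78],  # consistently harsh
}
print(marker_offsets(marks))
```

A numeric offset only captures grading level, not the feedback-language inconsistencies the paper found equally important; those still need the discussion-based community-of-practice process.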

    Changing students' perceptions of self and peer assessment

    The authors have previously reported the effectiveness of using self and peer assessment to improve learning outcomes by providing opportunities to practise, assess and provide feedback on students' learning and development. Despite this work and the research of others, we found that a significant number of students perceive self and peer assessment to be an instrument to facilitate fairness, focusing on its free-rider deterrent capacity, rather than an opportunity for reflection and feedback to complete the learning cycle. We assumed that these perceptions were reinforced by the fact that the main use of self and peer assessment was to moderate marks and provide feedback to individuals on their contribution to team tasks. We hypothesised that these perceptions would change if students were provided with opportunities to use self and peer assessment for different purposes. In this paper we report testing this hypothesis by using self and peer assessment multiple times a semester, not only to assess team contributions but also to assess individual student assignments and in benchmarking exercises. Our aim was to test whether this approach would assist students to gain more benefit from self and peer assessment processes while simultaneously breaking down their narrow focus on fairness. © 2009 Keith Willey & Anne Gardner
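The mark-moderation use of self and peer assessment mentioned above can be made concrete with a minimal sketch. It assumes the square-root weighting reported in SPARK-related publications, where each member's team mark is scaled by an SPA (Self and Peer Assessment) factor; the function names, ratings and team mark below are invented for illustration, and the actual tool's computation may differ in detail.

```python
# Hedged sketch of peer-assessment mark moderation: each member's
# team mark is multiplied by an SPA factor, taken here as the square
# root of that member's total ratings relative to the team average
# (the weighting reported for SPARK-family tools). All numbers are
# illustrative.
import math

def spa_factors(ratings):
    """ratings[i][j] = member j's rating of member i's contribution
    (self-ratings included on the diagonal)."""
    totals = [sum(row) for row in ratings]
    avg = sum(totals) / len(totals)
    return [math.sqrt(t / avg) for t in totals]

def moderated_marks(team_mark, ratings):
    return [round(team_mark * f, 1) for f in spa_factors(ratings)]

# A 3-person team: member 0 contributed most, member 2 least.
ratings = [
    [5, 5, 4],  # ratings received by member 0
    [4, 4, 4],  # ratings received by member 1
    [3, 2, 3],  # ratings received by member 2
]
print(moderated_marks(75, ratings))  # [83.4, 77.2, 63.0]
```

The square root deliberately dampens the moderation, so a low-rated member loses marks without the adjustment dominating the team mark; this is the free-rider deterrent aspect that, the paper argues, students tend to over-focus on.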

    Developing team skills with self- and peer assessment: Are benefits inversely related to team function?

    Purpose - Self- and peer assessment has proved effective in promoting the development of teamwork and other professional skills in undergraduate students. However, in previous research approximately 30 percent of students reported that its use produced no perceived improvement in their teamwork experience. It was hypothesised that a significant number of these students were probably members of a team that would have functioned well without self- and peer assessment, and hence the process did not improve their teamwork experience. This paper aims to report the testing of this hypothesis. Design/methodology/approach - The paper reviews some of the literature on self- and peer assessment, outlines the online self- and peer assessment tool SPARKPLUS, and analyses the results of a post-subject survey of students in a large multi-disciplinary engineering design subject. Findings - It was found that students who were neutral as to whether self- and peer assessment improved their teamwork experience cannot be assumed to be members of well-functioning teams. Originality/value - To increase the benefits for all students, it is recommended that self- and peer assessment focus on collaborative peer learning, not just assessment of team contributions. Furthermore, it is recommended that feedback sessions focus on learning, not just assessment outcomes, and that graduate attribute development be recorded and tracked by linking it to the categories required for professional accreditation. © Emerald Group Publishing Limited