
    Students’ Perceptions of and Responses to Teaching Assistant and Peer Feedback

    Authentic open-ended problems are increasingly appearing in university classrooms at all levels. Providing formative feedback that leads to learning and improved student work products is a challenge, particularly in large-enrollment courses. This is a case study of one first-year engineering student team’s experience with teaching assistant and peer feedback during a series of open-ended mathematical modeling problems called Model-Eliciting Activities. The goal of this study was to gain deep insight into the interactions between students, feedback providers, and written feedback by examining one team’s perceptions of the feedback they received and the changes they made to their solutions based on that feedback. The practical purpose of this work is to begin to make recommendations to improve students’ interactions with written feedback. The data sources consisted of individual student interviews, videos of the team’s meetings to revise their solutions, the team’s iteratively developed solutions, the team’s documented changes to their solutions, and the written feedback they received from their teaching assistant and peers. The students explained that helpful peer feedback requires a time commitment, focuses on the mathematical model, and goes beyond praise to prompt change. The students also stated that generic TA feedback was not helpful. The greatest difference between the students’ perceptions of TA and peer feedback was that the TA had influence over the team’s grade, and therefore the TA feedback was deemed more important. Feedback strategies to increase peer participation and improve teaching assistant training are described. Suggestions for continued research on feedback are provided.

    Characteristics of Feedback that Influence Student Confidence and Performance during Mathematical Modeling

    This study focuses on characteristics of written feedback that influence students’ performance and confidence in addressing the mathematical complexity embedded in a Model-Eliciting Activity (MEA). MEAs are authentic mathematical modeling problems that facilitate students’ iterative development of solutions in a realistic context. We analyzed 132 first-year engineering students’ confidence levels and mathematical model scores on an MEA (pre- and post-feedback), along with the teaching assistant feedback given to the students. The findings show several examples of affective and cognitive feedback that students reported using to revise their models. Students’ performance and confidence in developing mathematical models can be increased when they are in an environment where they iteratively develop models based on effective feedback.
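
    The abstract does not name the statistical procedure behind the pre/post comparison. Purely as a hedged illustration, the sketch below shows one conventional way paired pre- and post-feedback scores for the same 132 students might be compared; all numbers are simulated placeholders, not the study’s data.

```python
# Hypothetical sketch: comparing model scores before and after feedback.
# The paper does not specify its analysis; a paired t-test is one plausible
# way to test whether post-feedback scores improved for the same students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students = 132  # sample size reported in the abstract

# Simulated placeholder data -- not the study's actual scores.
pre_scores = rng.normal(loc=60, scale=10, size=n_students)
post_scores = pre_scores + rng.normal(loc=5, scale=8, size=n_students)

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```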

    Selecting Effective Examples to Train Students for Peer Review of Open‐Ended Problem Solutions

    Background: Students conducting peer review on authentic artifacts require training. In the training studied here, individual students reviewed (scored and provided feedback on) a randomly selected prototypical solution to a problem. Afterwards, they were shown a side-by-side comparison of their review and an expert’s review, along with prompts to reflect on the differences and similarities. Individuals were then assigned a peer team’s solution to review.
    Purpose: This paper explores how the characteristics of five different prototypical solutions used in training (and their associated expert evaluations) impacted students’ abilities to score peer teams’ solutions.
    Design/Method: An expert rater used an eight-item rubric to score the prototypical solutions and the 147 student teams’ solutions that were peer reviewed. Differences between the scores assigned by the expert and a student to a prototypical solution and to an actual team solution were used to compute a measure of the student’s improvement as a peer reviewer from training to actual peer review. ANOVA testing with Tukey’s post-hoc analysis was done to identify statistical differences in improvement based on the prototypical solutions students saw during the training phase.
    Results: Statistically significant differences were found in the amount of error a student made during peer review between high- and low-quality prototypical solutions seen by students during training. Specifically, a lower-quality training solution (and its associated expert evaluation) resulted in more accurate scoring during peer review.
    Conclusions: While students typically ask to see exemplars of “good solutions”, this research suggests that there is likely greater value, for the purpose of preparing students to score peers’ solutions, in students seeing a low-quality solution and its corresponding expert review.
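
    For readers unfamiliar with the procedure the abstract names, here is a minimal sketch of a one-way ANOVA followed by Tukey’s HSD post-hoc comparison in Python. The group labels, means, and sample sizes are invented placeholders, not the study’s measurements.

```python
# Hypothetical sketch of the analysis the abstract names: one-way ANOVA
# followed by Tukey's HSD post-hoc test on reviewer-error scores, grouped
# by which prototypical training solution each student saw.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
groups = ["proto_A", "proto_B", "proto_C", "proto_D", "proto_E"]
# Simulated scoring-error measure per training solution (placeholder data).
errors = {g: rng.normal(loc=mu, scale=1.0, size=30)
          for g, mu in zip(groups, [2.0, 1.8, 1.5, 1.0, 0.9])}

f_stat, p_value = stats.f_oneway(*errors.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD identifies which pairs of training solutions differ.
values = np.concatenate(list(errors.values()))
labels = np.repeat(groups, 30)
print(pairwise_tukeyhsd(values, labels))
```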

    Development of First-Year Engineering Teams’ Mathematical Models through Linked Modeling and Simulation Projects

    The development and use of mathematical models and simulations underlies much of the work of engineers. Mathematical models describe a situation or system through mathematics, quantification, and pattern identification. Simulations enable users to interact with models through manipulation of input variables and visualization of model outputs. Although modeling skills are fundamental, they are rarely explicitly taught in engineering. Model-eliciting activities (MEAs) represent a pedagogical approach used in engineering to teach students mathematical modeling skills through the development of a model to solve an authentic problem. This study is an investigation into the impact of linking an MEA and a simulation-building project on students’ model development. The purpose of this research is to further address the need for developing effective curricula to teach students mathematical modeling skills and to begin to address the need to teach students about simulations. The data for this study were 122 first-year engineering student teams’ solutions to both an MEA and a subsequent simulation-building project set in the context of a nanotechnology topic, specifically quantum dot solar cells. The teams’ mathematical models submitted at the end of the MEA and the simulation project were analyzed using two frameworks to assess the quality of the mathematical models and the level of simulation completeness. Three teams’ work, along with the feedback they received, was analyzed in a case study. The analysis of the 122 teams’ mathematical models showed that many teams selected particular aspects of their final MEA models for further development in their simulations. Based on the components of the models that were consistent across the MEA and project submissions, teams either improved, did not change, or weakened aspects of their models. Twenty-six teams improved the functionality of their models. Six teams increased the input variable handling of their models. Two teams improved the efficiency of their models; eight teams made their models less efficient through poor programming decisions. Based on an analysis of the 122 teams’ simulations, 62 percent were complete simulations (i.e., backed by a model and front-ended with user-input and output-visualization capabilities). The case study enabled a more detailed analysis of how select teams’ mathematical models changed across their submissions, and of the evidence of potentially deeper learning about their models across those submissions. The findings of this study suggest that continuing model development through simulation development gives student teams an opportunity to either further improve or explore their models. These sequential projects give teams with low-quality models more time for development and application within a simulation, and give teams with high-quality models an opportunity to explore ideas beyond the original scope of the MEA.
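
    As a rough illustration of the “complete simulation” criterion described above (a model, front-ended with user-input handling and output visualization), here is a minimal hypothetical sketch. The model, its parameters, and its functional form are invented placeholders and bear no relation to the teams’ actual quantum dot solar cell models.

```python
# Minimal sketch of the "complete simulation" structure the study describes:
# a mathematical model at the core, user-supplied inputs at the front, and
# a visualization of the outputs. The model is a made-up placeholder.
import numpy as np
import matplotlib.pyplot as plt

def model(efficiency: float, area_m2: float, irradiance: np.ndarray) -> np.ndarray:
    """Placeholder model: power output as a function of irradiance."""
    return efficiency * area_m2 * irradiance

# User-input handling (the "front end").
efficiency = float(input("Cell efficiency (0-1): "))
area_m2 = float(input("Panel area in m^2: "))

# Output visualization.
irradiance = np.linspace(0, 1000, 100)  # W/m^2
plt.plot(irradiance, model(efficiency, area_m2, irradiance))
plt.xlabel("Irradiance (W/m^2)")
plt.ylabel("Power output (W)")
plt.title("Placeholder simulation output")
plt.show()
```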

    The role of pedagogical tools in active learning: a case for sense-making

    Evidence from the research literature indicates that both audience response systems (ARS) and guided inquiry worksheets (GIW) can lead to greater student engagement, learning, and equity in the STEM classroom. We compare the use of these two tools in large-enrollment STEM courses delivered in different contexts, one in biology and one in engineering. The instructors studied utilized each of the active learning tools differently. In the biology course, ARS questions were used mainly to check in with students and assess whether they were correctly interpreting and understanding worksheet questions. The engineering course presented ARS questions that afforded students the opportunity to apply learned concepts to new scenarios, with the aim of improving students’ conceptual understanding. In the biology course, the GIWs were primarily used in stand-alone activities, and most of the information necessary for students to answer the questions was contained within the worksheet, in a context that aligned with a disciplinary model. In the engineering course, the instructor intended for students to reference their lecture notes and rely on their conceptual knowledge of fundamental principles from the previous ARS class session in order to successfully answer the GIW questions. However, while their specific implementation structures and practices differed, both instructors used these tools to build towards the same basic disciplinary thinking and sense-making processes of conceptual reasoning, quantitative reasoning, and metacognitive thinking.

    Facilitating Teaching And Research On Open Ended Problem Solving Through The Development Of A Dynamic Computer Tool

    Model-Eliciting Activities (MEAs) are realistic open-ended problems set in engineering contexts; student teams draw on their diverse experiences both in and out of the classroom to develop a mathematical model explicated in a memo to the client. These activities have been implemented in a required first-year engineering course with enrollments of as many as 1,700 students in a given semester. The earliest MEA implementations had student teams write a single solution to a problem in the form of a memo to the client and receive feedback from their TA. For research purposes, a simple static online submission form, a static feedback form, and a single database table were quickly developed. Over time, research revealed that students need multiple feedback, revision, and reflection points to address misconceptions and achieve high-quality solutions. As a result, the toolset has been expanded, patched, and re-patched by multiple developers to increase both the functionality and the security of the system. Because the class is so large and the implementation sequence involved is not trivial, the technology has become necessary to successfully manage the implementation of MEAs in the course. The resulting system has become a kluge of bloated, inflexible code that now requires a part-time graduate student to manage the deployment of 2-4 MEAs per semester. New functions are desired but are either not compatible with, or too cumbersome to implement under, the existing architecture. Based on this, a new system is currently being developed to allow for greater flexibility, easier expandability, and expanded functionality. The largest feature set being developed for the new system is the suite of administrative tools to ease the deployment process. Other planned features include the ability for students to upload files and images as part of their solutions. This paper will describe the history of the MEA Learning System (MEALS) and the lessons learned about developing custom teaching and research software, and will explore how the development of custom software tools can be used to facilitate the dual roles of teaching and educational research.
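
    The MEALS schema is not given in the text. Purely as a hypothetical sketch of what the “single database table” submission store described above might have looked like, consider the following; the table and column names are invented for illustration.

```python
# Hypothetical sketch of a single-table store for MEA solutions and TA
# feedback. Table and column names are invented; the actual MEALS schema
# is not described in the paper.
import sqlite3

conn = sqlite3.connect("mea_submissions.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS submissions (
        id           INTEGER PRIMARY KEY AUTOINCREMENT,
        team_id      TEXT NOT NULL,
        mea_name     TEXT NOT NULL,
        solution     TEXT NOT NULL,   -- the memo to the client
        feedback     TEXT,            -- TA feedback, filled in later
        submitted_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.commit()
```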

    Open-Ended Modeling Group Projects in Introductory Statics and Dynamics Courses

    Traditionally, the types of problems that students see in their introductory statics and dynamics courses are well-structured textbook problems with a single solution [1]. These types of questions are often seen by students as being somewhat at odds with the more “realistic” challenges that they may face in their design or lab courses. Additionally, in the pandemic-necessitated paradigm of emergency online instruction, methods of assessment beyond traditional exams have received greater emphasis, both as a way of keeping students engaged by giving the material relevance and of ensuring that the work students present is their own when so many solutions are available online.