Frontier: Discipline-Based Education Research to Advance Authentic Learning in Agricultural and Biological Engineering
Discipline-based education research (DBER) is research activity aimed at investigating "learning and teaching in a discipline from a perspective that reflects the discipline's priorities, worldview, knowledge, and practices" for the purpose of producing research-based evidence to improve education in that discipline. DBER arose out of concerns about the quality of post-secondary science education. Physics, chemistry, engineering, biology, geosciences, and astronomy education each have unique DBER histories in the U.S. that date back to the late 1800s or early 1900s, when colleges and university systems were expanding and formalizing. The DBER fields accelerated in the 1950s and 1960s as a result of the Space Race and in subsequent periods of concern about national competitiveness. For engineering education specifically, Mann argued for the recognition of engineering education research as a viable research enterprise for engineering faculty in the Mann report, the first major study of the state of engineering education. In 1955, the "encouragement of experiments in all areas of engineering education" was a top recommendation in the Grinter Report (reprinted in 1994). Each DBER field, including those that have emerged since the original six, such as mathematics, has been on a path to meet the criteria of a full-fledged field of scientific inquiry, as described by Froyd and Lohmann (2014). This means that each field has, or is in the process of acquiring, structure (e.g., national and international societies, academic recognition, and multiple modes of dissemination including archival research journals), distinct research (e.g., knowledge bases, theories, and lines of inquiry), and outcomes (e.g., implications for practice).
. . .
The purpose of this article is to challenge us to build a robust agricultural and biological engineering DBER community for advancing educational practices and engaging in education research. To spur us on, this article uses specific examples of new frontiers in each of the five areas of engineering education research couched in an overarching theme of authentic learning. The five areas of engineering education research are learning systems, diversity and inclusiveness, learning mechanisms, assessment, and epistemologies. Authentic learning refers to learning strategies employed in the classroom that mimic real-world problem-solving in that the learning tasks involve something a practitioner would actually do, application of higher-level thinking skills, natural social interactions in a community, and opportunities for students to direct their own learning. This pedagogical model is selected because most education research with the potential to transform engineering education can be linked to closing the gap between school learning and work in the field.
In the following sections, each area of engineering education research is briefly described. Potentially transformative educational practices or trends in the growing knowledge-base that typifies that area are presented and linked to authentic learning. Potential impacts for agricultural and biological engineering conclude the discussion of each research area.
--Discipline-based education research can provide unique insights for agricultural and biological engineering.
--Authentic learning has the potential to transform teaching practices and student learning.
--Work in the five areas of engineering education research provides a foundation for discipline-specific inquiry.
--An agricultural and biological engineering education research agenda is advised.
Facilitating Teaching And Research On Open Ended Problem Solving Through The Development Of A Dynamic Computer Tool
Model Eliciting Activities (MEAs) are realistic open-ended problems set in engineering contexts; student teams draw on their diverse experiences both in and out of the classroom to develop a mathematical model explicated in a memo to the client. These activities have been implemented in a required first-year engineering course with enrollments of as many as 1700 students in a given semester. The earliest MEA implementations had student teams write a single solution to a problem in the form of a memo to the client and receive feedback from their TA. For research purposes, a simple static online submission form, a static feedback form, and a single database table were quickly developed. Over time, research revealed that students need multiple feedback, revision, and reflection points to address misconceptions and achieve high-quality solutions. As a result, the toolset has been expanded, patched, and re-patched by multiple developers to increase both the functionality and the security of the system. Because the class is so large and the implementation sequence involved is not trivial, the technology has become necessary to successfully manage the implementation of MEAs in the course. The resulting system has become a kludge of bloated, inflexible code that now requires a part-time graduate student to manage the deployment of 2-4 MEAs per semester. New functions are desired but are either not compatible or too cumbersome to implement under the existing architecture. Based on this, a new system is currently being developed to allow for greater flexibility, easier expandability, and expanded functionality. The largest feature set being developed for the new system is the set of administrative tools to ease the deployment process. Other planned features include the ability for students to upload files and images as part of their solutions.
This paper will describe the history of the MEA Learning System (MEALS) and the lessons learned about developing custom teaching and research software, and will explore how the development of custom software tools can be used to facilitate the dual roles of teaching and educational research.
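The evolution described above, from a single database table toward a system that tracks multiple feedback and revision points per team, can be sketched in miniature. The schema below is a hypothetical illustration; the table and column names are invented for this sketch and are not taken from the actual MEALS codebase.

```python
import sqlite3

# Hypothetical minimal data model for MEA submissions and TA feedback.
# A `revision` column supports the multiple feedback/revision/reflection
# points the research showed students need; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE submissions (
    id           INTEGER PRIMARY KEY,
    team_id      INTEGER NOT NULL,
    mea_id       INTEGER NOT NULL,
    revision     INTEGER NOT NULL DEFAULT 1,
    memo_text    TEXT NOT NULL,
    submitted_at TEXT DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE feedback (
    id            INTEGER PRIMARY KEY,
    submission_id INTEGER NOT NULL REFERENCES submissions(id),
    ta_id         INTEGER NOT NULL,
    comments      TEXT NOT NULL
);
""")

# A team submits a first draft, receives TA feedback, then revises.
conn.execute("INSERT INTO submissions (team_id, mea_id, revision, memo_text) "
             "VALUES (1, 101, 1, 'Draft memo to client')")
conn.execute("INSERT INTO feedback (submission_id, ta_id, comments) "
             "VALUES (1, 7, 'Clarify your model assumptions')")
conn.execute("INSERT INTO submissions (team_id, mea_id, revision, memo_text) "
             "VALUES (1, 101, 2, 'Revised memo to client')")

# Latest revision on file for team 1 on MEA 101
latest = conn.execute(
    "SELECT MAX(revision) FROM submissions WHERE team_id = 1 AND mea_id = 101"
).fetchone()[0]
print(latest)  # → 2
```

Separating feedback into its own table, rather than patching columns onto a single submissions table, is the kind of flexibility the redesign aims for.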
A case study of engineering instructor adaptability through evidence of course complexity changes
Use of a wide array of teaching practices and strategies has been shown to improve students' conceptual understanding, appeal to a diverse set of students, and better prepare students for engineering work. Adaptability theory provides a lens for understanding the changes instructors make and can be useful for conceptualizing faculty development going forward. How an instructor's adaptability plays out in the face of new demands lies in the complexity of the courses they teach. Course complexity refers to both the extent of the array of teaching practices/strategies used in a course and the challenge of implementing those practices/strategies. The purpose of this paper is to begin to examine what information is embedded in syllabi that may be used to quantify complexity via a Course Complexity Typology. This work is a case study of a single instructor-course pairing and their course syllabi from multiple semesters.
Student Use of Anchors and Metacognitive Strategies in Reflection
Context: Self-regulation, a skillset involving taking charge of one's own learning processes, is crucial for workplace success. Learners develop self-regulation skills through reflection, where they recognize weaknesses and strengths by employing metacognitive strategies: planning, monitoring, and evaluating. Use of anchors assists learners' engagement in reflection. Purpose or Goal: The purpose of this work was to gain insight into students' use of anchors when reflecting on their learning. The two research questions: (1) To what extent do students link their self-evaluation and learning objective (LO) self-ratings to their reflections? and (2) What dimensions and level of metacognitive strategies do students use in their self-evaluation of and reflections on weekly problem-solving assignments? Methods: Data were upper-division engineering students' anchors (self-evaluations, LO self-ratings) and reflection responses for one assignment. Self-evaluations and reflections were analyzed for the presence of references to LOs. The number of students who linked the anchors to their reflection was tabulated. Additionally, a revised a priori coding scheme was applied to students' written work to determine the type and level of metacognitive strategies employed. Outcomes: Few students linked both anchors to their reflections. Students employed low to medium levels of the metacognitive strategies in their self-evaluations and reflections, even when they linked their anchors and reflections. The evaluating strategy dominated in the self-evaluations, while planning and monitoring dominated in the reflections. Conclusion: Students have limited understanding of the use of anchors to guide their reflection responses. Students' overall level of engagement in the metacognitive strategies indicates a need for formal instruction on reflection.
Reflection Types and Students' Viewing of Feedback in a First-Year Engineering Course Using Standards-Based Grading
Background: Feedback is one of the most powerful and essential tools for learning and assessment, particularly when it provides the information necessary to close an existing gap between actual and reference levels of performance. The literature on feedback has primarily focused on addressing strategies for providing effective feedback rather than aspects of students' readiness to engage with feedback. Purpose/Hypothesis: This study investigated whether reflection, as a routine pedagogical intervention grounded in self-regulated learning theory, promotes the frequency with which students view feedback. Design/Method: A quasi-experimental design was employed to examine the relationship between the use of four different reflection types, as well as no reflection, and students' feedback viewing behaviors in a first-year engineering course that used standards-based grading. Clickstream data were gathered through the learning management system to count the number of times students viewed feedback. The number of feedback views was compared by reflection type using descriptive statistics and a generalized linear model; weekly feedback viewing patterns were examined using time-descriptive graphs and time-series cluster analysis. Results: Findings suggest reflection has the potential to increase the frequency of feedback views. Reflection not only had a positive and significant effect on the number of times students viewed feedback but also resulted in less variability between course sections and instructors when structured reflections made explicit references to feedback. Conclusions: Students need feedback to learn effectively, but many do not view feedback without formal prompting. The authors recommend instructors consistently administer reflections that include explicit pointers to feedback throughout the semester.
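The descriptive step of the comparison above, contrasting feedback-view counts across reflection types, can be sketched as follows. The data values and group labels are invented for illustration; the actual study fit a generalized linear model to LMS clickstream counts. For a count-based GLM (e.g., Poisson) with a single categorical predictor, the fitted rate for each group is simply that group's mean count, which is what this sketch computes.

```python
from collections import defaultdict

# Invented (reflection_type, feedback_view_count) records standing in for
# clickstream data; labels are hypothetical, not the study's actual types.
views = [
    ("no_reflection", 2), ("no_reflection", 1), ("no_reflection", 3),
    ("structured_with_pointers", 6), ("structured_with_pointers", 8),
    ("structured_with_pointers", 7),
]

by_type = defaultdict(list)
for rtype, count in views:
    by_type[rtype].append(count)

# Mean views per reflection type = fitted Poisson rate per group
rates = {rtype: sum(c) / len(c) for rtype, c in by_type.items()}
print(rates["no_reflection"])             # → 2.0
print(rates["structured_with_pointers"])  # → 7.0
```

A full analysis would add significance testing and the per-week time series, but the group-rate comparison is the core quantity being modeled.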
First-Year Effects Of An Engineering Professional Development Program On Elementary Teachers
The ultimate objective of teacher professional development (TPD) is to deliver a positive impact on students' engagement and performance in class through teacher practice, by improving teachers' content and pedagogical content knowledge and changing their attitudes toward the subject being taught. However, compared to other content areas, such as mathematics and science, relatively few engineering TPD programs have been developed, and there has been a lack of research on the effective practice of TPD for K-12 engineering education. As a part of a five-year longitudinal project, this study reports the first-year effect of TPD offered by the Institute for P-12 Engineering Research and Learning (INSPIRE) at Purdue University on elementary teachers integrating engineering. Thirty-two teachers of second through fourth grade from seven schools attended a one-week intensive Summer Academy and integrated engineering lessons throughout the year. Based on a pre- and post-test research design, multiple measures were utilized to examine changes in teachers' knowledge and perceptions of engineering and variations in knowledge and perceptions by school and teacher characteristics. Overall, teachers were satisfied with the engineering TPD program, significantly increased their engineering design process knowledge, and became more familiar with engineering. While teachers' knowledge about engineering did not vary by school and teacher characteristics, some aspects of teachers' perceptions regarding engineering integration and their practice differed by school and teacher characteristics.
Selecting Effective Examples to Train Students for Peer Review of Open-Ended Problem Solutions
Background Students conducting peer review on authentic artifacts require training. In the training studied here, individual students reviewed (scored and provided feedback on) a randomly selected prototypical solution to a problem. Afterwards, they were shown a side-by-side comparison of their review and an expert's review, along with prompts to reflect on the differences and similarities. Individuals were then assigned a peer team's solution to review.
Purpose This paper explores how the characteristics of five different prototypical solutions used in training (and their associated expert evaluations) impacted students' abilities to score peer teams' solutions.
Design/Method An expert rater scored the prototypical solutions and 147 student teams' solutions that were peer reviewed using an eight-item rubric. Differences between the scores assigned by the expert and a student to a prototypical solution and an actual team solution were used to compute a measure of the student's improvement as a peer reviewer from training to actual peer review. ANOVA testing with Tukey's post-hoc analysis was conducted to identify statistical differences in improvement based on the prototypical solutions students saw during the training phase.
Results Statistically significant differences in students' peer-review scoring error were found between students who saw high-quality and students who saw low-quality prototypical solutions during training. Specifically, a lower-quality training solution (and associated expert evaluation) resulted in more accurate scoring during peer review.
Conclusions While students typically ask to see exemplars of "good solutions", this research suggests that, for the purpose of preparing students to score peers' solutions, there is likely greater value in students seeing a low-quality solution and its corresponding expert review.
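The one-way ANOVA step described in the Design/Method section can be sketched from first principles. The improvement scores and group labels below are invented for illustration (the study had 147 teams and five prototypical solutions); a real analysis would follow the F-test with Tukey's post-hoc comparisons, e.g. via statsmodels' pairwise_tukeyhsd.

```python
# Invented peer-review improvement scores grouped by which (hypothetical)
# prototypical training solution a student saw; only two groups are shown
# to keep the F-statistic computation easy to follow.
groups = {
    "low_quality_prototype":  [0.80, 0.90, 0.70, 0.85],
    "high_quality_prototype": [0.30, 0.40, 0.35, 0.25],
}

all_scores = [s for g in groups.values() for s in g]
grand_mean = sum(all_scores) / len(all_scores)

# Between-group sum of squares: group sizes times squared mean deviations
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
# Within-group sum of squares: squared deviations from each group's mean
ss_within = sum(sum((s - sum(g) / len(g)) ** 2 for s in g)
                for g in groups.values())

df_between = len(groups) - 1
df_within = len(all_scores) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 1))
```

A large F relative to the F(df_between, df_within) distribution indicates that improvement differs by training solution, which is the pattern the study reports.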
Meaningful Learner Information for MOOC Instructors Examined Through a Contextualized Evaluation Framework
Improving STEM MOOC evaluation requires an understanding of the current state of STEM MOOC evaluation, as perceived by all stakeholders. To this end, we investigated what kinds of information STEM MOOC instructors currently use to evaluate their courses and what kinds of information they feel would be valuable for that purpose. We conducted semi-structured interviews with 14 faculty members from a variety of fields and research institutions who had taught STEM MOOCs on edX, Coursera, or Udacity. Four major themes emerged related to instructors' desires: (1) to informally assess learners as an instructor might in a traditional classroom, (2) to assess learners' attainment of personal learning goals, (3) to obtain in-depth qualitative feedback from learners, and (4) to access more detailed learner analytics regarding the use of course materials. These four themes contribute to a broader sentiment expressed by the instructors that they have access to a wide variety of quantitative data for use in evaluation but are largely missing the qualitative information that plays a significant role in traditional evaluation. Finally, we provide our recommendations for MOOC evaluation criteria, based on these findings.
Characteristics of Feedback that Influence Student Confidence and Performance during Mathematical Modeling
This study focuses on characteristics of written feedback that influence students' performance and confidence in addressing the mathematical complexity embedded in a Model-Eliciting Activity (MEA). MEAs are authentic mathematical modeling problems that facilitate students' iterative development of solutions in a realistic context. We analyzed 132 first-year engineering students' confidence levels and mathematical model scores on an MEA (pre- and post-feedback), along with the teaching assistant feedback given to the students. The findings show several examples of affective and cognitive feedback that students reported using to revise their models. Students' performance and confidence in developing mathematical models can be increased when they are in an environment where they iteratively develop models based on effective feedback.
Undergraduate and Graduate Teaching Assistants' Perceptions of Their Responsibilities - Factors that Help or Hinder
Effective teaching assistants (TAs) are crucial for effective student learning. This is especially true in science, technology, engineering, and mathematics (STEM) programs, where TAs are enabling large programs to transition to more student-centered learning environments. To ensure that TAs are able to support these types of learning environments, their perspectives on training, their abilities, and other work-related aspects must be understood. In this paper, a survey that was created based on interviews conducted with eight TAs is discussed. The survey has four primary categories of content that are critical for understanding TAs' perspectives: (1) background, (2) motivation, (3) training, and (4) grading and feedback. The research team is first utilizing this survey at Purdue University to test the validity and reliability of the instrument, as well as to identify ways to improve the experiences and effectiveness of the First-Year Engineering Program's TAs' support system, training, hiring process, and any other relevant components of the infrastructure. The more generalizable goal of this research is to further develop this survey so it can be used by any STEM program as a diagnostic tool for identifying opportunities to enhance TA support systems and thereby improve student learning.