7 research outputs found

    Diagnosing undergraduate biology students' experimental design knowledge and difficulties

    Experimental design is an important component of undergraduate biology education as it generates knowledge of biology. This dissertation addresses the challenge undergraduate educators face in assessing knowledge of experimental design in biology by examining knowledge of, and difficulties with, experimental design among first-year undergraduate biology students at Purdue. The first chapter reviews several recent reports that highlight the necessity of increasing understanding of the experimental research process as a core scientific ability (e.g., AAAS, 2011; AAMC-HHMI, 2009; NRC, 2007). Despite its importance, there is limited information about what students actually learn from designing experiments. In the second chapter, the development and validation of a Rubric for Experimental Design (RED) was informed by a literature review and empirical analysis of thousands of undergraduate biology students' responses to three published assessments. The RED is a useful probe for five major areas of experimental design abilities: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. The third chapter presents an original 'Neuron Assessment' based on a current research problem related to a disease caused by defective movement of mitochondria in neurons. This assessment provides the necessary background information and figures to examine knowledge of experiments through representations and experimental design concepts. A case study method with oral interviews was used to investigate interactions among three factors: conceptual knowledge (C), reasoning skills (R), and modes of representation (M). Findings indicate the usefulness of the 'Neuron Assessment' for probing knowledge and difficulties in the areas characterized by the RED. The fourth chapter examines evidence from the case study participants' written responses to paper-and-pencil tests to validate the 'Neuron Assessment' as a diagnostic tool for the RED areas. In comparison to the published assessments that formed the basis for development of the RED, findings with the 'Neuron Assessment' provide strong evidence for its validity as a probe to distinguish expert and student knowledge from difficulties with experimentation concepts and representations. In summary, a mixed methods approach was used to characterize undergraduate biology students' knowledge and difficulties with experimental design. Findings from this dissertation illuminate knowledge of experimental design at the undergraduate level and open up several new avenues for improved teaching and research on how to evaluate learning about the experimental basis for understanding biological phenomena.

    Guidelines to Avoid Typical Difficulties According to the Rubric for Experimental Design (RED)

    Experimental design is an important component of undergraduate biology education as it generates knowledge of biology. Despite its importance, there is limited information about what students actually learn from designing experiments. Dasgupta et al. (2014) reported on the development and validation of a Rubric for Experimental Design (RED), informed by a literature review and empirical analysis of thousands of undergraduate biology students’ responses to three published assessments. The RED is a useful probe for five major areas of experimental design abilities: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. This handout puts the RED into a format that is useful for students.

    Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 years ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design.
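    The five RED areas named above lend themselves to a simple coding scheme for tallying where student responses show difficulties. The sketch below is purely illustrative: the area names come from this abstract, but the response-coding format, function name, and example data are hypothetical and are not part of the published rubric.

```python
# Illustrative sketch only: the five RED areas come from the abstract above, but the
# response-coding format and example data below are hypothetical, not the published rubric.
from collections import Counter

RED_AREAS = [
    "variable properties of the experimental subject",
    "manipulated variables",
    "measurement of outcomes",
    "accounting for variability",
    "scope of inference",
]

def tally_difficulties(coded_responses):
    """Count how many coded student responses show a difficulty in each RED area.

    Each response is a dict mapping a RED area to True if a difficulty was coded
    for that area (a hypothetical coding format).
    """
    counts = Counter({area: 0 for area in RED_AREAS})
    for response in coded_responses:
        for area in RED_AREAS:
            if response.get(area, False):
                counts[area] += 1
    return counts

# Example with invented codings for three student responses
sample_codings = [
    {"accounting for variability": True, "scope of inference": True},
    {"manipulated variables": True},
    {"accounting for variability": True},
]
print(tally_difficulties(sample_codings))
```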

    Development of the Neuron Assessment for Measuring Biology Students’ Use of Experimental Design Concepts and Representations

    Researchers, instructors, and funding bodies in biology education are unanimous about the importance of developing students’ competence in experimental design. Despite this, only limited measures are available for assessing such competence development, especially in the areas of molecular and cellular biology. Also, existing assessments do not measure how well students use standard symbolism to visualize biological experiments. We propose an assessment-design process that 1) provides background knowledge and questions for developers of new “experimentation assessments,” 2) elicits practices of representing experiments with conventional symbol systems, 3) determines how well the assessment reveals expert knowledge, and 4) determines how well the instrument exposes student knowledge and difficulties. To illustrate this process, we developed the Neuron Assessment and coded responses from a scientist and four undergraduate students using the Rubric for Experimental Design and the Concept-Reasoning Mode of representation (CRM) model. Some students demonstrated sound knowledge of concepts and representations. Other students demonstrated difficulty with depicting treatment and control group data or variability in experimental outcomes. Our process, which incorporates an authentic research situation that discriminates levels of visualization and experimentation abilities, shows potential for informing assessment design in other disciplines.

    Board 121: Using Tutor-led Support to Enhance Engineering Student Writing for All

    Writing Assignment Tutor Training in STEM (WATTS) is part of a three-year NSF IUSE grant with participants at three institutions. This research project seeks to determine to what extent students in the WATTS project show greater writing improvement than students using writing tutors not trained in WATTS. The team collected baseline, control, and experimental data. Baseline data included reports written by engineering and engineering technology students with no intervention to determine if there were variations in written communication related to student demographics and institutions. Control data included reports written by students who visited tutors with no WATTS training, and experimental data included reports written by students who visited tutors who were WATTS-trained. Reports were evaluated by the research team using a slightly modified version of the American Association of Colleges and Universities (AAC&U) Written Communication VALUE Rubric. Baseline data assessment also provided an opportunity to test the effectiveness of the rubric. This paper presents findings from the analysis of the control and experimental data to determine the impact of WATTS on student writing in lab reports. An aggregate score for each lab report was determined by averaging the reviewer scores. An analysis was run to determine whether there was a statistical difference among the baseline, control, and experimental pre-tutoring lab report rubric scores for each criterion and for total scores; there was not a statistically significant difference. The research team ran a Wilcoxon signed-rank test to assess the relationship between control and experimental aggregate rubric scores for each criterion. The preliminary analysis of the control and experimental data shows that the WATTS intervention has a positive, statistically significant impact on written communication skills regardless of the campus student demographics. Since WATTS has been shown to be a low-cost, effective intervention to improve engineering and engineering technology students’ written communication skills at these participating campuses, it has potential use for other institutions to positively impact their students’ written communication.
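    As a rough illustration of the paired analysis described above, the sketch below runs a Wilcoxon signed-rank test on pre- vs. post-tutoring aggregate rubric scores using SciPy. The scores are invented placeholder values, not the study's data; the aggregate score is taken to be the mean of the reviewer scores, as the abstract describes.

```python
# Hypothetical sketch of the paired, non-parametric comparison described above.
# The rubric scores are invented placeholder values, not the study's data.
from scipy.stats import wilcoxon

# Aggregate rubric score per lab report = mean of reviewer scores (per the abstract);
# the values here are illustrative only.
pre_tutoring  = [2.0, 2.5, 3.0, 2.0, 2.5, 3.5, 2.0, 3.0, 2.5, 3.0]
post_tutoring = [2.5, 3.0, 3.5, 2.5, 3.0, 4.0, 3.0, 3.5, 3.5, 4.0]

# Wilcoxon signed-rank test on the paired pre/post aggregate scores
stat, p_value = wilcoxon(pre_tutoring, post_tutoring)
print(f"Wilcoxon statistic = {stat:.2f}, p = {p_value:.4f}")
```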

    Board 317: Improving Undergraduate STEM Writing: A Collaboration Between Instructors and Writing Center Directors to Improve Peer-Writing Tutor Feedback

    Undergraduate STEM writing skills, especially in engineering fields, need improvement. Yet students in engineering fields often do not value writing skills and underestimate the amount of writing they will do in their careers. University writing centers can be a helpful resource, but peer writing tutors need to be prepared for the differences between writing for the humanities and writing in STEM fields. The Writing Assignment Tutor Training in STEM (WATTS) model is designed to improve tutor confidence and student writing. In this innovative training, the writing center supervisor collaborates with the STEM instructor to create a one-hour tutor training where the tutors learn about the assignment content, vocabulary, and expectations. This multidisciplinary collaborative project builds on previous investigative work to determine the impact of WATTS on students, tutors, and faculty and to identify its mitigating and moderating effects. Data have been collected and analyzed from pre- and post-training surveys, interviews, and focus groups. In addition, the project studies WATTS effects on student writing pre- and post-tutoring. The team will use these results to develop a replicable, sustainable model for future expansion to other institutions and fields. By systematically collecting data and testing WATTS, the investigators will be able to identify its mitigating and moderating effects on different stakeholders and contribute valuable knowledge to STEM fields. This approach assesses the elements of the model that have the most impact and the extent to which WATTS can be used to increase collaboration between engineering instructors and writing centers. The project enables the investigators to expand WATTS to additional engineering courses, test key factors with more instructors, refine the process, and position WATTS for dissemination to a broad audience. As the cost of higher education rises, institutions are pressured to graduate students in four years and engineering curricula are becoming more complex. WATTS presents an economical, effective method to improve student writing in the discipline. Several factors indicate that it has the potential for broad dissemination and impact and will provide a foundation for a sustainable model for future work, as instructors become trainers for their colleagues, allowing additional ongoing expansion and implementation. WATTS serves as a model for institutions (large or small) to capitalize on existing infrastructure and resources to achieve large-scale improvements to undergraduate STEM writing while increasing interdisciplinary collaboration and institutional support.

    Replication of a Tutor-Training Method for Improving Interaction Between Writing Tutors and STEM Students

    Get PDF
    The improvement of tutor training programs can impact the important work of writing centers. Tutors often feel less comfortable tutoring in genres different from their own discipline. A previous study introduced an assignment-specific tutor training model to improve writing center tutoring sessions between engineering students and writing tutors. The results of the previous study indicated that the model was a valuable addition to the resources available to engineering students. This model has now been replicated at two universities to assess the potential for wider dissemination. Preliminary data analysis suggests a relationship between initial tutor rating of student work, student perceptions of tutoring, and tutor perception of student engagement in the tutorial. Plans for future research include continued replication and expansion to test larger sample sizes, analysis of impact within and adaptations for other STEM areas, and continued study of the impact on tutoring team projects.